Resources & CLE
- AI-Powered Legal Aid Initiatives
- Case Study: AI Chatbots in Court Self-Help Centers
- Research: Impact of AI on Pro Bono Services
- Guide: Implementing AI in Legal Aid Organizations
CLE Resources
- Online Courses
- Webinars
- Certification Programs
AI and Access to Justice: Resources for Texas Attorneys
Overview: The following is a curated list of high-quality, practical resources on how artificial intelligence (AI) is being used to improve Access to Justice (A2J). It includes educational materials (guides, reports, research) to build your understanding, real-world case studies of AI in legal aid and pro bono, best practice guidelines for implementation, examples of AI-driven tools for common legal aid tasks, and ethical considerations specific to A2J. All sources are from credible organizations (academic institutions, bar associations, nonprofits, and reputable legal tech entities). This guide is structured with clear headings and bullet points for easy scanning by busy Texas attorneys.
Educational Materials and Reports on AI & Access to Justice
- “Generative AI and Legal Aid: Results from a Field Study…” (2024) – Colleen V. Chien & Miriam Kim. This law review article (Loyola L.A. Law Review) documents the first field study of legal aid lawyers using generative AI. Ninety-one attorneys were given access to AI tools (like GPT-based assistants) for up to two months. Key findings: 90% reported increased productivity and 75% planned to continue using AI ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ) ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ). Participants managed risk by focusing on lower-stakes tasks (summarizing documents, preliminary research, drafting form letters, translating legal jargon into plain language) ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ). Notably, attorneys who received extra training and support (“AI concierge” guidance) had significantly better outcomes in adoption and effectiveness (Talk Justice, an LSC Podcast: Results from the First Field Study of AI for Legal Aid | LSC - Legal Services Corporation: America's Partner for Equal Justice) (Talk Justice, an LSC Podcast: Results from the First Field Study of AI for Legal Aid | LSC - Legal Services Corporation: America's Partner for Equal Justice). The authors suggest viewing AI as an augmenting tool (not a replacement) for lawyers and provide a database of 100 AI use cases with sample prompts/output for legal aid settings ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ) (Talk Justice, an LSC Podcast: Results from the First Field Study of AI for Legal Aid | LSC - Legal Services Corporation: America's Partner for Equal Justice). (This study is a rich resource for evidence-based insights on incorporating AI in legal services.)
- ABA Formal Opinion 512 on Generative AI (July 2024) – The American Bar Association issued ethics guidance on lawyers’ use of generative AI (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). It emphasizes that attorneys must maintain competence with these tools, which includes understanding AI’s limitations and staying updated on technology (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). Lawyers must ensure the accuracy and reliability of AI outputs through oversight and cannot delegate responsibility to an algorithm. The opinion also underscores confidentiality – avoid inputting sensitive client data into AI tools without safeguards – and the importance of transparency with clients about AI use (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). It advises supervisory lawyers to set policies for AI use in their firms and train staff on ethical use (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). (Texas attorneys should be aware of this national standard when evaluating AI tools.)
- State Bar Task Force Reports (Texas & Others) – Several state bar associations have studied AI’s impact on legal practice with a focus on A2J:
- Texas: The State Bar of Texas’s Taskforce for Responsible AI in the Law (TRAIL) released an Interim Report (late 2023) highlighting AI’s potential to enhance access to justice for low-income people. It recommends supporting legal aid and pro bono providers in adopting AI technologies, while also calling for robust ethical guidelines and data security measures (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). The report encourages expanding AI education (CLE programs) for lawyers and even exploring new laws or “sandboxes” to facilitate responsible innovation (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- Illinois: A 2023 report by the Illinois State Bar’s AI Task Force noted AI can increase efficiency and reduce costs to make legal services more accessible. It provocatively suggested that law firms may need to move away from the billable-hour model (which disincentivizes efficiency gains from AI) toward value-based fees so that cost savings from AI actually benefit clients (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- New York: The NY State Bar’s Task Force on AI (Apr 2024) urged clear guidelines for AI use, stressing ethics (competence, transparency, confidentiality, supervision) in adoption (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). It specifically acknowledged generative AI’s promise for pro bono: helping with client intake, legal research, document prep, and language translation to better serve underserved communities (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). It also warned of accuracy issues and high implementation costs that legal aid groups must manage to avoid widening the justice gap.
- Minnesota: The Minnesota State Bar (June 2024) examined LLMs, Unauthorized Practice of Law, and A2J. The working group’s report noted that tools like ChatGPT could help self-represented litigants fill forms, retrieve legal info, and get explanations in plain language (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). To balance innovation with protection, they recommend an “Access to Justice legal sandbox” – a safe regulatory space to pilot AI legal services under oversight, ensuring they help the public without running afoul of UPL rules (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- Stanford Legal Design Lab – AI & Access to Justice Initiative – Stanford Law’s Legal Design Lab is running an ongoing AI+A2J Initiative that produces research and practical tools. Notable resources include:
- AI for Legal Help “Design Workbook” – a guide to planning AI pilots in legal services (AI & Access to Justice Initiative – Justice Innovation) (Design Workbook for Legal Help AI Pilots – Justice Innovation). It walks users through scoping an AI project: identifying use cases and workflows, deciding which legal tasks AI should or shouldn’t do, mapping user personas (who will use or be affected by the tool), data and training needs, and brainstorming risks (like bias or bad legal advice) with mitigation plans (Design Workbook for Legal Help AI Pilots – Justice Innovation) (Design Workbook for Legal Help AI Pilots – Justice Innovation). It also suggests benchmarks for success (accuracy, efficiency vs. current processes) and next steps for prototyping and evaluation (Design Workbook for Legal Help AI Pilots – Justice Innovation) (Design Workbook for Legal Help AI Pilots – Justice Innovation). This workbook reflects a responsible design approach in three stages: (1) Design & policy research, (2) Tech prototyping with evaluation, and (3) Careful piloting in the field (Design Workbook for Legal Help AI Pilots – Justice Innovation). (A great planning resource if you’re considering building or adopting an AI tool for legal aid.)
- Research Summaries & Courses – The initiative shares findings from user-testing AI legal advice, hosts an AI and Legal Help class (with public reports), and runs a monthly AI & A2J Research x Practice seminar bringing together scholars and practitioners (AI & Access to Justice Initiative – Justice Innovation). Check their website for a reading list of publications on AI in A2J (AI & Access to Justice Initiative – Justice Innovation) (AI & Access to Justice Initiative – Justice Innovation) – it includes academic papers and reports by various groups.
- Access to Justice Tech Summit Materials – Legal innovation conferences often provide useful material:
- For instance, the LSC (Legal Services Corporation) “AI for Legal Aid” Summit (2023) gathered 300+ legal aid staff from 47 states to discuss AI opportunities (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Use cases presented there showed how generative AI can improve efficiency in legal aid offices (e.g., reading large case files or batches of contracts in minutes) while cautioning that human review remains essential (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Summaries of the summit (such as a Thomson Reuters Institute article) highlight “a generational opportunity to close the justice gap” with AI if used wisely (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Topics covered included speeding up routine paperwork, automating parts of intake, and even back-office uses like grant writing and HR support (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute).
- Tip: Watch for recorded webinars or reports from organizations like LSC, the ABA Center for Innovation, and state Access to Justice commissions. For example, the ABA Center’s Task Force on Law & AI has an “AI and Access to Justice” issues page noting that AI could dramatically improve legal aid efficiency and provide legal info to the public – but only if accuracy, privacy, and bias issues are addressed (Highlight of the Issues).
Case Studies: AI in Action for Legal Aid and Pro Bono
Concrete examples help illustrate how AI tools are being leveraged by legal services organizations (LSOs) and pro bono programs. Here are several case studies from different contexts:
- Automating Expungement Petitions (Tennessee) – The Legal Aid Society of Middle Tennessee and the Cumberlands developed an AI-assisted workflow to clear criminal records for eligible clients. They used a generative AI (ChatGPT) to read through anonymized criminal history data and flag which charges were eligible for expungement, outputting the results into a spreadsheet (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). Those results fed into a document automation system to generate the expungement petitions automatically (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). The impact was striking: at one clinic, the AI helped the team expunge 324 charges for 98 people in a single day, a volume of work that “would have taken much longer without automation.” (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). This dramatically improved the speed of restoring clients’ rights (important for employment and housing opportunities) (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). Attorneys were initially hesitant about AI, but the project showed that automation can free up lawyers’ time for client interaction while the machine handles tedious form-filling (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). Crucially, the petitions were still reviewed by attorneys before filing, preserving quality control (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). (Source: Case study via Thomson Reuters Institute (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute) (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute).)
- “LIA” Legal Information Assistant Chatbot (North Carolina) – Legal Aid of North Carolina’s Innovation Lab created an AI chatbot named LIA to provide 24/7 legal information to the public (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). Available on their website, LIA answers common questions on topics like housing, family law, and consumer rights in plain language (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). The goal is to assist people who may not get to a lawyer, especially in rural areas or during off-hours (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). LIA was built in partnership with a legal tech company (LawDroid) and trained on the organization’s own legal knowledge base to ensure accurate, jurisdiction-specific answers (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). The team undertook extensive testing with law students and actual clients before launch (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). Within five months of launch, their “Get Help” web page saw over 95,000 views, including 20,000+ interactions with housing resources via the chatbot (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). The chatbot handles routine queries (“What can I do if my landlord won’t make repairs?”), which reduces the burden on intake staff, allowing human lawyers to focus on more complex cases (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute) (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). It’s continually updated and monitored by staff, following a human-in-the-loop approach to maintain quality (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). (Source: Case profile in Thomson Reuters (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute) (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute).)
- AI-Assisted Case Review for Innocence Claims (California) – The California Innocence Project (based at San Diego) deals with large volumes of case files from inmates claiming wrongful conviction. They adopted an AI-powered document analysis tool to speed up case reviews (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). Previously, attorneys and students manually sifted through boxes of trial transcripts and evidence (often thousands of pages). With the AI tool, they can upload a lengthy case file and get an automatic summary or outline of its contents (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). More impressively, the AI can answer specific questions about the case file. For example, attorneys can ask, “What was the defendant’s age at the time of trial and was there any evidence of intellectual disability?” – and the system will search the text and provide the answer, even flagging inconsistencies in testimony (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). This acts like a research assistant that works much faster than humans. In one instance, the team asked the AI to suggest lines of questioning for an arson expert, and it generated a solid starting list in seconds (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). All outputs are reviewed by attorneys (the project emphasizes that “lawyers should double-check every response” the AI gives (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute)). The AI has cut days or weeks off the triage phase for innocence cases. Leaders of the project (now at a spin-off called The Innocence Center) estimate that if they’d had this technology earlier, they could have freed one client a decade sooner than happened in reality (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). This case shows how AI can comb through complex records to spot important details, giving attorneys more bandwidth to pursue justice for clients. (Sources: TR Institute article (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute); follow-up commentary (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute).)
- Tenant Advocacy Chatbot and Staff Assistant (New York) – Housing Court Answers (HCA), a NYC nonprofit that assists unrepresented tenants, collaborated with technologists from NYU Law and Cornell to deploy two AI tools for housing justice (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute):
- An internal AI assistant for HCA staff: This tool allows hotline workers and desk advocates to type in a tenant’s question and instantly search a curated database of academic legal resources and FAQs. It then provides guided answers or relevant info to help the staff give advice (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). This is critical in a high-pressure environment (a crowded courthouse or hotline) where staff need to respond fast. By simplifying access to the organization’s trove of knowledge, it helps even new staff handle questions with confidence (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute).
- A public-facing FAQ chatbot on HCA’s website: This is a more limited tool that tenants can use on their own to get answers to basic questions about eviction, repairs, and housing law (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). It’s designed to be cautious – sticking to common questions – to avoid giving dangerous advice, but it offers immediate information and self-help resources to tenants who can’t reach a live advocate (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute).
- Development process: The team followed a careful, four-step process to build these tools (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). First, they compiled a corpus of relevant housing law materials (statutes, guides, expert-written content). Second, they trained and tested the AI (a large language model) with human-in-the-loop moderation – staff and experts reviewed the AI’s answers and gave feedback to improve accuracy (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). Third, they launched the internal and external tools for real users. Finally, they monitored usage data and user questions to continually refine the system and also to gather insights on what problems tenants face most (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). An HCA executive noted the data from the chatbot is “gold for advocacy” – by seeing common issues and confusion points, they can push for policy changes (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). Importantly, HCA kept people at the center of this project: from having humans review AI outputs to ongoing testing and iteration with user feedback (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). Executive Director Jenny Laurie said staff are excited that the AI can handle basic questions, freeing humans to tackle complex cases – and even serve as a training aid for new advocates learning NYC’s complicated housing laws (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). (Source: TR Institute case study (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute).)
- Texas Example – Lone Star Legal Aid’s AI Initiatives (Houston) – Lone Star Legal Aid (LSLA), one of the major legal service providers in Texas, is actively developing AI tools to boost A2J:
- Legal Aid Content Intelligence (LACI): an in-house AI-driven system to keep legal information up-to-date (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). LACI monitors changes in laws and regulations and helps automate updates to the organization’s self-help materials and online content (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). (This addresses a common challenge: ensuring forms and legal info are current without constant manual review.)
- AI Chatbots: LSLA is creating three chatbots (with support from LSC grants) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid):
- “Juris” Chatbot – for legal research/Q&A: This bot will analyze legal texts (cases, statutes, etc.) and provide users with accurate, cited responses (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). In other words, it’s like an AI law librarian that not only answers questions but also gives the source for the answer – crucial for trust and for attorney use.
- “LSLAsks” – internal help desk bot: Aimed at staff, this bot will answer internal questions (like HR policies, IT support, or office procedures) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). By instantly handling common staff queries, it lets staff focus on clients instead of administrative tasks.
- “Navi” – client support chatbot: This will guide the public through legal issues by asking the user questions and directing them to resources or referrals (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). Essentially a triage and navigation tool, it will help users figure out what help they need (for example, determining if someone has a simple issue that a form letter might solve, or if they need full representation, then referring accordingly).
- LSLA is using OpenAI’s technology as a backbone and aims to set “new legal aid chatbot development standards” with these projects (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). The effort is open-source-minded (they plan to share tools like LACI with other programs) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). LSLA’s managing attorney noted these AI tools “will empower low-income Texans to better navigate legal challenges, and hopefully lay the foundation for a broader movement in legal aid innovation.” (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). (Source: LSLA press release, Jan. 2025 (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid).)
- Other Noteworthy Examples:
- Foster youth assistance: A project called FosterPower (profiled by Thomson Reuters) uses tech, including AI, to help foster youth understand their legal rights and resources. It’s a niche A2J application aimed at an often-overlooked population.
- Online dispute resolution & courts: Some courts are experimenting with AI (or at least automated decision trees) to help self-represented litigants with small claims or traffic matters. For example, Michigan’s MI-Resolve platform (not AI per se, but relevant to tech and A2J) or British Columbia’s Solution Explorer for housing disputes. (These aren’t pure AI, but as background, they show a trend of tech-facilitated A2J which AI could enhance with natural language capabilities.)
- Law School Initiatives: Law labs (like Suffolk’s LIT Lab or Georgetown’s Iron Tech Lawyer) have produced AI-infused A2J apps in hackathons. E.g., a “Rentervention” chatbot in Chicago helps tenants with eviction defense by generating legal documents and advice scripts (using guided logic and some NLP). These student projects can inspire tools that legal aid groups pick up.
Best Practices for Implementing AI in A2J Settings
When integrating AI into legal aid or pro bono work, it’s vital to do so thoughtfully. Below are best practices distilled from the literature and case studies:
- Start with Clear, Narrow Use Cases: Don’t try to “AI-enable” everything at once. Identify specific pain points or repetitive tasks in your workflow where AI might help. Good starting points are low-risk, supportive tasks – for example, summarizing large documents, converting legal jargon to plain language, or drafting a form letter that a lawyer will finalize ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ). The field study of legal aid lawyers found they gravitated toward these kinds of uses first, which carry less risk if the AI errs ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ) ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ). Solve one problem at a time: maybe begin with an internal tool to speed up research on common questions, or an intake triage bot for a specific legal issue. Pilot one use case, evaluate, and then expand. This incremental approach treats AI adoption as a journey of continuous learning (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute), rather than a single big rollout.
- Choose Reliable, Legal-Specific Tools: The quality of AI outputs depends on the quality of the tool and its data. When possible, opt for professional-grade AI solutions grounded in legal data (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). This might mean using AI services built for law (for example, an AI trained on vetted legal texts or one offered by a reputable legal research company) rather than a generic public chatbot for anything mission-critical. Tools that can cite sources for their answers are especially valuable for legal work – they let you verify and trust the information. In practice, some legal aid orgs have partnered with tech companies or academics to build custom tools (like HCA did with Josef and NYU, or NC Legal Aid with LawDroid) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). If building your own, use your organization’s existing content (manuals, help articles) to train the AI so it’s accurate for your jurisdiction (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). If buying a product, ask vendors about the model’s training data, accuracy rates, and whether it’s been tested for bias on legal queries. Security is part of “reliable” too – ensure the tool will protect client confidentiality (see Ethics section). Finally, keep the user experience in mind: a tool with a friendly interface and good support will gain more adoption among your team.
- Maintain Human Oversight (“Human in the Loop”): No matter how good the AI, in legal matters a human professional must stay involved. All case studies reiterate this. For instance, Tennessee’s expungement project had attorneys review each AI-drafted petition (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute), and the Innocence Project lawyers double-check the AI’s answers against the file (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). This ensures errors or odd suggestions are caught. When deploying an AI system, define checkpoints where human review happens: e.g., a chatbot might answer simple FAQs automatically but refer the user to a human for complex or sensitive questions. Or an AI draft letter is never sent out until a lawyer edits it. This “human in the loop” approach also builds trust – staff will be more willing to use AI if they know they can correct it, and clients are better served when a lawyer puts the final touch on anything affecting their rights (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). In training AI, using human feedback loops (as HCA did) is critical to raise quality (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute). Bottom line: AI can amplify your capabilities, but it’s not a substitute for professional judgment. Treat it as an assistant that works under your supervision.
- Invest in Training and Skill-Building: To harness AI effectively, lawyers and staff need to develop new skills – both in using the tools and in understanding their output. Provide training on how to craft good prompts (questions or instructions) for generative AI, since the way you ask directly affects the answer quality (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). For example, a brief CLE or internal workshop on “prompt engineering” for legal tasks can be very enlightening (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Also train everyone on the limitations of AI: false confidence in AI output is dangerous, so teach staff what the tools can and cannot do. Sharing success stories and use-case examples from peers (like those 100 use cases from the field study) can spark ideas and demystify AI (Talk Justice, an LSC Podcast: Results from the First Field Study of AI for Legal Aid | LSC - Legal Services Corporation: America's Partner for Equal Justice). Some organizations assign an “AI point person” or create an internal working group to experiment and then disseminate knowledge. Continuous learning is key – AI tech evolves quickly, so encourage a culture where staff regularly discuss what they’ve learned, perhaps in team meetings or a Slack channel for AI tips. As one commentator put it, see “AI mastery as a journey” and be prepared to invest time in learning step by step (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Starting small helps; even tasks like “use GPT to draft a difficult email reply” can build familiarity before one moves to more complex legal uses (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). The Pro Bono Institute notes that tailored training greatly enhances the ethical and effective use of AI in pro bono settings (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- Collaborate and Share with the Community: The Access to Justice tech community is very collaborative – many legal aid organizations freely share their code, forms, and experiences. Tap into this! For example, LSLA open-sourced its LACI tool for others to use (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid). The field study’s use-case library is publicly available. Stanford’s AI & A2J initiative runs open seminars. Joining networks like Pro Bono Net or state tech listservs can connect you to colleagues working on similar projects. Consider partnering with local law schools or tech nonprofits; they often seek real-world projects for students, and you get help building tools or evaluating AI systems. Also, if budget allows, attend conferences like LSC’s Innovations in Tech conference or ABA Techshow where A2J tech is discussed – the sessions and materials are goldmines for best practices. Avoid reinventing the wheel. Chances are, someone has tried an approach to the problem you’re solving – learn from their wins and mistakes. This extends to developing ethical guidelines: as new standards (and possibly regulations) emerge, share policies and protocols with sister organizations. A rising tide lifts all boats in A2J tech, and collaboration can amplify impact while mitigating risks.
- Plan for Responsible Deployment: Implementing AI in a legal context should be done deliberately. A recommended path (echoing the Stanford workbook and several task force reports) is:
- Design & Research: Before coding anything, clearly define the problem, the users, and the goals. Research legal requirements and ethical concerns up front (e.g., if it’s a public info chatbot, confirm it won’t give legal advice that could be seen as UPL). Think about metrics for success (faster filings? More clients served? Fewer staff hours on X task?) (Design Workbook for Legal Help AI Pilots – Justice Innovation) (Design Workbook for Legal Help AI Pilots – Justice Innovation).
- Prototype & Test: Build a pilot version of the tool and test it in-house or in a controlled setting. Use real (but anonymized) scenarios to see how it performs. Have experienced attorneys evaluate the outputs. This is the time to catch bugs, bias, or inaccuracies. Engage a small group of end-users (like a few clients or volunteers) to get feedback on usability.
- Pilot Launch (Controlled): Roll out the AI tool in one office or for one practice area first. Monitor results closely. Gather data – usage rates, error rates, outcomes (did it actually help people?). Maintain a feedback channel for users to report issues or suggestions.
- Review & Iterate: After the pilot, assess whether the tool met the success metrics. Iron out any problems. Only then consider scaling up to more programs or making it an official part of your processes.
This staged approach mirrors what HCA did and aligns with the “legal sandbox” concept (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). In some jurisdictions, regulatory sandboxes (like Utah’s, or ones proposed in MN (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute)) provide a framework to pilot innovative legal services (including AI-driven ones) under supervision, which can be ideal for responsible deployment. Even if no formal sandbox exists in Texas yet, you can create your own internal oversight for pilots – e.g., inform the bar or a ethics advisor of your plan, get buy-in from leadership, and document your evaluation process. By piloting in a controlled way, you can ensure the AI actually helps and doesn’t unintentionally harm clients or your practice.
- Don’t Neglect the “Analog” Processes: Implementing AI is not just about tech – it’s also about people and workflows. Be prepared to update office procedures and staff roles. For example, if an AI intake bot goes live, intake staff roles might shift to focusing on follow-ups for complex cases. Ensure everyone knows how the new flow works. Update your client service protocols (e.g., what is the process if the chatbot can’t answer a question? Who intervenes?). Also, consider accessibility: always have a non-digital alternative for those who can’t use an online tool (phone or walk-in still available). AI should expand options, not cut off traditional access. And plan for maintenance: who will “train” the AI with new data over time or handle technical issues? It’s wise to assign responsibility (or contract with a tech provider) for ongoing support, so the tool remains effective and up-to-date.
By following these best practices – focusing on clear goals, using vetted tools, keeping humans in charge, educating your team, collaborating widely, and rolling out responsibly – Texas attorneys can leverage AI to amplify their impact in access-to-justice work, while minimizing pitfalls.
AI-Driven Tools for Legal Aid: Key Areas and Examples
AI technologies can assist legal aid organizations in a variety of functional areas. Below are categories of tools (with examples) that are particularly relevant. Rather than a list of product names, we focus on what the tools do and practical recommendations for each function:
- Legal Research and Intelligent Q&A: One of the most mature uses of AI in law is speeding up legal research. AI-driven research tools can digest large volumes of text and answer questions in natural language. For legal aid lawyers with limited time, this means quicker access to relevant case law, statutes, or procedural rules. For example, the California Innocence Project’s team uploaded 50-page case files into an AI system that then answered specific questions about the content (like finding a defendant’s age or spotting inconsistent testimony) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) – tasks that would take a human hours can be done in minutes. Today, major legal research platforms have AI assistants: Westlaw and Lexis are integrating GPT-4 style tools to allow queries like “find me the key points from this case” or “draft a memo on X using the cases in my folder.” There are also standalone tools like Casetext’s CoCounsel (built on OpenAI) and Clearbrief (which uses AI to check brief citations and evidence). Practical tip: When using AI for research, always ask for citations and check those sources. Many AI legal research tools will provide the references (some even quote the text from cases directly). This ensures you can verify that the AI’s answer is grounded in real law, avoiding the infamous “hallucinated” cases problem. Used properly, an AI research assistant can dramatically cut research time – freeing you to focus on analysis and strategy – while covering more ground (e.g., scanning 100 cases for a pattern that you might not have had time to read otherwise). Given the huge volume of legal information out there, this is a prime area where AI can help bridge the justice gap by equipping legal aid attorneys with near-instant knowledge (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Just remember that AI doesn’t know when it doesn’t know something – it will give an answer, so the attorney must validate it.
- Document Drafting and Automation: Legal aid lawyers draft countless documents – pleadings, motions, client advice letters, forms, etc. AI can assist by generating first drafts that the attorney can then edit. Generative models (like ChatGPT) can produce a decent draft of, say, an expungement motion or a demand letter based on your prompts: you provide key facts and the type of document, and the AI produces a structured starting point. The field study found many lawyers used AI to whip up initial drafts or outlines, then they fact-checked and refined them ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ). This can save time especially for standard documents (a simple divorce petition, an answer to an eviction lawsuit, etc.). Additionally, AI can be embedded in document assembly tools to enhance them. Legal aid groups already use document automation (e.g., HotDocs or Docassemble templates) for forms – AI can take this further by also drafting narrative sections or suggesting what language to use for unusual situations. Recall the Tennessee expungement project: they combined ChatGPT’s ability to interpret criminal records with a document assembly program to auto-fill court forms (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). That mix of AI + automation is powerful for any scenario where you’re generating similar documents repeatedly. Tools to consider:
- Open-Source: Docassemble is an open-source platform popular in A2J circles for building guided interviews and forms; while it’s rule-based, developers are experimenting with plugging AI into it for dynamic text generation (e.g., to draft a personalized story in a fee waiver request).
- Commercial: Lawyaw (now part of Clio) and Gavel (formerly Documate) are document automation tools; they are beginning to integrate AI to draft clauses or letters from templates. Even MS Word now has an AI drafting assistant (through Office 365 Copilot).
Practical tip: Start by using AI for boilerplate and repetitive content, not the nuanced legal arguments. For example, use it to draft a generic affidavit format, or populate a lease dispute letter with the relevant statute. Always review and tweak the tone and specifics – AI might not know local court preferences or subtle client circumstances. Ensure any client-specific data the AI uses (names, facts) are correct. Over time, you can build a library of AI prompts or template texts for your common documents, which can standardize quality across your team and speed up the drafting process.
- Client Intake, Triage, and Triage Chatbots: Legal aid organizations often have to screen a large number of applicants to determine who is eligible and what help they need. AI can streamline parts of this intake process. For instance, a chatbot intake on your website can ask potential clients a series of questions to gather basic information (contact, income, opposing party for conflicts, issue category). AI can make this interactive and user-friendly – instead of a long form, it feels like a guided conversation. The bot can then either route the data into your case management system or, if the person is clearly not eligible, give a polite referral to a self-help resource. Some legal aid groups have experimented with this: for example, Gideon is a commercial AI chatbot that some pro bono programs use to pre-screen clients and schedule appointments, and it can integrate with calendars. Another use is triage after intake: AI can read a client’s problem description and help classify the case by legal area or urgency, assisting the intake staff in prioritization. 24/7 availability is a big advantage here – someone can start the intake process at midnight via a chatbot and get basic guidance, rather than waiting for phone hours. North Carolina’s LIA (described above) not only gives legal info but could be seen as an intake pre-step: answering questions might lead someone to realize they need to apply for help, and the bot can direct them how to do so (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute) (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute). In Texas, LSLA’s upcoming “Navi” chatbot will guide users through identifying their legal issues and connect them to resources or referrals (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid) – that’s essentially AI doing triage and navigation. Practical tip: If deploying an intake bot, involve your intake staff in its design – they know what to ask and how to ask it. Make sure the bot makes no definitive legal conclusions; its job is to gather info and maybe educate, not to decide eligibility (unless it’s obviously using a simple rule like income cutoff). Always provide a clear path to a human (“Click here to talk to a caseworker”) in case the user gets stuck or prefers human help. Also consider language options – with AI translation (see below), it’s very feasible to offer the intake chatbot in Spanish, Vietnamese, etc., to reach more Texans.
- Guided Legal Information and Self-Help Tools: AI can help laypersons navigate legal processes by providing information and guidance in plain language. We already see this with tools like chatbots for FAQs (e.g., the tenant rights chatbots). Imagine a client-facing AI assistant that can walk someone through preparing a simple will, or tell a tenant step-by-step how to request a repair and with what form letters. These tools overlap with document automation and intake, but the emphasis is on education and empowerment. For example, the NYC tenant tool built with Josef explains to tenants what their rights to repairs are under the housing code and generates a letter they can send to the landlord requesting those repairs (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Another example: the Florida Online Tenant Eviction Answer helper – not AI-driven when it launched, but one can see AI making it more interactive by answering tenant questions during the form prep (“What does ‘service of process’ mean?” – and the AI can explain instantly). The benefit of AI here is its ability to understand a user’s free-text question and give a useful answer or guide them to a relevant form. This can reduce the need for a client to read through dense self-help manuals. Recommendation: If you’re considering such a tool, focus on high-volume issues like housing, family law, or debt collection where many people are pro se. Ensure the content is legally correct and jurisdiction-specific by training the AI on materials from your state (Texas-specific law). Also, implement guardrails: the tool should clarify it’s not a lawyer and is giving general information. Many organizations pair these tools with a live chat or hotline backup – e.g., “If you’re unsure or need further help, here’s how to talk to an attorney.” Done right, these AI helpers can multiply your reach, guiding thousands of people who would otherwise get no help at all.
- Language Translation and Accessibility: Texas has a diverse population with many languages spoken and varying literacy levels. AI language models are adept at translation and interpretation tasks. While not perfect, AI translation has improved vastly and can often capture legal meaning more accurately than older machine translation. For legal aid, this means you can quickly translate client-facing materials (brochures, instructions, even chatbot dialogues) into Spanish, Mandarin, Arabic, etc. It can also mean translating incoming messages or documents from other languages into English for your attorneys. Another use is reading level simplification – AI can rephrase legal text into “plain English” (for example, an AI could take a paragraph of a lease agreement and output a version a 6th grader could understand). This is huge for accessibility: clients with low literacy or cognitive impairments benefit when complex language is broken down. In the field study, translating legalese or English into more accessible language was one of the top use cases lawyers found for AI ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ). Tools like Google Translate or DeepL are common, but now with GPT-4, you can often do both translation and simplification in one step (e.g., “Translate this court form into Spanish at a 5th grade reading level”). Tip: Always have a bilingual staffer or certified translator review important translations if possible, because some legal terms might not directly translate and could confuse. But for quick oral communication or preliminary understanding, AI translation is a game-changer. Also consider using AI to generate multilingual chatbots – one bot that can interact in multiple languages (the AI can detect the language and respond accordingly). This could extend your outreach significantly in communities that speak little English. Just be cautious that some languages may have less accurate AI support, so test it with native speakers.
- Internal Knowledge Management and Efficiency: Beyond client-facing uses, AI can make the organization run more efficiently, which indirectly improves A2J by freeing up resources. Examples:
- Email drafting and administrative writing: AI can draft routine emails, newsletter content, social media posts, grant proposals, or even board reports. In one report, a legal aid leader mentioned an AI tool helped draft a grant application in 30 minutes versus a couple of days normally (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). These “back-office” tasks often eat up staff time. If AI can produce a decent first draft, your staff can refine and complete it in far less time.
- HR and Training: Need interview questions for a new paralegal position? AI can generate a list of relevant questions (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Want to create a first-cut of a performance review based on some input notes? AI can outline it. This doesn’t directly serve clients, but it streamlines management.
- Knowledge Repositories: AI can help surface information in your internal knowledge bases. If your program has a collection of memos or brief bank, an AI search tool could let an attorney query it in plain language (“Do we have a sample brief on Texas DV protective orders?”) and get an answer pointing to the document. This is like having a smarter intranet search. Some organizations are exploring building their own “private GPT” trained on their briefs, templates, and articles – enabling staff to get answers tailored to their internal best practices.
- Case Outcome Prediction / Triage: In experimental stages, some have tried predictive analytics – e.g., an AI predicts which cases are likely to win or which clients have the highest need – to help allocate resources. This is tricky ethically and data-wise (and not yet widely used in legal aid), but as data accumulates, we might see AI advising where limited pro bono time is best spent for maximum impact.
Note: These internal uses might not grab headlines, but they can improve an organization’s capacity. If AI can reduce hours spent on grant writing or clerical work, that’s more hours for client service or more cases closed. As the Thomson Reuters “generational opportunity” article noted, “GenAI can function as a marketing assistant... HR aide... [and] makes quick work of grant applications”, thereby freeing up valuable time for the core mission (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute). Texas programs might consider starting here – it’s often easier to experiment internally (less risk, no client data exposure if done carefully) and the payoff is a more efficient operation.
In summary, AI tools are emerging in virtually every aspect of legal aid work: from client engagement to courtroom preparation to office administration. The key is to adopt those that solve real problems for your practice and to use them in a way that complements your team. Always pilot test new tools and get feedback from your staff and clients on whether it’s actually making things easier. When used appropriately, AI can help legal services offices serve more clients, more effectively, by automating the drudgery and extending your reach (Highlight of the Issues).
(Avoid simply chasing shiny tech; instead, match the tool to a need – whether it’s cutting down a backlog of research, making a form accessible in Spanish, or giving pro se litigants a guiding hand. The references provided can help you explore specific tools in depth.)
Ethical Considerations for AI in Access to Justice
Using AI in legal services raises important ethical and professional questions. Texas attorneys must be mindful of both general AI ethics in law and specific issues that arise in the A2J context. Below are key considerations, with guidance from reputable sources:
- Attorney Supervision, Competence and Duty of Care: No matter how advanced an AI tool is, it cannot practice law; the attorney is ultimately responsible. The ABA Formal Op. 512 makes it clear that using AI is akin to using a junior lawyer or a paralegal – you must supervise its work and ensure it meets the standard of competence (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). This means attorneys should understand the basics of how their AI tools work, their limitations, and be able to explain and justify the AI’s output as if it were their own work. In an A2J setting, where clients might be vulnerable or relying completely on you, it’s even more critical. If an AI draft makes a legal assertion, double-check it. If an AI suggests a client is not eligible for something, verify that before acting on it. In short, do not outsource your professional judgment. Several bar opinions (e.g., Kentucky’s and D.C.’s in 2024) have echoed that tech competence is part of lawyer competence – you have to stay educated on AI to use it responsibly (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). If you lack the knowledge to evaluate an AI tool, get training or expert help before using it in practice.
- Accuracy, Accountability, and Avoiding “Hallucinations”: AI language models sometimes generate incorrect information that sounds confident. This is unacceptable in legal practice if left unchecked. Ethically, providing false or misleading information to a court or client – even accidentally via AI – could breach duties of candor or diligence. To mitigate this:
- Always verify AI-generated content against trusted sources. If an AI tool cites cases, read the cases (at least the relevant parts) to ensure they say what the AI claims. If the AI doesn’t provide cites (like ChatGPT by itself), you must do the follow-up research. There have been high-profile incidents of lawyers sanctioned for filing AI-drafted briefs with fake case citations – a scenario to avoid at all costs.
- Use AI ideally as a supplement to, not a wholesale replacement for, traditional research and analysis. For example, you might use it to get a quick overview or a theory, but then confirm via Westlaw/Lexis and your own reasoning.
- If an AI is summarizing evidence or client stories, double-check it captured details correctly. Minor errors can have major consequences (e.g., summarizing a conviction incorrectly could affect expungement eligibility).
- The PBI’s AI Ethics report notes that accuracy is a paramount concern that affects A2J – if AI outputs are inaccurate, they can mislead those who don’t have other access to legal help (Highlight of the Issues). So we must hold AI tools to a high standard and not trust without verification.
On the flip side, AI can help improve accuracy by catching human errors (like inconsistent facts in a file), but only if used carefully. Ultimately, accountability lies with the lawyer or organization deploying the AI.
- Privacy, Confidentiality, and Data Security: Legal services often deal with sensitive personal information. If you use a cloud-based AI service, consider what information you are feeding it. Many AI tools collect input data by default to improve the model, which could mean confidential client facts being stored on someone else’s server. This raises attorney-client confidentiality issues (Texas Disciplinary Rules and ABA Model Rule 1.6). Mitigation strategies:
- Check the AI tool’s privacy policy. For instance, OpenAI allows users to opt-out of data being used to train the model – you should do that for client data. Some vendors offer on-premises or private cloud instances for higher security.
- Anonymize or redact client-identifying details before inputting into AI if possible. For example, use initials instead of full names, and alter trivial details that don’t affect the legal analysis.
- Ensure any third-party tool is secure (HTTPS, encryption) and the company has robust security practices. A data breach at the AI provider could expose client info.
- Get client consent if appropriate. Some ethics opinions (e.g., Florida’s AI advisory) suggest informing clients about AI use, particularly if sensitive data will be shared with a tool (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- Texas’s TRAIL report emphasizes cybersecurity and privacy due diligence when adopting AI (Microsoft Word - cxx EF 2023 1213 Interim Report to the Board -- Taskforce for Responsible AI in the Law) (Microsoft Word - cxx EF 2023 1213 Interim Report to the Board -- Taskforce for Responsible AI in the Law). Make this a part of your implementation checklist.
- If your AI is internal (like LSLA’s internal chatbots), you have more control. But even internal AI might use external APIs. Work with your IT folks to vet these connections.
In short, treat AI like any cloud vendor under ABA Formal Opinion 477R and others: you must make reasonable efforts to prevent inadvertent or unauthorized disclosure of client information.
- Bias and Fairness: One ethical goal in Access to Justice is to reduce bias and inequality, but there’s a risk that AI could inadvertently worsen biases if not checked. AI models trained on large datasets might reflect societal biases (e.g., in language, or which problems get more “attention” in the data). For instance, an AI might give more comprehensive answers about topics that have lots of online content (say, landlord issues) and shorter answers about issues affecting marginalized groups with less online presence (say, tribal law issues), thus subtly skewing the help available. Or a predictive model could be less accurate for underrepresented demographics if the training data didn’t include enough of them. The ABA Center for Innovation explicitly notes that AI’s utility for A2J “will depend on ... its avoidance of biases” (Highlight of the Issues). To uphold equity:
- Test AI outputs for bias. If you have an AI triage system, periodically review if it’s recommending different outcomes for similarly situated people. If you have a chatbot, ensure it’s giving the same quality of info regardless of how the user writes (grammar, dialect, etc., which could correlate with education or background).
- Include diverse data in training. If you’re training a model on past cases, ensure it’s not just reflecting any biased decisions from those cases. This is tricky, but awareness is step one.
- Many state bar reports (New York, California) emphasize bias: California’s guidance specifically tells lawyers to avoid relying on AI that might be biased and to double-check for fairness (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- For legal aid, an important aspect is accessibility bias: We must ensure AI tools are accessible to people with disabilities (compatibility with screen readers, etc.) and those with limited tech access. Otherwise we create a new bias – serving the tech-savvy better than others.
Always ask: Is this AI tool helping all my clients, or only some? Does it potentially disfavor a group? If yes, adjust or maybe forego that tool.
- Transparency (to Clients, Courts, and the Public): Being transparent about AI use is emerging as an ethical norm. Clients should not be misled into thinking they’re talking to a human if they’re not. If a document was heavily drafted by AI, the attorney might consider disclosing that to the client (especially if it helps explain any unusual wording or to reinforce that the attorney reviewed it). Some jurisdictions might require disclosure to courts in some situations – for example, if AI translation was used for a witness statement, a court might need to know that to decide if a certified translator’s affidavit is needed. We saw in the ethics opinions: the D.C. Bar (April 2024) said lawyers must be mindful that using AI doesn’t absolve them from candor, and they warned about reliability (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute); and Kentucky’s opinion said lawyers should inform clients about AI use, particularly if it affects cost or confidentiality (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). Good practice: If you use an AI tool in a client’s case in any significant way, explain it to the client in plain terms. E.g., “We have a software tool that helps draft documents. It will generate a first draft of your petition which I will then carefully review and edit. This helps us serve you faster, but I will ensure it’s accurate.” Most clients will appreciate the clarity. For public-facing tools, have clear notices. For instance, North Carolina Legal Aid’s website notes that LIA is a chatbot and not a lawyer, to manage user expectations. Transparency builds trust and also protects you ethically, as it shows you’re not concealing the nature of the service.
- Avoiding Unauthorized Practice of Law (UPL) and Ensuring Quality Control: One big question: if an AI directly gives legal advice to a person without lawyer supervision, is that the unauthorized practice of law? In Texas, as in most states, only licensed attorneys can give legal advice. Legal information, however, can be given by non-lawyers. The line can blur. If your AI tool is purely informational (“This form is used for X, here’s how you fill it out”), it’s likely okay. But if it advises (“Given what you told me, you should file for bankruptcy” or drafts a custom legal document for them), it could cross into advice. The Minnesota report specifically tackled this by suggesting a regulated sandbox to allow AI legal advice with oversight (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). Until there’s clearer guidance in Texas, it’s safest to design AI tools that assist lawyers or provide general information, rather than act independently as a lawyer. If you do venture into automated advice (some startups like DoNotPay tried this, with controversy), be extremely careful: maybe have a lawyer review each advice instance behind the scenes, or limit it to filling forms under lawyer direction. Also, monitor the quality: even if not technically UPL, bad legal info can harm users. As a legal aid provider, you have an ethical duty (and often funder requirements) to provide accurate assistance. So treat an AI tool like a junior lawyer clinic intern – you wouldn’t send them to counsel clients alone on day one; you’d supervise and check their work until you’re confident. The same with an AI system: monitor its interactions if it’s client-facing.
- Billing and Funding Ethics: While not immediately obvious, AI can raise issues in how we bill or report our work. For pro bono and legal aid, the issue is less about billing clients (since services are free) and more about funding and grant reporting. If AI makes you super efficient, you might handle more cases in less time. That’s great for service, but consider metrics: Some grants count “hours spent” – if hours drop but outcomes improve, make sure to document the efficiency gain so it’s seen positively. Conversely, if your organization does have any fee-generating work or recovers attorney fees, be cautious: multiple ethics opinions (e.g., Florida’s (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute)) forbid charging twice for the same work (you can’t bill a client for an AI’s time and also for your review time as separate entries, for instance). In California, the guidance explicitly says don’t bill clients for time you saved because you used AI (that would be dishonest) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute). In legal aid, you’re not billing clients, but you shouldn’t overstate time on cases either. It might be wise to shift focus to outcome-based success metrics (forms completed, cases closed) in reporting, rather than hours, if AI significantly cuts the hours needed. The Illinois report about the billable hour (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) is a forward-looking note that the whole legal industry might need to adjust value models – in pro bono, value is service to client, so AI’s efficiency is a pure win as long as quality is maintained.
- Maintaining the Human Touch and Addressing Client Concerns: Ethical lawyering in legal aid isn’t just about rules – it’s also about empathy, listening, and counseling. One risk of AI in A2J is the temptation to let the technology handle more and more, potentially alienating clients who really need human reassurance. Remember that many clients in crisis benefit from speaking to a compassionate person. An AI chatbot might give correct info, but it won’t (currently) replicate a human lawyer’s understanding of a client’s emotional situation. Be mindful of when to pull a client out of the automated system into a personal conversation. For example, if a domestic violence survivor is interacting with an online tool, at some point a warm handoff to a human who can safety-plan and empathize is important. From an access-to-justice perspective, equity and fairness mean appropriate service: those who only need a quick answer can get it from AI, but those who need more hand-holding should still get human assistance. Design your AI systems to recognize their limits. Perhaps set a rule like: if a user asks the same question twice or seems confused, prompt them to call or chat with a person. Or simply make it easy to opt out to human help at any time. This ensures that efficiency gains don’t come at the cost of client care. The ethical principle at stake is not a formal rule but the mission of legal aid – treating clients with dignity and providing meaningful access. AI should enhance, not replace, the human elements of justice. The ABA’s take is optimistic: if done right, AI can make “accurate, understandable legal information readily available” and free lawyers to serve more clients (Highlight of the Issues). We must pair that availability with the understanding that some problems still require a lawyer’s heart and mind, and always will.
- Staying Within Regulatory Bounds and Informed of Changes: Finally, the landscape of AI governance is evolving. New ethics opinions, court rules (some courts now require disclosure if a filing was AI-written), and possibly legislation are on the horizon. For instance, Texas’s task force may produce final recommendations; the ABA may update model rules; other states might implement sandbox programs or certification systems for AI legal tech. Make it a point to stay informed. This might mean checking the Texas Bar Journal or the State Bar’s website for updates (the TRAIL task force work is one to watch). Join forums or CLEs on law practice technology. And if you’re ever unsure about an ethical issue with your AI use, don’t hesitate to use resources like the State Bar ethics hotline or an ethics CLE, because a question that seems novel now (e.g., “Can I allow my chatbot to give this advice?”) may have been addressed by an ethics body by then. In sum, treat AI like any tool under the Texas Disciplinary Rules: use it in a way that upholds your duties of competence, confidentiality, loyalty, and professionalism.
Conclusion: AI offers tremendous promise to improve access to justice by enhancing the capacity of lawyers and empowering the public with information. By educating ourselves with up-to-date resources, learning from real implementations, following best practices, utilizing the right tools, and adhering to ethical principles, Texas attorneys can confidently integrate AI into their legal aid and pro bono work. The result can be a win-win: attorneys get support in handling heavy caseloads, and clients (or self-represented individuals) get faster, more efficient, and sometimes even previously impossible assistance (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (Highlight of the Issues). The resources and examples provided above should serve as a roadmap for exploring AI in your own practice, always with the ultimate goal in mind – bridging the justice gap and serving the community.
Sources:
- Colleen V. Chien & Miriam Kim, Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap, 57 Loy. L.A. L. Rev. 903 (2025) ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ) ( "Generative AI and Legal Aid: Results from a Field Study and 100 Use Ca" by Colleen V. Chien and Miriam Kim ).
- ABA Formal Opinion 512 (2024) – Ethical Implications of Lawyers Using Generative AI (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
- Pro Bono Institute, AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice (Aug. 29, 2024) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute).
-
State Bar of Texas, Task Force on Responsible AI in the Law – Interim Report (Dec. 2023) (AI Ethics in Law: Emerging Considerations for Pro Bono Work and Access to Justice - Pro Bono Institute) (Microsoft Word - cxx EF 2023 1213 Interim Report to the Board -- Taskforce for Responsible AI in the Law).
- Thomson Reuters Institute, AI and Legal Aid: A Generational Opportunity for Access to Justice (Feb. 3, 2025) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute) (AI and legal aid: A generational opportunity for access to justice - Thomson Reuters Institute).
- Thomson Reuters Institute, “AI for Legal Aid” case study series (2024): How to Supercharge Legal Services Organizations (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute) (AI for Legal Aid: How to supercharge legal services organizations - Thomson Reuters Institute); How to Empower Clients in Need (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute) (AI for Legal Aid: How to empower clients in need - Thomson Reuters Institute).
- Legal Services Corporation – Talk Justice Podcast, Episode “Results from the First Field Study of AI for Legal Aid” (June 2024) (Talk Justice, an LSC Podcast: Results from the First Field Study of AI for Legal Aid | LSC - Legal Services Corporation: America's Partner for Equal Justice) (Talk Justice, an LSC Podcast: Results from the First Field Study of AI for Legal Aid | LSC - Legal Services Corporation: America's Partner for Equal Justice).
- Stanford Legal Design Lab – AI & Access to Justice Initiative (Website resources and AI Design Workbook) (Design Workbook for Legal Help AI Pilots – Justice Innovation) (Design Workbook for Legal Help AI Pilots – Justice Innovation).
- Lone Star Legal Aid Press Release, LSLA Secures Three TIG Grants to Revolutionize Legal Services (Jan. 29, 2025) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid) (Lone Star Legal Aid Secures Three TIG Grants to Revolutionize Legal Services - Lone Star Legal Aid).
- ABA Center for Innovation, Task Force on Law and AI – Highlights (AI and A2J) (Highlight of the Issues).
DEEP SEEK Deep Research Example:
AI-Driven Access to Justice: Current Resources and Strategic Implementations for Texas Attorneys
The integration of artificial intelligence (AI) into legal practice represents a transformative opportunity to bridge the justice gap, particularly for underserved populations. In Texas, where approximately 5.7 million low-income residents face civil legal issues annually without adequate representation, AI tools and frameworks offer scalable solutions to democratize access to legal information, streamline workflows for legal aid organizations, and empower self-represented litigants5820. This report synthesizes current resources, case studies, and best practices for Texas attorneys and legal professionals seeking to leverage AI responsibly within the access-to-justice (A2J) ecosystem.
I. Foundational Educational Resources for AI in Legal Practice
A. Institutional Training Programs
The Stanford Legal Design Lab’s AI & Access to Justice Initiative serves as a critical hub for interdisciplinary research, offering free webinars, white papers, and benchmarking tools that evaluate AI performance in legal problem-solving3. Their 2024 Roadmap for AI and Access to Justice outlines priority areas such as eviction defense automation and reentry services, with downloadable training modules on developing ethical AI chatbots for tenant rights320. Complementing this, the National Center for State Courts (NCSC) hosts recurring workshops like Tech for All: Applications of AI to Increase Access to Justice, which provides Texas-specific data on algorithmic bias mitigation in family court document automation117.
For judicial education, the Texas Center for the Judiciary has launched a certification program on AI oversight, covering prompt engineering for court clerks and validation protocols for AI-generated legal summaries916. This aligns with the State Bar of Texas’ mandate requiring 1 hour of AI ethics training within the 15-hour CLE cycle, with materials accessible through the Bar’s online portal1216.
B. Open-Access Research Repositories
The Thomson Reuters AI in Courts Resource Center curates a searchable database of 1,200+ annotated case studies, including real-world implementations like the Alaska Court System’s chatbot that reduced pro se filing errors by 38% in 2024221. Texas-specific datasets are highlighted, such as Harris County’s NLP analysis of 50,000 eviction cases identifying predatory lease clauses17.
Academic contributions include the University of Pittsburgh’s Fairness in AI Legal Summarization Project, which offers open-source algorithms trained on Texas appellate decisions to generate plain-language case summaries4. The Stanford Legal Design Lab further provides downloadable templates for AI-augmented intake forms, tested across 14 legal aid clinics in Bexar and Travis counties322.
II. Operational AI Tools for Legal Service Delivery
A. Document Automation Platforms
Houston.AI, developed by LegalServer, integrates machine learning with Texas-specific legal databases to automate intake processes for 32 legal aid organizations statewide316. The platform’s natural language processing (NLP) engine categorizes housing issues with 94% accuracy, reducing intake time from 45 minutes to 8 minutes per case3. For courtroom applications, Briefpoint (recently integrated with Smokeball) automates discovery response drafting using GPT-4 trained on Texas Rules of Civil Procedure, achieving 100% compliance in beta tests at Lone Star Legal Aid1820.
B. Predictive Analytics and Triage Systems
The Massachusetts Defense for Eviction Model (MADE), adapted for Texas by Baylor Law School, uses logistic regression to predict case outcomes with 82% accuracy, prioritizing high-risk tenants for attorney assignment20. Similarly, Rentervention—a chatbot deployed in Dallas County—combines NLP with Texas Property Code annotations to provide step-by-step eviction defense strategies, used by 4,300 tenants in Q3 20242022.
C. Multilingual Legal Assistants
AVA (Assisted Virtual Advocate), piloted by the Texas Access to Justice Commission, processes Spanish and Vietnamese queries through a hybrid rules-based/generative AI system, achieving LegalServ certification for accuracy in 87% of unemployment appeal scenarios21. The tool cross-references Texas Workforce Commission rulings and generates formatted writs compatible with e-filing systems in all 254 counties1621.
III. Ethical Implementation Frameworks
A. Bias Auditing Protocols
The Texas Ethical AI Checklist, codified in Rule 13 of the Texas Rules of Civil Procedure, requires attorneys to:
- Disclose AI usage in filings per Western District Standing Order 2024-071114
- Validate outputs against the Texas State Law Library’s AI Hallucination Detection Tool, which flags fictitious citations using blockchain-verified case law1416
- Conduct quarterly bias audits using the Stanford AI Bias Assessment Framework, now mandated for LSC-funded organizations38
B. Client Consent and Data Security
Revised Comment 8 to Texas Rule 1.05 mandates written AI use disclosures, including risks of confidential data exposure in public chatbots1216. The Texas Bar’s SecureAI Toolkit provides encrypted instance deployments for Clio and MyCase, ensuring client data remains within Texas-based servers compliant with HB 4 (2024) data localization requirements1216.
IV. Institutional Case Studies
A. Travis County Self-Help Portal
In collaboration with the Texas Legal Services Center, Travis County deployed a GPT-4-powered chatbot that handled 23,000 family law inquiries in 2024, reducing courthouse wait times by 62%1621. The system routes complex issues to volunteer attorneys via integrated videoconferencing, maintaining a 4.8/5 user satisfaction rating21.
B. Texas RioGrande Legal Aid’s Debt Analyzer
TRLA’s custom AI model scans credit reports to identify usurious loans under Texas Finance Code §302.001, recovering $2.1 million in illegal interest charges for 1,400 clients in 202420. The tool interfaces directly with JP Morgan Chase’s API for automated settlement offers, cutting negotiation time from 90 days to 72 hours20.
V. Emerging Challenges and Strategic Recommendations
A. Bridging the Digital Divide
While 68% of Texas legal aid organizations now use AI tools, adoption lags in rural counties due to limited broadband access516. The Texas Access to Justice Commission’s Mobile Justice Unit Initiative—equipping 15 bookmobiles with Starlink terminals and tablet-based AI assistants—has served 12,000 rural residents since 2023, providing a replicable model for other states1621.
B. Regulatory Evolution
Pending legislation (SB 1487) proposes creating an AI Legal Sandbox under the Supreme Court of Texas, allowing limited unauthorized practice of law exemptions for certified chatbots handling uncontested divorces and name changes1016. The TRAIL 2024 Summit Report further recommends establishing a statewide AI oversight committee with binding authority to decertify noncompliant systems16.
C. Longitudinal Impact Tracking
The Texas Bar Foundation is funding a 5-year study tracking AI’s impact on pro se success rates, with preliminary data showing 22% higher favorable outcomes in AI-assisted small claims cases versus unassisted litigants1621. Real-time dashboards displaying these metrics will launch on TexasLawHelp.org in Q2 20251416.
Conclusion
The strategic deployment of AI in Texas’ legal ecosystem demonstrates measurable success in expanding access to justice, from automated document assembly to predictive legal analytics. However, as Judge Xavier Rodriguez cautioned in the 2024 Western District AI Symposium, “Technology without trained navigators risks becoming another barrier”11. The resources outlined herein provide attorneys with both the technical tools and ethical frameworks necessary to harness AI’s potential while upholding the profession’s core values. Continued collaboration between TRAIL, the Texas Access to Justice Commission, and legal aid innovators will be paramount as these technologies evolve91621.
Citations:
- https://www.ncsc.org/conferences-and-events/events-calendar/2025/webinars/january/webinar-tech-for-all-applications-of-ai-to-increase-access-to-justice
- https://www.thomsonreuters.com/en-us/posts/ai-in-courts/ai-in-courts-resource-center-launches/
- https://justiceinnovation.law.stanford.edu/projects/ai-access-to-justice/
- https://www.research.pitt.edu/ai-tools-help-increase-access-justice
- https://www.thomsonreuters.com/en-us/posts/ai-in-courts/ai-legal-aid-generational-opportunity/
- https://www.scl.org/generative-ai-redefining-access-to-justice/
- https://www.americanbar.org/groups/centers_commissions/center-for-innovation/artificial-intelligence/access-to-justice/
- https://www.probonoinst.org/2024/08/29/ai-ethics-in-law-emerging-considerations-for-pro-bono-work-and-access-to-justice/
- https://www.texasbar.com/AM/Template.cfm?Section=Meeting_Agendas_and_Minutes&Template=%2FCM%2FContentDisplay.cfm&ContentID=64635
- https://www.texasbar.com/AM/Template.cfm?Section=articles&Template=%2FCM%2FContentDisplay.cfm&ContentID=63343
- https://www.everlaw.com/blog/ai-and-law/responsibly-diving-into-generative-ai-with-judge-xavier-rodriguez/
- https://www.tlie.org/resource/ethical-implications-of-using-ai-for-texas-attorneys
- https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/
- https://texaslawhelp.org/article/artificial-intelligence-as-a-legal-help-tool
- https://www.texasbar.com/AM/Template.cfm?Section=articles&Template=%2FCM%2FHTMLDisplay.cfm&ContentID=61186
- https://www.texenrls.org/wp-content/uploads/2024/07/Taskforce-for-Responsible-AI-in-the-Law-2024-Summit-Report.pdf
- https://www.thomsonreuters.com/en-us/posts/government/leveraging-genai-tools-courts/
- https://www.smokeball.com/blog/7-ai-apps-for-your-legal-toolbox
- https://www.abajournal.com/columns/article/access-to-justice-20-how-ai-powered-software-can-bridge-the-gap
- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4876633
- https://www.thomsonreuters.com/en-us/posts/ai-in-courts/chatbots-pro-se-litigants/
- https://law.stanford.edu/juelsgaard-intellectual-property-and-innovation-clinic/ai-and-access-to-justice/
- https://yjolt.org/access-ai-justice-avoiding-inequitable-two-tiered-system-legal-services
- https://nacmnet.org/wp-content/uploads/AI-and-Access-to-Justice-Final-White-Paper.pdf
- https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20241010-texas-attorney-ags-office-reaches-settlement-with-ai-company-over-deceptive-claims
- https://www.reuters.com/legal/government/texas-lawyer-fined-ai-use-latest-sanction-over-fake-citations-2024-11-26/
- https://www.reuters.com/technology/artificial-intelligence/ai-hallucinations-court-papers-spell-trouble-lawyers-2025-02-18/
- https://news.bloomberglaw.com/litigation/lawyer-sanctioned-over-ai-hallucinated-case-cites-quotations
- https://www.reuters.com/legal/transactional/us-judge-orders-lawyers-sign-ai-pledge-warning-they-make-stuff-up-2023-05-31/
- https://www.texasbar.com/AM/Template.cfm?Section=articles&Template=%2FCM%2FHTMLDisplay.cfm&ContentID=46315
- https://lawlibguides.luc.edu/c.php?g=1301896&p=9566357
- https://www.txcourts.gov/media/1456719/generative-ai-presentation.pdf
- https://www.youtube.com/watch?v=cmTKsWyUIHA
- https://mediate.com/generative-ais-ability-to-transform-access-to-the-harris-county-family-district-courts-rules/
- https://www.linkedin.com/pulse/empowering-justice-how-ai-reshaping-legal-aid-access-ulysses-jaen-dktse