Understanding and Mitigating AI-Related Liability and Risk in Legal Practice

AI's increasing sophistication means attorneys must understand the potential for AI-driven errors to translate into professional liability. Unlike traditional software tools, the probabilistic nature of many AI systems, particularly Large Language Models (LLMs), means outputs are not always predictable, accurate, or verifiable without independent effort. 

Texas attorneys should be mindful of the risks inherent in using AI and should look for ways to mitigate those risks. 

Errors and Inaccuracies

AI models can "hallucinate," generating plausible-sounding but entirely false information, including fabricated case citations, statutes, or legal principles. Relying on such outputs without thorough verification can lead to court sanctions (as seen in cases like Mata v. Avianca, Inc.) and form the basis for a malpractice claim if the inaccuracy harms the client's case.

Mitigation Strategies

  • Integrate human oversight into every step of AI-assisted work, particularly when drafting legal documents, conducting analyses, or providing advice to clients.
  • Verify AI outputs against authoritative legal sources and original documents before relying on or submitting them.
  • Use clear and specific prompts to improve AI output quality.
  • Establish internal quality control procedures such as standardized workflows, checklists, and peer-review processes to systematically catch and correct errors.

Confidentiality Breaches

Sharing sensitive client information with non-enterprise AI platforms, especially general-purpose tools that may lack robust data security or confidentiality guarantees, risks unauthorized disclosure. This violates the attorney's strict duty of confidentiality (Texas Disciplinary R. Prof. Conduct 1.05). Even if disclosure is accidental, the failure to take reasonable precautions can lead to disciplinary action and liability. 

Mitigation Strategies

  • Select AI vendors with clearly-defined, robust confidentiality and data security measures.
  • Include explicit contractual provisions mandating confidentiality, data encryption, and stringent access controls.
  • Anonymize or redact sensitive client details before inputting data into AI tools, ensuring no identifiable information is disclosed.
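For firms that automate part of the redaction step, a minimal sketch of pattern-based redaction follows. The patterns and placeholder labels here are illustrative assumptions only, not a complete PII solution; any automated redaction should still be reviewed by a human before client data reaches an AI tool.

```python
import re

# Illustrative patterns only -- a production redaction pipeline would need a
# vetted PII-detection library and human review, not a handful of regexes.
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match of every pattern with its placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

print(redact("Client reachable at jdoe@example.com, SSN 123-45-6789."))
```

A placeholder-based approach like this has the side benefit of being reversible within the firm: a lookup table mapping placeholders back to the original values can be kept internally while only the redacted text is sent to the AI tool.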

Bias and Discrimination

AI models are trained on vast datasets that may contain societal biases. If used without oversight, AI outputs could perpetuate or introduce biases into legal analysis, strategies, or decisions, potentially leading to discriminatory outcomes for clients. This could violate ethical duties and non-discrimination laws, opening avenues for liability.

Mitigation Strategies

  • Apply critical human judgment and independently verify AI-generated recommendations, especially in contexts where biased outcomes could significantly impact clients.
  • Use AI platforms trained on diverse, representative data to minimize embedded societal biases and improve equitable outcomes.
  • Choose AI tools that offer clear explanations of their processes and decision-making criteria, allowing attorneys to spot potential biases and challenge unfair results.

Over-reliance and Failure to Supervise

Treating AI as an infallible authority instead of a tool can lead to neglecting independent legal analysis and verification. Failure to adequately supervise the use of AI by lawyers and non-lawyer staff, including reviewing AI outputs, is a direct violation of supervisory duties (Texas Disciplinary R. Prof. Conduct 5.01, 5.03) and a significant malpractice risk. 

Mitigation Strategies

  • Implement a regular training program to ensure attorneys and staff understand the strengths, weaknesses, and evolving nature of AI tools, reinforcing best practices and vigilance in AI use.
  • Establish internal quality control procedures such as standardized workflows, checklists, and peer-review processes to systematically catch and correct errors.

Failure of Communication and Informed Consent

Texas Disciplinary R. Prof. Conduct 1.03 requires keeping clients reasonably informed and explaining matters sufficiently to permit informed decisions. This extends to the use of AI tools. Attorneys are not expected to communicate the use of all AI tools. (In fact, disclosing all use of AI would be next to impossible, given that AI is often embedded in software layers that are not visible to the user.) However, attorneys should be prepared to discuss the potential benefits, risks (e.g., accuracy, confidentiality), and limitations of using AI with a client, and to obtain informed consent, if a client asks about AI use or if an attorney adopts an AI-forward approach to integrating AI within their legal practice.

Mitigation Strategies

  • Develop clear AI disclosure policies specifying when and how to communicate AI usage, including potential risks, benefits, and limitations.
  • Draft standardized language to include in your engagement agreements and client correspondence to consistently inform clients about the firm's use of AI tools.
  • Obtain explicit informed consent from clients when utilizing AI prominently in their representation, particularly when significant client rights or outcomes could be impacted.
  • Train attorneys and staff to effectively discuss AI use in straightforward, accessible language, focusing on accuracy, confidentiality, and potential biases or limitations.
  • Keep detailed records of AI-related discussions and consents, documenting client awareness and approval.
  • Regularly revisit the discussion of AI tools with clients, particularly when adopting new technologies or significantly altering their use within the legal practice.

Billing Irregularities

While AI can increase efficiency, billing practices must remain reasonable (Texas Disciplinary R. Prof. Conduct 1.04). Billing clients for time saved by AI as if it were human billable hours, or charging unreasonable fees for AI-assisted tasks, raises ethical concerns and potential liability. 

Mitigation Strategies

  • Adopt transparent billing practices that clearly communicate to clients how AI tools affect billing, explicitly distinguishing AI-assisted tasks from traditional attorney hours.
  • Pass efficiency gains to clients by adjusting billing to ensure clients directly benefit from time efficiencies achieved through AI use.
  • Incorporate explicit language in fee agreements regarding AI usage, outlining how AI-generated efficiencies or costs (e.g., subscription fees) will be reflected in client billing.
  • Clearly identify and itemize direct AI-related expenses (like software subscription or usage fees) separately from traditional legal fees in client invoices.

While Texas-specific case law on AI malpractice is still developing, recent instances of judges sanctioning attorneys for submitting AI-generated content without verifying the results (see Rochon-Eidsvig v. JGB Collateral, LLC and Gauthier v. Goodyear Tire & Rubber Co. in Texas, and Mata v. Avianca, Inc. in New York) clearly signal that courts expect attorneys to maintain human oversight and verify AI outputs. The absence of extensive precedent does not mean the absence of risk; rather, existing legal and ethical principles are being applied to this new technology.

Key Resources