Guidelines for U.S. Judicial Officers Regarding the Responsible Use of Artificial Intelligence  

These Guidelines are intended to provide general, non-technical advice about the use of artificial intelligence (AI) and generative artificial intelligence (GenAI) by judicial officers and those with whom they work in state and federal courts in the United States. As used here, AI describes computer systems that perform tasks normally requiring human intelligence, often using machine-learning techniques for classification or prediction. GenAI is a subset of AI that, in response to a prompt (i.e., a query), generates new content, which can include text, images, sound, or video. While the primary impetus and focus of these Guidelines is GenAI, many of the use cases described below may involve AI, GenAI, or both. These Guidelines are intended to be neither exhaustive nor the final word on this subject.

FUNDAMENTAL PRINCIPLES  

  • An independent, competent, impartial, and ethical judiciary is indispensable to justice in our society. This foundational principle recognizes that judicial authority is vested solely in judicial officers, not in AI systems. While technological advances offer new tools to assist the judiciary, judicial officers must remain faithful to their core obligations of maintaining professional competence, upholding the rule of law, promoting justice, and adhering to applicable Canons of Judicial Conduct.  
  • In this rapidly evolving landscape, judicial officers and those with whom they work must ensure that any use of AI strengthens rather than compromises the independence, integrity, and impartiality of the judiciary. Judicial officers must maintain impartiality and an open mind to ensure public confidence in the justice system. The use of AI or GenAI tools must enhance, not diminish, this essential obligation.
  • Although AI and GenAI can serve as valuable aids in performing certain judicial functions, judges remain solely responsible for their decisions and must maintain proficiency in understanding and appropriately using these tools. This includes recognizing that when judicial officers obtain information, analysis, or advice from AI or GenAI tools, they risk relying on extrajudicial information and influences that the parties have not had an opportunity to address or rebut.  

The promise of GenAI to increase productivity and advance the administration of justice must be balanced against these core principles. An overreliance on AI or GenAI undermines the essential human judgment that lies at the heart of judicial decision-making. As technology continues to advance, judicial officers must remain vigilant in ensuring that AI serves as a tool to enhance, not replace, their fundamental judicial responsibilities.  

UNDERSTANDING HOW AI WORKS AND THE IMPLICATIONS 

Judicial officers and those with whom they work should be aware that GenAI tools do not retrieve responses the way traditional search engines do. GenAI tools generate content using complex algorithms, based on the prompt they receive and the data on which the tool was trained. The response may not be the most correct or accurate answer. Further, GenAI tools do not engage in the traditional reasoning process used by judicial officers, nor do they exercise judgment or discretion, two core components of judicial decision-making. Users of GenAI tools should be cognizant of these limitations.
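
To make this distinction concrete, the toy sketch below (illustrative only; the vocabulary, probabilities, and prompt are invented, and real models learn billions of such weights from their training data) shows how a generative model assembles a response by sampling each next word from a probability distribution rather than looking up a stored answer. Because each word is sampled, the same prompt can yield different responses on different runs, and no response is guaranteed to be the most accurate one.

```python
import random

# Toy next-word probabilities. A real GenAI model learns vastly more of
# these weights from its training data; everything here is invented
# purely for illustration.
NEXT_WORD_PROBS = {
    "the": [("court", 0.5), ("motion", 0.3), ("statute", 0.2)],
    "court": [("held", 0.6), ("found", 0.4)],
    "motion": [("is", 0.7), ("was", 0.3)],
    "held": [("that", 1.0)],
    "found": [("that", 1.0)],
}

def generate(prompt_word: str, length: int = 3) -> str:
    """Build a response one word at a time by weighted random sampling."""
    words = [prompt_word]
    for _ in range(length):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:
            break  # no learned continuation; stop generating
        tokens, weights = zip(*choices)
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

# The same prompt can produce different output on different runs: the
# response is generated by sampling, not retrieved, so it is not
# guaranteed to be the most correct or accurate answer.
print(generate("the"))
print(generate("the"))
```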

Users must exercise vigilance to avoid becoming “anchored” to the AI’s response, a phenomenon sometimes called “automation bias,” in which humans trust AI responses as correct without validating the results. Similarly, users of AI need to account for confirmation bias, the tendency to accept AI results because they appear consistent with the beliefs and opinions the user already holds. Users also need to be aware that, under local rules, they may be obligated to disclose the use of AI or GenAI tools, consistent with their obligation to avoid ex parte communications.

Ultimately, judicial officers are responsible for any orders, opinions, or other materials produced in their name. Accordingly, such work product must always be verified for accuracy whenever AI or GenAI is used.

JUDICIAL OFFICERS SHOULD REMAIN COGNIZANT OF THE CAPABILITIES AND LIMITATIONS OF AI AND GENAI  

GenAI tools may use the prompts and information provided to them to further train their models, and their developers may sell or otherwise disclose that information to third parties. Accordingly, confidential or personally identifiable information (PII), health data, or other privileged information should not be used in any prompts or queries unless the user is reasonably confident that the GenAI tool being employed will treat that information in a privileged or confidential manner. For all GenAI tools, users should pay attention to the tools’ settings, considering whether there may be good reason to retain, or to disable or delete, the prompt history after each session.
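
As one concrete illustration of this caution, the hypothetical sketch below screens a draft prompt for a few obvious categories of PII before it is ever submitted to a GenAI tool. The patterns, placeholder text, and sample prompt are all invented for illustration; real redaction workflows require far more comprehensive review, and screening a prompt is no substitute for confirming how the tool itself treats submitted information.

```python
import re

# Hypothetical, illustrative-only patterns for a few obvious PII
# categories. This list is far from exhaustive: it will not catch names,
# addresses, case-specific facts, health data, and so on.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace obvious PII with placeholders and report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt, found

draft = "Summarize the motion. Movant's SSN is 123-45-6789; email jdoe@example.com."
clean, flagged = redact(draft)
print(clean)    # placeholders in place of the SSN and email address
print(flagged)  # ['SSN', 'EMAIL']
```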

Particularly when an AI or GenAI tool is used as an aid in making pretrial release decisions, determining consequences following a criminal conviction, or deciding other significant matters, it is critically important to understand how the tool has been trained and tested for validity, reliability, and potential bias. Users of AI or GenAI tools for these purposes should exercise great caution.

Other limitations or concerns include:  

  • The quality of a GenAI response will often depend on the quality of the prompt provided. Even responses to the same prompt can vary on different occasions.  
  • GenAI tools may be trained on information gathered from the Internet generally, or on non-copyrighted materials, rather than on authoritative legal sources.
  • The terms of service for any GenAI tool used should always be reviewed for confidentiality, privacy, and security considerations.  

GenAI tools may provide incorrect or misleading information (commonly referred to as “hallucinations”). Accordingly, the accuracy of any responses must always be verified by a human.  

IMPLEMENTATION  

These Guidelines should be reviewed and updated regularly to reflect technological advances, emerging best practices in AI and GenAI usage within the judiciary, and improvements in AI and GenAI validity and reliability. As of February 2025, no known GenAI tools have fully resolved the hallucination problem, i.e., the tendency to generate plausible-sounding but false or inaccurate information. While some tools perform better than others, human verification of all AI and GenAI outputs remains essential for all judicial use cases.

This material is adapted from Hon. Herbert B. Dixon Jr. et al., Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers, 26 SEDONA CONF. J. 1 (forthcoming 2025), https://thesedonaconference.org/sites/default/files/publications/Navigating%20AI%20in%20the%20Judiciary_PDF_021925.pdf.