
The Judges' Journal


Guidelines for Judicial Officers: Responsible Use of Artificial Intelligence

Herbert B. Dixon Jr.

Summary

  • Judge Dixon and a group of judges and scholars developed written guidance for judges regarding the use of artificial intelligence (AI) entitled “Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers.”
  • The guidelines emphasize the importance of understanding that an essential element of judicial decision-making is human judgment, for which judges must remain vigilant to ensure that AI is a tool and not a replacement. 
  • The guidelines strike a proper balance in discussing AI’s benefits versus a judge’s need for caution when using AI.


I am a beneficiary of great work by a group of folks I associate with. The group? Five judges and a lawyer/computer science professor. As a group project, we were searching for examples of written guidance for judges regarding the use of artificial intelligence (AI). After collecting a few promising samples, Judge Scott Schlegel mused: What if we draft a model AI usage policy for judges and clerks? Professor Maura Grossman immediately responded that she liked that idea a lot! Thereafter, Judges Allison Goddard, Xavier Rodriguez, Samuel Thumma, and I joined the bandwagon, enthusiastically indicating our interest in the project. After several months of back and forth, and a few compromises along the way, we completed our project and approached potential publishers. And, now, the rest is history. The Sedona Conference was the first to publish the culmination of our group’s work, a framework for the responsible use of AI titled “Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers.”

It is humbling to think that this project started as a “what if” idea to develop an AI usage policy for judges and clerks. Readers should understand that these guidelines are not the completion of a mission. They represent a starting point: a framework for the responsible use of AI. In summary, these guidelines represent our group’s consensus when we released them for publication.

Notwithstanding the consensus nature of the guidelines, I, a technology writer who strives to avoid excessive technicalities, wholeheartedly endorse the opening paragraph of the guidelines that they “are intended to provide general, non-technical advice about the use of AI and generative artificial intelligence (GenAI) by judicial officers.” Hopefully, the opening paragraph is a signal, even to the so-called technophobe, that the guidelines were written for all judicial users of this transformative technology.

One of the issues our group debated as we put the finishing touches on the guidelines was whether we struck a proper balance in our discussion of AI’s benefits versus the judges’ need for caution when using AI. While judicial use of AI can increase productivity, the guidelines emphasize the importance of understanding that an essential element of judicial decision-making is human judgment, for which judges must remain vigilant to ensure that “AI serves as a tool to enhance, not replace, their fundamental judicial responsibilities.” The guidelines particularly note that “when judicial officers obtain information, analysis, or advice from AI or GenAI tools, they risk relying on extrajudicial information and influences that the parties have not had an opportunity to address or rebut.” Accordingly, to promote public confidence in the justice system, judges “must ensure that any use of AI strengthens rather than compromises the independence, integrity, and impartiality of the judiciary.”

Additionally, although I have previously emphasized the need for judicial officers to exercise due diligence before accepting any output created with the aid of artificial intelligence, the guidelines forcefully state that an “independent, competent, impartial, and ethical judiciary is indispensable to justice in our society” and that this “foundational principle recognizes that judicial authority is vested solely in judicial officers, not in AI systems.” Accordingly, “judicial officers must remain faithful to their core obligations of maintaining professional competence, upholding the rule of law, promoting justice, and adhering to applicable Canons of Judicial Conduct.”

Finally, the guidelines note that human verification of all AI outputs remains essential because, when our group released the guidelines for publication, no known AI tools had fully resolved the problem of misleading or fabricated AI responses, which AI creators euphemistically call hallucinations. So, my public response to our internal debate is: No! We did not overemphasize the need for caution when judges use AI. The guidelines strike a proper balance in discussing AI’s benefits versus a judge’s need for caution when using AI.

Final Note: The guidelines are available online. However, for ease of reference, the full text of the guidelines is reprinted below.

Guidelines for U.S. Judicial Officers Regarding the Responsible Use of Artificial Intelligence

These Guidelines are intended to provide general, non-technical advice about the use of AI and GenAI by judicial officers and those with whom they work in state and federal courts in the United States. As used here, AI describes computer systems that perform tasks normally requiring human intelligence, often using machine-learning techniques for classification or prediction. GenAI is a subset of AI that, in response to a prompt (i.e., query), generates new content, which can include text, images, sound, or video. While the primary impetus and focus of these Guidelines is GenAI, many of the use cases that are described below may involve either AI or GenAI, or both. These Guidelines are neither intended to be exhaustive nor the final word on this subject.

Fundamental Principles

An independent, competent, impartial, and ethical judiciary is indispensable to justice in our society. This foundational principle recognizes that judicial authority is vested solely in judicial officers, not in AI systems. While technological advances offer new tools to assist the judiciary, judicial officers must remain faithful to their core obligations of maintaining professional competence, upholding the rule of law, promoting justice, and adhering to applicable Canons of Judicial Conduct.

In this rapidly evolving landscape, judicial officers and those with whom they work must ensure that any use of AI strengthens rather than compromises the independence, integrity, and impartiality of the judiciary. Judicial officers must maintain impartiality and an open mind to ensure public confidence in the justice system. The use of AI or GenAI tools must enhance, not diminish, this essential obligation.

Although AI and GenAI can serve as valuable aids in performing certain judicial functions, judges remain solely responsible for their decisions and must maintain proficiency in understanding and appropriately using these tools. This includes recognizing that when judicial officers obtain information, analysis, or advice from AI or GenAI tools, they risk relying on extrajudicial information and influences that the parties have not had an opportunity to address or rebut.

The promise of GenAI to increase productivity and advance the administration of justice must be balanced against these core principles. An overreliance on AI or GenAI undermines the essential human judgment that lies at the heart of judicial decision-making. As technology continues to advance, judicial officers must remain vigilant in ensuring that AI serves as a tool to enhance, not replace, their fundamental judicial responsibilities.

Judicial officers and those with whom they work should be aware that GenAI tools do not generate responses like traditional search engines. GenAI tools generate content using complex algorithms, based on the prompt they receive and the data on which the GenAI tool was trained. The response may not be the most correct or accurate answer. Further, GenAI tools do not engage in the traditional reasoning process used by judicial officers. And, GenAI does not exercise judgment or discretion, which are two core components of judicial decision-making. Users of GenAI tools should be cognizant of such limitations.

Users must exercise vigilance to avoid becoming “anchored” to the AI’s response, sometimes called “automation bias,” where humans trust AI responses as correct without validating their results. Similarly, users of AI need to account for confirmation bias, where a human accepts the AI results because they appear to be consistent with the beliefs and opinions the user already has. Users also need to be aware that, under local rules, they may be obligated to disclose the use of AI or GenAI tools, consistent with their obligation to avoid ex parte communication.

Ultimately, judicial officers are responsible for any orders, opinions, or other materials that are produced in their name. Accordingly, any such work product must always be verified for accuracy when AI or GenAI is used.

Judicial Officers Should Remain Cognizant of the Capabilities and Limitations of AI and GenAI

GenAI tools may use prompts and information provided to them to further train their model, and their developers may sell or otherwise disclose information to third parties. Accordingly, confidential or personally identifiable information (PII), health data, or other privileged or confidential information should not be used in any prompts or queries unless the user is reasonably confident that the GenAI tool being employed ensures that information will be treated in a privileged or confidential manner. For all GenAI tools, users should pay attention to the tools’ settings, considering whether there may be good reason to retain, disable, or delete the prompt history after each session.

Particularly when used as an aid to determine pretrial release decisions, consequences following a criminal conviction, and other significant events, how the AI or GenAI tool has been trained and tested for validity, reliability, and potential bias is critically important. Users of AI or GenAI tools for the foregoing purposes should exercise great caution.

Other limitations or concerns include:

  • The quality of a GenAI response will often depend on the quality of the prompt provided. Even responses to the same prompt can vary on different occasions.
  • GenAI tools may be trained on information gathered from the internet generally, or proprietary databases, and are not always trained on non-copyrighted or authoritative legal sources.
  • The terms of service for any GenAI tool used should always be reviewed for confidentiality, privacy, and security considerations.

GenAI tools may provide incorrect or misleading information (commonly referred to as “hallucinations”). Accordingly, the accuracy of any responses must always be verified by a human.

Potential Judicial Uses for AI or GenAI

Subject to the considerations set forth above:

  • AI and GenAI tools may be used to conduct legal research, provided that the tool was trained on a comprehensive collection of reputable legal authorities and the user bears in mind that GenAI tools can make errors;
  • GenAI tools may be used to assist in drafting routine administrative orders;
  • GenAI tools may be used to search and summarize depositions, exhibits, briefs, motions, and pleadings;
  • GenAI tools may be used to create timelines of relevant events;
  • AI and GenAI tools may be used for editing, proofreading, or checking spelling and grammar in draft opinions;
  • GenAI tools may be used to assist in determining whether filings submitted by the parties have misstated the law or omitted relevant legal authority;
  • GenAI tools may be used to generate standard court notices and communications;
  • AI and GenAI tools may be used for court scheduling and calendar management;
  • AI and GenAI tools may be used for time and workload studies;
  • GenAI tools may be used to create unofficial/preliminary, real-time transcriptions;
  • GenAI tools may be used for unofficial/preliminary translation of foreign-language documents;
  • AI tools may be used to analyze court operational data and routine administrative workflows, and to identify efficiency improvements;
  • AI tools may be used for document organization and management;
  • AI and GenAI tools may be used to enhance court accessibility services, including assisting self-represented litigants.

Implementation

These Guidelines should be reviewed and updated regularly to reflect technological advances, emerging best practices in AI and GenAI usage within the judiciary, and improvements in AI and GenAI validity and reliability. As of February 2025, no known GenAI tools have fully resolved the hallucination problem, i.e., the tendency to generate plausible sounding but false or inaccurate information. While some tools perform better than others, human verification of all AI and GenAI outputs remains essential for all judicial use cases.
