Generative AI in the Courts: Dream or Disaster

Jennifer Fite

Summary 

  • Courts and states are taking varied stances on AI use, from bans to full integration, with ongoing discussions about disclosure, confidentiality, and legal responsibility.
  • Lawyers and judges must now develop a working knowledge of AI’s strengths and limitations, ensuring it enhances, rather than compromises, their ethical and professional duties.

The legal profession is not known as an early adopter of technology, but technology advances whether the legal profession wants to adopt it or not. Artificial intelligence (AI)––the use of computer “thinking” in ways that used to require human intelligence––has been around for decades. Generative AI (GenAI) is a subset of AI that learns the patterns and structures of its training data and produces text, images, videos or other forms of data. GenAI has exploded into popular use in recent years for everything from creating a meal plan to crafting a business proposal.

Using GenAI in everyday life and using it in your professional legal career are different matters. Perhaps you do not care if ChatGPT knows that you are allergic to peanuts and you are willing to risk the limited chance that some evil ne’er-do-well will use that information against you. But attorneys, judges, the courts and the legal system have heightened duties to the public and each other that require us to think much harder about how we use AI. The use of GenAI in courts can enhance legal proceedings, increase efficiency and perhaps level the playing field in the way the law is applied to make it more equitable. But, as with any technology, the seemingly infinite possibilities of GenAI also come with a range of challenges and ethical concerns.

The Current Framework of Regulation

While the law is slow to adopt new technology, it is slower still to understand, regulate and incorporate it into everyday legal practice. With that in mind, it should come as no surprise that states’ determinations on whether and how to use AI in the courtroom have been all over the place.

U.S. Supreme Court Chief Justice John Roberts struck an inconclusive tone about GenAI in his 2023 Year-End Report on the Federal Judiciary. He wrote that GenAI had the potential to increase access to justice for poor and indigent litigants, revolutionize legal research and assist courts in resolving cases more quickly and inexpensively, while also pointing to privacy concerns and the current technology’s inability to replicate human discretion. He urged caution and humility as the evolving technology transforms legal work.

Some states have attempted to expressly forbid the use of GenAI in the courtroom and in pleadings. Others begrudgingly permit its use but require disclosure of the GenAI tools used. A minority have adopted the view that using AI is no different from using a paralegal or associate and that users need not disclose it at all.

Judges nationwide have been individually grappling with the rapid rise of GenAI platforms like the ubiquitous ChatGPT and with how to regulate its use in their court proceedings. Attorneys have run afoul of the courts in several widely publicized instances, including New York lawyers who were sanctioned after filing a brief containing six nonexistent, AI-generated case citations and later misleading the court about it. A Colorado lawyer was suspended over a similar episode. In both instances, the lawyers said they had misunderstood the technology.

In 2024, a U.S. district judge in the Northern District of Texas became one of the first to require lawyers to certify either that they did not use GenAI to draft their filings or that a human checked the accuracy of any AI-generated content.

Illinois has adopted perhaps the most generous approach: as of January 1, 2025, judges and attorneys may be expected to use AI, and its use should “not be discouraged” as long as it complies with legal and ethical standards. Delaware requires that GenAI platforms be approved by its administrative office before they can be used in the courts. Both the 3rd Circuit and the 9th Circuit (the nation’s largest federal appeals court) have established committees to examine how AI will affect the courts. GenAI is evolving at far greater speed than the regulatory authorities tasked with monitoring it, and amid conflicting and confusing court orders, local rules and ethical rules specific to the use of GenAI, some scholars argue that AI-specific rules are ill-advised.

Legal and Ethical Considerations for the Use of GenAI

The states that have created a framework to allow or encourage the use of GenAI commonly consider several of the same factors.

Accountability for the final work product. The end user of GenAI must ensure accuracy and compliance with legal and ethical obligations. Judges and users remain ultimately responsible for their decisions, even if they have used GenAI to assist in reaching those decisions. Researchers have found that AI chatbots powered by large language models (LLMs) like ChatGPT can generate falsehoods in their output, sometimes referred to as GenAI hallucinations, and these hallucinations can happen a third of the time or more. AI-generated output can also contain factual errors or self-contradictory statements. And because LLMs memorize the data they have learned, the risk of plagiarism is high as well.

Lawyers are responsible for bringing only meritorious claims and for candor to the tribunal: Model Rules 3.1, 3.3 and 8.4(c) prohibit frivolous claims, making or failing to correct false statements to a tribunal, and misrepresentation. This puts an additional onus on attorneys to review the analysis, citations and statements of law and fact before presenting AI-generated content to the court.

Supervisory responsibilities. Using GenAI creates additional supervisory responsibilities under Model Rules 5.1 and 5.3. Judges and attorneys are responsible for the actions of their subordinates, and managing lawyers must establish clear policies on GenAI use, ensure training and become versed in the ethical and practical uses of GenAI. This lofty goal adds another layer of responsibility, and perhaps another risk of malpractice.

Disclosure of use. Must the attorney or judge using GenAI disclose that they have done so? Illinois and Florida say no. A judge in Texas says yes. The Fifth Circuit proposed an amendment that would require filers to check a box stating whether they had used GenAI, but public comments on the proposal were mostly negative. Unless the courts reach a comprehensive decision, these discrepancies will likely make their way into local rules.

Lawyers must also communicate with clients: Model Rule 1.4(a)(2) requires reasonable consultation with the client about how the client’s objectives are to be accomplished. As always, whether and when a lawyer must disclose GenAI use to a client depends on the circumstances. Certainly, if asked, we must tell the truth. Lawyers must also consult with clients when the use of a GenAI tool is relevant to the basis or reasonableness of a lawyer’s fee, or when the output will influence a significant decision in the representation, such as evaluating potential litigation outcomes.

Understanding/informed use. Courts ask users not to use GenAI without a working knowledge of general AI capabilities and training in the technical capabilities and limitations of the specific GenAI tool. This might be the most Herculean task for attorneys, but it goes to the basic ethical rule of competence (Model Rule 1.1). Lawyers don’t have to be experts in GenAI, but they must know what’s available, have a reasonable understanding of the specific technology and make an informed decision, using their professional judgment, about whether to use these tools. Learn the tools and stay up to date, because GenAI is evolving quickly. Be aware that if a tool is trained on limited, outdated or biased content, its output might be unreliable, incomplete or discriminatory. And because GenAI can’t understand meaning or evaluate context, legal professionals can’t rely exclusively on it for tasks that require professional judgment.

Learning how to talk to GenAI is another language for attorneys and judges to master. “Prompts,” the instructions entered into an AI tool to produce a result, can vary widely in quality, which means the attorneys most knowledgeable about crafting prompts will likely receive the best results. Try these two different prompts in your favorite GenAI tool: “Create a list of commands that an attorney might use to give instructions to artificial intelligence” versus “Create a list of commands that a family law attorney might use to give instructions to artificial intelligence.” The results from these two prompts are worlds apart, as the sketch below illustrates.
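For the technically curious, here is a minimal sketch of that comparison run programmatically. It assumes the OpenAI Python client (pip install openai) and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, and any comparable GenAI tool would work the same way.

# Minimal sketch: run a generic and a specific prompt side by side.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name below is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Create a list of commands that an attorney might use to give "
    "instructions to artificial intelligence.",
    "Create a list of commands that a family law attorney might use to give "
    "instructions to artificial intelligence.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print(response.choices[0].message.content)
    print("-" * 72)

In practice, the second, role-scoped prompt will typically return commands tailored to family law tasks rather than generic legal chores, which is exactly the gap this exercise is meant to expose.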

Confidentiality. The output of GenAI is only as good as the information put into it. But confidentiality requires that we protect the privacy of information we are privileged to access. That GenAI is a nameless, faceless program does not allow us to compromise sensitive information such as confidential communications, personally identifiable information, protected health information, justice and public safety data, security-related information, or information that conflicts with judicial conduct standards or erodes public trust. Unless you are certain that the GenAI you are using is a closed system (meaning it takes and analyzes only the information given to it by your organization and doesn’t use your organization’s information in analyses for other entities), it’s impossible to maintain confidentiality while using specific data. Even in a closed system, self-learning tools increase the risk that information shared among court staff or other firm lawyers will be exposed in inappropriate ways. At a minimum, the best practice is to obtain a client’s informed consent before inputting information relating to the representation into a GenAI tool. The lawyer must explain the risks and benefits in plain language, beyond a general boilerplate provision in an engagement letter.

How GenAI Is Being Used in Courts

Legal research is perhaps the most common use of GenAI, but it has also been used for creating form divorces, drafting pleadings and discovery, creating exhibits and locating errors in documents. Some courts are looking into using GenAI to screen financial reports in guardianship matters for potential red flags.

One of the more novel ways GenAI is being used in the courts is to calculate how long a person will spend in jail for a crime. Proponents argue that it reduces bias in the sentencing process and can quickly and easily analyze available data and statistics on recidivism to reach a neutral sentencing decision. Detractors argue that the neutrality of GenAI’s results is only as good as the data it uses and that a human decision will result in more equitable sentencing. In criminal cases, where a large amount of data is available and tracked, a large volume of cases moves through the courts, a speedy result is necessary and many of the participants lack time or resources, AI can certainly improve access to justice if used judiciously.

Ultimately, the overriding objective of guidelines on GenAI is to get the courts and lawyers to exercise their professional judgment. Understand it and use it with caution. Think about how to apply the existing ethical rules to this new technology. Explore it or avoid it at your own risk. As with most new technology, the legal profession will have to find a way to incorporate it and adjust. 
