
GPSolo Magazine

GPSolo March/April 2025 (42:2): AI for Lawyers

AI in the Courtroom: How to Impress (Not Irritate) the Judge

Wesley B. Hazen

Summary

  • Lawyers should not expect judges to be technophobes, but neither should they expect the court to universally accept new technology without placing certain guidelines on its use.
  • Judges expect attorneys to disclose whether AI tools were used and how they contributed to the final legal work.
  • Attorneys must verify that everything presented to the court is true and correct—and they must be prepared to explain this verification process to the judge.
  • Judges expect attorneys to critically assess AI-generated content for fairness and impartiality.

Technology and the law: two aspects of modern society that, ideally, would go hand in hand. All too often, however, this is not the case. In fact, it wasn’t until March 2020 that many law firms, courts, and legal practitioners realized they would be forced to adopt modern technology to continue business operations during the global COVID-19 pandemic or face an uncertain future. Meanwhile, another leap forward in the tech industry was emerging that would forever alter the way legal work is carried out: artificial intelligence (AI).

On November 30, 2022, OpenAI introduced the AI platform ChatGPT, and since that time, AI has become integrated into industries the world over, including the legal field. Attorneys now use AI to draft pleadings, conduct legal research, analyze documents, and predict potential case or settlement outcomes. This increasing use of AI raises an important question for litigators: How will judges respond?

AI: May It Please the Court?

Lawyers should not expect judges to be technophobes, but neither should they expect the court to universally accept the adoption of new technology without placing certain guidelines on its use in the practice of law. Although some judges openly welcome and champion AI’s ability to enhance legal research and streamline case management, they are also reasonably wary of potential pitfalls in its unregulated use. Key concerns include:

  • Accuracy and reliability. AI-generated content must be verifiable, free from errors, and aligned with existing legal precedent. By now, we are all familiar with cautionary tales of negligent attorneys submitting AI-generated documents to the court that contained errors or entirely fabricated cases. For example, in 2024, an attorney in the U.S. District Court for the Eastern District of Texas submitted a responsive document that contained numerous AI-generated case law citations in support of his client’s position, but he failed to verify the factual and contextual accuracy of those cases, earning him a court-ordered sanction. A second, more recent incident, reported in February 2025, involved another Texas attorney facing a $15,000 sanction for pleadings containing fake citations in an ERISA case filed in the U.S. District Court for the Southern District of Indiana. In response to an order to show cause, the attorney failed to properly account for the faulty case cites and instead simply apologized for the error. The misstated cases prompted the judge to undertake a “non-exhaustive search,” which turned up two other instances of faulty citations in other filings with the court.
  • Transparency. Judges expect attorneys to disclose whether AI tools were used and how they contributed to the final legal work. This disclosure can be as simple as one or two sentences within a brief or pleading stating that part of the document was drafted with the assistance of AI and that the filing attorney has verified its accuracy, reliability, and conformity with all applicable ethical rules. It is easy to foresee courts requiring a formal certification to this effect at the end of filed documents, although no such certification exists to date.
  • Ethical considerations. The use of AI must comply with professional conduct and procedural rules, ensuring that AI does not mislead courts or compromise a client’s case or confidentiality. Federal Rule of Civil Procedure 11, for example, requires attorneys or parties to sign all filings, certifying to the court that the filing is not presented for an improper purpose and that its legal contentions and factual assertions are warranted. Candor to the court and compliance with all applicable rules are crucial.

Below are a few of the applications for which litigators are heavily using AI, along with some helpful hints to ensure that AI is used in a responsible way that impresses judges rather than irritating them. To evaluate AI’s capability for formulating logical conclusions, I drafted these hints with the assistance of ChatGPT, using the prompt “How can attorneys utilize AI in practice in a responsible and transparent manner?” The results were, for the most part, common sense and rather basic. Following conversations with other legal practitioners and members of the judiciary, I then elaborated on ChatGPT’s responses. The process confirmed that the human element is still essential.


AI in Legal Research: Key Considerations

Legal research is one of AI’s most transformative and widely used applications in the legal field. AI-powered resources can rapidly analyze vast databases of case law, statutes, and secondary sources. However, to satisfy judicial and client expectations, attorneys must:

  • Verify all AI-sourced information. AI tools can sometimes hallucinate facts or fabricate citations. Attorneys must cross-check AI-generated references against the actual primary or secondary legal sources they claim to cite.
  • Understand the AI’s methodology. Judges may inquire about how an AI tool reached a particular conclusion. Attorneys should be prepared to explain the technology’s data sources and limitations, as well as the steps they took to verify the conclusion. For example, ChatGPT has a “General FAQ” page and a “Help” section to assist users in better understanding and using the service. It states there that ChatGPT draws on multiple sources: (1) public information from the Internet, (2) information provided by third-party partners, and (3) information that its researchers or users provide or generate. Unfortunately, this description is not particularly illuminating. AI systems explicitly created for legal research might offer more helpful information about their sources. Regardless of the AI system used, attorneys must be prepared to explain to the court the actions they took to verify the AI-provided citations and conclusions. This is where the rubber meets the road. Attorneys must look up those cases or statutes to confirm not only that they actually exist but also that they say what the AI tool claims they say. In essence, attorneys may trust the AI tool’s abilities, but they must verify that everything presented to the court is true and correct, and they must be prepared to explain this verification process to the judge.
  • Maintain professional judgment. AI is a research aid, not a replacement for legal reasoning or analysis. Attorneys must ensure AI-generated findings align with their legal expertise and ethical obligations.

AI in Document Drafting: Balancing Efficiency and Accountability

AI-assisted drafting tools can significantly reduce the time required to create briefs, contracts, and other legal documents. However, when using AI for drafting, attorneys should:

  • Ensure clarity and precision. AI-generated text should be scrutinized for clarity, consistency, and adherence to legal standards.
  • Avoid plagiarism and bias. Judges expect original arguments, not AI-regurgitated content. Additionally, AI models may inadvertently reinforce biases present in training data.
  • Disclose AI usage when necessary. Some courts may require disclosure of AI assistance in drafting legal documents. Understanding judicial rules in different jurisdictions is crucial.

Predictive Analytics in Cases: A Double-Edged Sword?

Predictive analytics can help attorneys anticipate case or settlement outcomes based on historical judicial rulings and legal trends. While powerful, this AI application raises ethical and strategic questions:

  • Judicial skepticism of predictive tools. Judges may be hesitant to accept arguments that rely too heavily on AI-generated predictions rather than sound legal reasoning.
  • Data limitations. Predictive AI models depend on past rulings, which may not fully capture evolving legal standards, shifts in which case law remains good or bad, or the unique nuances of a given case.
  • Transparency and fairness. Attorneys using predictive analytics should be prepared to explain their methodology and ensure it does not create unfair advantages or obscure the human element of legal advocacy.

Expanding AI’s Role in Legal Practice

Beyond research and drafting, AI is increasingly shaping case strategy, client communications, and even courtroom presentations. Some firms are exploring AI-driven argument generation, virtual legal assistants, and AI-mediated dispute resolution. While these innovations offer great promise, they also demand increased scrutiny from the bench.

  • AI and judicial decision-making. Some jurisdictions have begun experimenting with AI to aid judicial decision-making. Although AI will not replace judges, it may assist in analyzing case law trends and ensuring consistency in rulings. Attorneys must be prepared to argue cases in a system where AI may play a role in litigation.
  • AI for client counseling. AI tools are becoming prevalent in advising clients, especially in compliance-heavy areas of law, such as tax law, securities regulation, and intellectual property. However, attorneys must ensure that clients understand the limitations of AI-generated advice and the need for human oversight and independent analysis. Attorneys also must be prepared to interact with clients who use AI to “do their own research” about their case. Colleagues have reported receiving phone calls from clients questioning whether their case should proceed based on an AI platform’s determination of the “likely outcome” given the facts of the case. Attorneys must counsel clients that, although facts are critical to any case, they are not the sole factor, or even necessarily the most important one, in a case’s outcome.
  • AI in alternative dispute resolution. AI is being used to analyze settlement probabilities and suggest optimal negotiation strategies. However, ethical concerns persist regarding transparency and fairness, particularly in high-stakes or uneven bargaining situations.

AI and Legal Ethics: Navigating the Gray Areas

While AI offers many advantages, it also raises serious ethical questions. Key concerns include:

  • Bias and discrimination. AI models can reflect and amplify biases present in source data. Judges expect attorneys to critically assess AI-generated content for fairness and impartiality.
  • Confidentiality and data security. Using AI often involves processing large volumes of sensitive client data. Ensuring compliance with privacy laws and ethical duties is paramount.
  • Professional responsibility. The duty of competence now includes understanding AI’s capabilities and limitations. Attorneys who rely on AI without proper knowledge risk breaching ethical standards.

Best Practices for AI-Aided Legal Work

To ensure that AI-assisted legal work meets judicial expectations, attorneys should adopt the following best practices:

  1. Use AI as an aid, not a crutch. AI should enhance, not replace, traditional legal skills and analysis.
  2. Trust but verify. Always fact-check AI-generated content against authoritative legal sources.
  3. Disclose AI usage when required. Understand the disclosure rules of the court and jurisdiction where you are practicing.
  4. Maintain ethical and professional standards. AI should never be used to mislead the court or compromise a client’s case.
  5. Stay informed on AI developments. Technology evolves rapidly; staying up-to-date ensures competence and adherence to judicial expectations.
  7. Develop internal AI policies. Law firms should establish internal policies for AI use to ensure consistency and to address ethical concerns properly.
  7. Train legal professionals and staff on AI. Continuing legal education should incorporate AI literacy, enabling attorneys and staff to use these tools effectively and responsibly.
  8. Engage in judicial dialogues on AI. Attorneys should participate in legal forums and discussions to ensure AI usage aligns with judicial expectations.

What We Owe to Our Clients, the Court, and Ourselves

As attorneys, and even as self-represented litigants, we possess the ability to shape legal precedent for generations to come, whether locally or nationally. We owe it not only to our clients to be responsible in our processes but also to the countless other individuals who may rely on our work product to aid their own cases. Further, we owe it to the judge and jury to do everything in our power to ensure our work product is accurate and effective so that they can carry out their respective duties and resolve the case at hand as fairly and impartially as possible. Above all else, we owe it to ourselves, our internal principles, and our external reputation to slow down, examine the information that AI generates for us, and verify its accuracy in support of our position.
