
How the Future of AI will Impact Litigants, Lawyers, and Courts

Tandy Mathis, Karin M. McGinnis, and Scott U. Schlegel

Summary

  • Two veteran litigators and a state court judge weigh in on what role AI will play in the future of courts.
  • Courts are expected to increasingly embrace AI over the next few years for routine administrative tasks, legal research, and case management.
  • The increasing use of AI raises concerns for litigators about authenticity in the e-discovery process.

As the AI revolution continues to march forward, the impact of this technology will certainly be felt within our court system. New questions arise almost daily from lawyers, judges, clerks, and litigants, especially pro se litigants trying to harness the power of AI to represent themselves in a complex legal system.

For this roundtable, we asked two seasoned litigators and a state court judge to weigh in not only on what they are seeing now but also on what they believe our future with AI in the courts will be. Karin M. McGinnis is co-head of Privacy & Data Security, Employment & Labor, and Litigation at Moore & Van Allen PLLC in Charlotte, North Carolina. Her colleague, Tandy B. Mathis, is Senior Counsel, Litigation, Discovery, and Privacy & Data Security at Moore & Van Allen. They have written and presented on a host of AI issues, including evolving legal trends. Judge Scott Schlegel was elected to the Louisiana Fifth Circuit Court of Appeal in 2023. He currently serves on the ABA & LSBA Task Forces on Law and Artificial Intelligence.

1. Generative AI use among pro se litigants continues to rise, impacting courts and attorneys. What is the organized bar’s role in guiding how AI is used by consumers to resolve legal issues?

McGinnis & Mathis (M&M): Informing the court if a party has misquoted a case or cited a case that does not exist is part of an attorney’s obligation to zealously represent their client, and the fact that AI was the source of the error does not change that obligation. Likewise, if it is discovered that a pro se litigant was using AI to support proceedings in violation of court rules or in a way that is misleading, we have a duty to flag this. Educating pro se litigants may be key. With the risk of sanctions, pro se parties’ use of AI in a legal proceeding could ultimately put them at a disadvantage instead of helping them better advocate for themselves.

Judge Schlegel (SS): This is a great question. But I will leave this one to the various state supreme courts and state bars who are best positioned to guide the profession on this issue.

2. Would regulations assist the profession in governing the consumer use of AI in legal contexts? Does your opinion change if one party to the litigation is pro se?

M&M: Except in connection with some unique evidentiary issues raised by AI, regulations are not necessary, but guidance from courts will be important. We think that the rules of evidence, the rules of civil procedure, including Rule 11, and the professional ethics rules governing attorneys primarily address the current risks of using AI in litigation. For example, in North Carolina, Rule 3.3 of the Rules of Professional Conduct prohibits lawyers from knowingly making a false statement of material fact or law to a tribunal, failing to correct a false statement of material fact or law previously made to the tribunal by the lawyer, or offering evidence that the lawyer knows to be false. Rule 8.4 states that engaging in “conduct involving dishonesty, fraud, deceit or misrepresentation that reflects adversely on the lawyer’s fitness as a lawyer” is professional misconduct. Rule 1.1 requires lawyers to be competent, and our State Bar applies that obligation to the use of technology.

Although ethics rules apply only to attorneys, Rule 11 of the Federal Rules of Civil Procedure and comparable state rules of civil procedure apply both to attorneys and to parties who sign documents submitted to the court. Under Rule 11, a signature on a submission to the court certifies that, to the best of the signing party or attorney’s knowledge formed after a reasonable inquiry, the “legal contentions are warranted by existing law or a nonfrivolous argument for extending, modifying, or reversing existing law or establishing new law” and that the factual contentions have, or will likely have, evidentiary support. Local court rules admonishing parties to certify that they have checked the accuracy of every statement and legal citation submitted to the court should not be necessary, but they are helpful in driving home the issue. Human oversight is a basic measure to mitigate the risk of inaccurate AI outputs. We are all used to running searches in our browsers for a quick answer, but pro se litigants might not be aware of the risk of inaccurate outputs and hallucinations from AI. A reminder of the risks and sanctions that a pro se party could incur not only educates that party but also helps avoid the extra cost and time of correcting a mess created by an AI-generated brief that was not proofed by a human.

SS: This is a great question. But I will leave this one to the various state supreme courts and state bars who are best positioned to guide the profession on this issue.

3. Courts have imposed rules on the use of AI in legal documents and proceedings, including disclosure requirements. Are these rules necessary, or are the existing rules of professional conduct sufficient to address the courts’ concerns?

M&M: For the most part, the existing rules of professional conduct are sufficient for attorneys. These rules govern licensed professionals. But when it comes to pro se litigants, a more direct discussion of the risks of AI would be beneficial. Non-lawyers representing themselves have access to cases, statutes, and regulations through the Internet, but may not know how to verify sources, may not be aware of the risks of AI, and may not be thinking about the serious implications under Rule 11. Many courts have guides for pro se litigation that describe in detail various important considerations. Adding guidance that specifically addresses the use of AI in proceedings before the court makes sense and can help avoid unnecessary expenditure of time and resources debating whether the pro se party should have known better.

SS: I’ve long argued that overregulating AI risks stifling innovation. Instead, we should focus on education over regulation, empowering attorneys to leverage AI responsibly within the robust ethical frameworks already in place, like ABA Model Rules 1.1, 1.6, and 3.3. Thankfully, both the Louisiana Supreme Court in its January 22, 2024, letter, and the ABA in its Formal Opinion 512, appear to agree. I encourage all attorneys to read them as they provide clear guidance for anyone considering the implementation of AI into their practice. Just like we didn’t shut down Zoom court after the infamous cat lawyer incident, we shouldn’t rush to overregulate AI because of a few high-profile hallucination examples.

4. As AI grows more sophisticated, so does its ability to alter evidence or create inauthentic electronic evidence. What guardrails, if any, do you predict will be needed in the future?

M&M: As AI becomes more prevalent, there is widespread concern about authenticity in the e-discovery process. Many routine AI-generated documents, such as a contract drafted by generative AI and then reviewed and signed by the parties, should be admissible just like any other business-as-usual document. The issue arises when AI is used to fake or alter an original document, for example, using AI to create a look-alike invoice that alters a key term. Metadata should show whether evidence was altered or faked. Therefore, the requirement to preserve metadata for any electronic evidence remains critical.

Counsel will want to address issues of AI head-on in discovery. For example, interrogatories may ask a party to identify whether certain documents or evidence were generated by AI and, if so, the AI application used. If the output of the AI system is important to the case, discovery into the reliability and accuracy of that system is key. AI training data may become a standard discovery request, particularly in IP and privacy litigation. Regulations like the Colorado AI Act require developers and deployers of AI systems to disclose to consumers that they are interacting with an AI system, and require developers to provide documentation reasonably necessary to help the deployer understand the outputs and monitor the performance of a high-risk artificial intelligence system for risks of algorithmic discrimination. These requirements will create documents that could be useful in litigation to assess evidence generated by AI.

Expect to see issues with AI-generated evidence during trials. A rule requiring a party to disclose if an exhibit is AI-enhanced is reasonable so that the court knows when to further examine admissibility. Courts will need to assess when AI enhancement of an exhibit, such as a blurry photo or a garbled voice recording, is unreliable or prejudicial. As with the introduction of email evidence 30 years ago, parties will be suspicious of AI-generated evidence. In fact, concern about the reliability of AI-generated evidence at trial has prompted the federal judiciary to propose changes to the Federal Rules of Evidence (“FRE”), and the Judicial Conference’s Advisory Committee on Evidence Rules has voted to propose a new Rule 707 to the FRE. The proposed rule would, among other things, apply the standards for expert testimony to AI-generated evidence. This would mean that parties introducing AI-generated evidence would need to show that the data inputs relied upon by the AI system are adequate, that the AI system used reliable principles, and that the output is valid and reflects a reliable application of those principles and methods to the inputs. In the interim, we anticipate more parties relying on experts to support or challenge AI evidence when the evidence is key and admissibility is in question.

SS: Deepfakes and other forms of synthetic media present real challenges for courts tasked with assessing the reliability of digital evidence. In my article, “Deepfakes in Court: Real-World Scenarios and Evidentiary Challenges,” I argue that while our current evidentiary rules are a good starting point, they weren’t designed for a world where convincing, AI-generated content can be produced with minimal effort. We need a combination of technical solutions and procedural reforms that empower judges to better assess the credibility of digital evidence. Additionally, I suggest that lawyers who have a reasonable suspicion that any digital evidence has been altered should be required to raise those concerns pretrial. Further, the ethical standard that lawyers not knowingly offer false evidence should be expanded to cover lawyers who knew or should have known that the evidence was manipulated. The courts cannot be expected to spot fakes on their own.

5. Where do you think the court system will be on the use of AI over the next five years?

M&M: AI is not going to disappear, and courts, plaintiffs, and defendants are going to have to grapple with its implications for litigation. The wheels of justice can turn slowly, but developments in AI do not. It could be a few years before changes to state and federal rules of civil procedure and evidence are vetted and successfully implemented. By then, we expect to have a larger body of case law that will serve as guidance. As we have seen before, with Daubert and Kumho Tire influencing revisions to Rule 702 of the Federal Rules of Evidence on expert testimony, we anticipate that courts will be determining principles and setting precedents that will help inform revisions to evidentiary and procedural rules.

SS: In the next five years, I expect courts to increasingly embrace AI for routine administrative tasks, legal research, case management, and even some aspects of opinion drafting. At the same time, lawyers and law firms will have fully integrated generative AI into their practices, using these tools to streamline their workflows and enhance client services. However, the real test will be integrating these advancements without sacrificing the integrity of the justice system. Judges will need to strike a careful balance between innovation and maintaining human judgment if they want to ensure that justice prevails. 
