As the AI revolution marches forward, its impact will certainly be felt within our court system. New questions arise almost daily from lawyers, judges, clerks, and litigants, especially pro se litigants trying to harness the power of AI to help them represent themselves in a complex legal system.
For this roundtable, we asked several seasoned litigators and a state court judge to weigh in on what they are seeing now and what they believe the future of AI in the courts will hold. Karin M. McGinnis is co-head of Privacy & Data Security, Employment & Labor, and Litigation at Moore & Van Allen PLLC in Charlotte, North Carolina. Her colleague, Tandy B. Mathis, is Senior Counsel, Litigation, Discovery, and Privacy & Data Security at Moore & Van Allen. They have written and presented on a host of AI issues, including evolving legal trends. Judge Scott Schlegel was elected to the Louisiana Fifth Circuit Court of Appeal in 2023. He currently serves on the ABA & LSBA Task Forces on Law and Artificial Intelligence.
1. Generative AI use among pro se litigants continues to rise, impacting courts and attorneys. What is the organized bar’s role in guiding how AI is used by consumers to resolve legal issues?
McGinnis & Mathis (M&M): Informing the court that a party has misquoted a case or cited a case that does not exist is part of an attorney’s obligation to zealously represent their client, and the fact that AI was the source of the error does not change that obligation. Likewise, if we discover that a pro se litigant has used AI in a proceeding in violation of court rules, or in a way that misleads the court, we have a duty to flag it. Educating pro se litigants may be key. Given the risk of sanctions, a pro se party’s use of AI in a legal proceeding could ultimately put them at a disadvantage instead of helping them better advocate for themselves.
Judge Schlegel (SS): This is a great question. But I will leave this one to the various state supreme courts and state bars who are best positioned to guide the profession on this issue.
2. Would regulations assist the profession in governing the consumer use of AI in legal contexts? Does your opinion change if one party to the litigation is pro se?
M&M: Except in connection with some unique evidentiary issues raised by AI, regulations are not necessary, but guidance from courts will be important. We think that the rules of evidence, the rules of civil procedure, including Rule 11, and the professional ethics rules governing attorneys primarily address the current risks of using AI in litigation. For example, in North Carolina, Rule 3.3 of the Rules of Professional Conduct prohibits lawyers from knowingly making a false statement of material fact or law to a tribunal, failing to correct a false statement of material fact or law previously made to the tribunal by the lawyer, or offering evidence that the lawyer knows to be false. Rule 8.4 states that engaging in “conduct involving dishonesty, fraud, deceit or misrepresentation that reflects adversely on the lawyer’s fitness as a lawyer” is professional misconduct. Rule 1.1 requires lawyers to be competent, and our State Bar applies that obligation to the use of technology.
Although ethics rules apply only to attorneys, Rule 11 of the Federal Rules of Civil Procedure and comparable state rules of civil procedure apply both to attorneys and to parties who sign documents submitted to the court. Under Rule 11, a signature on a submission to the court certifies that, to the best of the signing party or attorney’s knowledge, formed after a reasonable inquiry, the “legal contentions are warranted by existing law or a nonfrivolous argument for extending, modifying, or reversing existing law or establishing new law” and that the factual contentions have, or will likely have, evidentiary support. Local court rules admonishing parties to certify that they have checked the accuracy of every statement and legal citation submitted to the court should not be necessary, but they are helpful in driving home the issue. Human oversight is a basic measure for mitigating the risk of inaccurate AI outputs. We are all used to running searches in our browsers for a quick answer, but pro se litigants may not be aware of the risk of inaccurate outputs and hallucinations by AI. A reminder of the risks and sanctions that a pro se party could incur not only educates that party but also helps avoid the extra cost and time of correcting a mess created by an AI-generated brief that was never proofed by a human.
SS: As with the previous question, I will leave this one to the various state supreme courts and state bars, who are best positioned to guide the profession on this issue.
3. Courts have imposed rules on the use of AI in legal documents and proceedings, including disclosure requirements. Are these imposed rules necessary, or are existing rules of professional conduct sufficient to address the concerns of the court?
M&M: For the most part, the existing rules of professional conduct are sufficient for attorneys; those rules govern licensed professionals. But when it comes to pro se litigants, a more direct discussion of the risks of AI would be beneficial. Non-lawyers representing themselves have access to cases, statutes, and regulations through the Internet, but they may not know how to verify sources, may not be aware of the risks of AI, and may not be thinking about the serious implications under Rule 11. Many courts publish guides for pro se litigants that describe important considerations in detail. Adding guidance that specifically addresses the use of AI in proceedings before the court makes sense and can help avoid unnecessary expenditure of time and resources debating whether the pro se party should have known better.
SS: I’ve long argued that overregulating AI risks stifling innovation. Instead, we should focus on education over regulation, empowering attorneys to leverage AI responsibly within the robust ethical frameworks already in place, like ABA Model Rules 1.1, 1.6, and 3.3. Thankfully, both the Louisiana Supreme Court in its January 22, 2024, letter, and the ABA in its Formal Opinion 512, appear to agree. I encourage all attorneys to read them as they provide clear guidance for anyone considering the implementation of AI into their practice. Just like we didn’t shut down Zoom court after the infamous cat lawyer incident, we shouldn’t rush to overregulate AI because of a few high-profile hallucination examples.