

Don’t Find Out What You Don’t Know about AI the Hard Way

Andrew McLure Toft


While I was ruminating (and procrastinating) about how to draft this Practice Point, a client sent me citations to several cases the client had found while doing legal research with ChatGPT, for my use in the client’s case. The “summary” accompanying each citation showed that the case supported the client’s side, and each entry included a case name, a citation, the year of the decision, and the court that rendered it. Having read the cases discussed below, I used my standard legal research tool to search for the client’s cases. Searches using the case names were unsuccessful, as were searches using the citations provided. The cases don’t exist. Regrettably, some lawyers have relied on AI-generated legal research to their detriment and then compounded the problem by lying about it.

People v. Crabill, 23PDJ067 (2023), is a Colorado disciplinary case. Mr. Crabill had been admitted to practice law for less than two years when the events occurred that led to his two-year suspension, all but 90 days of which was stayed pending successful completion of a two-year probationary period. Crabill was hired to set aside a civil judgment entered against a client. In the motion, Crabill cited case law found using ChatGPT, but he did not read the cases or otherwise verify the accuracy of the citations before filing the motion with the court. Before the hearing on the motion, he discovered that the cases were either incorrect or fictitious, but he did not withdraw the motion or inform the court during the hearing. When the judge voiced concerns about the accuracy of the cases, Crabill sealed his fate by falsely attributing the mistakes to a paralegal. Six days later, he informed the court by affidavit that he had used ChatGPT when drafting the motion. Crabill, with counsel, entered into a stipulation to discipline, and the ensuing order stated that he had violated Colorado Rules of Professional Conduct 1.1, 1.3, 3.3(a)(1), and 8.4(c). The discipline was not for the use of ChatGPT in and of itself, but for violating his duties of competence and diligence to the client and his duty of honesty and candor to the court. There were a number of mitigating factors, but they were outweighed by aggravating factors that included dishonesty and a selfish motive. Crabill has since completed the 90-day period of imposed suspension.

Mata v. Avianca, Inc., 22-cv-1461 (PKC) (S.D.N.Y. June 22, 2023), is a second case that shows the perils of unbridled reliance on AI-generated legal research and a lack of candor with the court. The court found bad faith on the part of the individual attorneys involved based on “acts of conscious avoidance and false and misleading statements to the Court,” and, under Federal Rule of Civil Procedure 11(c)(1), held the lawyers’ firm jointly responsible for the sanctions. The opinion and order on sanctions could serve as a case study in any ethics course.

Avianca’s attorney filed a motion to dismiss Mr. Mata’s claims as time-barred. Mata’s attorneys, after seeking and being granted a one-month extension, filed a responsive pleading that cited and quoted from a number of “decisions” and that bore, above counsel’s signature line, the statement “I declare under penalty of perjury that the foregoing is true and correct.” The attorney who signed the pleading had not done the research, had not written the pleading, and had not read the cases cited; he relied on a colleague of more than 20 years. Avianca filed a reply stating that it had been unable to find most of the cases cited and that those it had found did not stand for the propositions for which they were cited. The court then did its own research and could not find many of the cases. The lawyer who had done the research and written the pleading had used ChatGPT. Even after reading Avianca’s reply, that lawyer said he was

“operating under the false perception that this website [i.e., ChatGPT] could not possibly be fabricating cases on its own.” (Tr. at 31.) He stated, “I just was not thinking that the case could be fabricated, so I was not looking at it from that point of view.” (Tr. at 35.) “My reaction was, ChatGPT is finding that case somewhere. Maybe it’s unpublished. Maybe it was appealed. Maybe access is difficult to get. I just never thought it could be made up.” (Tr. at 33.)

The court ordered Mata’s counsel to provide copies of the decisions. They complied, but once again the lawyer who signed the affidavit responding to the court’s order did not read it or any of its attachments. Counsel doubled down and lost again for the same reasons. The court imposed sanctions under Rule 11 that were both monetary and professionally embarrassing, if not damaging.

As in Crabill, the punishment was not for the use of AI for legal research; it was for the lack of candor with the court. Anyone who relies on ChatGPT in practice should read Mata to see just how detrimental the technology can be when used without an understanding of its limitations. Avoid the professional and public embarrassment, cited by the court in its order, that these lawyers brought on themselves.
