The real problem is not whether machines think but whether men do. —B.F. Skinner
By now, most attorneys are aware that artificial intelligence (AI)–driven platforms can serve as powerful tools in the legal setting. Yet many are hesitant to take advantage of these resources for fear of unwittingly committing an ethics violation. If we approach AI tools with caution and mindful awareness, however, we can maximize their positive potential while avoiding legal ethics issues.
When our article “ChatGPT: What Lawyers Need to Know Before Using AI” was published in the June 2023 issue of GPSolo eReport, only one AI-related legal ethics case had come to light. (You may find it helpful to read that article in conjunction with this one, as it provides foundational material.) Since then, a number of reported incidents have emerged, motivating us to revisit the topic in greater depth. In this article, we will provide a survey of new AI-related ethics matters, review related ethics opinions and guidelines issued by bar associations, and highlight AI developments in the judiciary.
In examining specific case studies, our goal is not to scare our readers away from ChatGPT or other generative AI tools. Instead, our intent is to show that ethical issues arise when legal professionals take shortcuts and fail to adhere to well-established rules of professional conduct, such as the duties of competence and diligence.
Case Law Developments
The following cases illustrate why attorneys who make an error should confront their mistakes early on and resist any temptation to cover them up. The act of concealment, rather than the initial mistake itself, can exacerbate the severity of the situation. If you find yourself in ethical hot water, our best advice is “tell the truth faster.”
Missteps by a Novice Lawyer
When drafting his very first motion to set aside a decision, Colorado Springs attorney Zachariah Crabill relied heavily on ChatGPT, which, unfortunately, fabricated citations to nonexistent cases (a phenomenon known as AI “hallucination”). Although he discovered the fabrications prior to the hearing, he neither disclosed them to the court nor withdrew the motion.
When the judge questioned the validity of the cases, Crabill falsely blamed a legal intern. Six days later, he filed an affidavit acknowledging his use of ChatGPT in drafting the motion. The presiding judge subsequently referred the matter to the Colorado Office of Attorney Regulation Counsel, which suspended Crabill from practice for one year and one day. He was required to serve only 90 days of the suspension, however, on the condition that he complete a period of probation.
Perils of Unverified Legal Research
A novice attorney employed by the Dennis Block firm cited nonexistent case law in a brief filed in a matter pending before Los Angeles Superior Court Judge Ian Fusselman. After an opposing lawyer discovered the fake citations, the judge dismissed the matter and imposed a penalty of $999 against Block’s law firm. (Because the sanction was less than $1,000, the firm was not required to report the violation to the state bar.)
At the sanctions hearing, attorney John Greenwood appeared on behalf of the Block firm and testified, “I have to say there was a terrible failure in our office. There’s no excuse for it.” He further stated that the responsible attorney (who by then was no longer employed by the firm) had not checked the “online research” on which she relied. Perhaps the firm’s willingness to take responsibility worked in its favor, given the relatively light sanction imposed by Judge Fusselman.
Beyond Proofreading
In Smith v. Farwell et al., a Massachusetts attorney was sanctioned $2,000 for submitting pleadings containing fictitious cases generated by AI. The lawyer apologized to the court and admitted that while he had reviewed the documents for “style, accuracy and flow,” he had not verified the accuracy of the case citations. Despite the lawyer’s candor, apology, and acknowledgment of fault, the court found that Rule 11 sanctions were appropriate. While the $2,000 penalty is not small potatoes, it is certainly far less than what has been imposed on lawyers who were less forthcoming.
When Bad Faith Matters
New York lawyer David M. Schwartz faced possible sanctions for submitting a letter brief that cited nonexistent cases. As reported by the ABA Journal, Schwartz filed the brief in support of Michael Cohen’s motion for early termination of supervised release. Cohen had “found” the cases through Google Bard and provided them to his counsel; apparently, however, no one on the team read them. Subsequently, a new member of Cohen’s legal team, unable to verify three of the citations, informed the judge of the problem.
After holding a sanctions hearing, Judge Jesse Furman concluded that sanctions were not appropriate because there was no finding of “bad faith.” As the judge noted in his opinion, “Rule 11 does not always require bad faith but it does where, as here, a court raises the prospect of sanctions on its own initiative.” It is likely that the judge was somewhat lenient because Cohen’s legal team self-reported the error upon discovering it.
Client Notification of AI Ethics Issues
The Second Circuit recently referred attorney Jae S. Lee to the court’s grievance panel for “further investigation” after she failed to confirm the validity of cases generated by ChatGPT. Furthermore, the court ordered her to supply a copy of the ruling to her client, translated into Korean if necessary. Ouch! This issue could have been avoided had she simply confirmed that the cited cases existed and supported her position. Readers interested in the full opinion are referred to Park v. Kim, No. 22-2057 (2d Cir. 2024).
Pro Se Litigants
Even pro se litigants must ensure that the cases cited in their submissions are accurate. In Ex parte Allen Michael Lee, No. 10-22-00281-CR (Tex. App. July 19, 2023), the court dismissed an appeal due to “inadequate briefing” by a pro se litigant. The court noted that the argument portion of Lee’s brief appeared to have been generated by AI, as it cited three cases that do not exist.