August 10, 2023 Column

Future SciTech Leaders: Law Student Engagement Committee News

David Husband and Stacey Zumo

Generative AI is all the rage these days, with proponents promising it will revolutionize the way we work and skeptics decrying the threat it poses to society, with some even claiming it may be an "extinction event" on par with climate change. Closer to home for the legal profession is what happens when an attorney inappropriately relies upon generative AI. While technology is always promising and exciting, a proper appraisal of technology's risks and benefits requires considering cases where human reliance on the technology leads to negative outcomes. Below, Stacey Zumo, a member of the Law Student Engagement Committee, discusses the first case of a lawyer being sanctioned for improper reliance on generative AI output in litigation.
—David Husband, Chair of LSEC


A New York attorney may be facing sanctions for relying on ChatGPT to conduct legal research for a personal injury lawsuit. The attorney, who has been practicing for 30 years, thought it was perfectly safe to use the generated text, which included citations to specific case law and even summaries of and quotations from those cases. Unbeknownst to him, however, ChatGPT had completely fabricated six of the cases his legal argument relied upon. After neither the judge nor opposing counsel could independently locate the cited cases, the judge ordered him to submit copies of them. The attorney informed the court that he had mistakenly relied on ChatGPT's assurances that the cases were real and could be found on "reputable legal databases such as LexisNexis and Westlaw." During his sanctions hearing on June 8, 2023, the lawyer confessed, "I did not comprehend that ChatGPT could fabricate cases."

ChatGPT is a type of artificial intelligence known as generative AI. Generative AI, while offering potential promise, is prone to "hallucinating," or fabricating facts (such as legal citations) out of thin air. Other areas of concern include the accuracy and timeliness of data, data privacy, bias, transparency, and intellectual property, among others. Some AI models are also subject to a "knowledge cutoff," meaning they cannot answer questions about events after a certain date (such as September 2021 in the case of ChatGPT).

While there are tremendous potential applications of AI for lawyers, including increasing efficiency, it is especially important that new lawyers and law students are aware of the risks before employing such technology in their research and writing. It may be helpful for simple tasks, such as drafting form letters or summarizing cases, or even as a starting point for spotting legal issues or relevant statutes; but it should not be relied upon to conduct research or to write legal opinions or briefs without stringent vetting of the generative AI output.

This sanctions case underscores the risks of excessive dependence on generative AI in the legal profession. Lawyers must exercise caution, think critically, conduct comprehensive research, and consider both the risks and benefits of utilizing generative AI in their practice or studies.

    The material in all ABA publications is copyrighted and may be reprinted by permission only. Request reprint permission here.

    Stacey Zumo


    Stacey Zumo works as an honors attorney at the FDIC, where she is involved in diverse areas, including administrative, banking, bankruptcy, privacy, and information law, as well as corporate governance and policy work. Previously, she clerked for a state judge in North Idaho. She graduated magna cum laude from California Western School of Law in 2021. The views expressed by Stacey are hers alone and are not representative of those of the FDIC.