Attempts to Hide Mistake Lead to Unprecedented Sanctions
In reply, the airline stated that it was unable to find most of the cases cited in the plaintiff’s opposition. The court conducted its own search and likewise could not locate many of the cited authorities. The court then asked the attorney of record to file an affidavit attaching copies of the seemingly nonexistent cases. Instead, the attorney of record requested another extension, falsely claiming he was on vacation. He subsequently executed an affidavit, written by the drafting attorney, attaching purported excerpts of all but one of the requested cases.
After further inquiry from the court, the plaintiff’s attorneys ultimately revealed that they had relied on legal opinions provided by ChatGPT. The drafting attorney stated that he had never used the technology before, did not realize that ChatGPT could generate false content, and turned to it because the firm had limited access to federal case law. The attorney of record did not review the pleadings before filing them.
The court subsequently held an order-to-show-cause hearing, fined the law firm and the attorneys $5,000, and required them to apologize to each judge falsely identified as an author of the fake opinions cited in the opposition. The court explained that the attorneys violated Federal Rule of Civil Procedure 11 by acting with subjective bad faith, that is, the “knowing and intentional submission of a false statement of fact.” For example, the drafting attorney first stated that he used ChatGPT only to supplement his research but later admitted at the hearing that it was the only tool he used. The attorney of record violated Rule 11, the court explained, by failing to read any of the cases cited in the opposition and by lying about being on vacation when the court asked for additional information. The court concluded that while the use of technology is commonplace, “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
Exercise Caution When Using Artificial Intelligence (AI) for Legal Research
Litigation Section leaders observe that this case is a harsh reminder that lawyers are ultimately responsible for the content of their court filings. “Oversight and manual review of the technology’s output is critical in strategic use of AI in litigation,” states Tiffany Williams Brewer, Washington, DC, Vice Chair of the Litigation Section. “When a lawyer is relying on computer generated research, the lawyer is still responsible for their own research,” agrees Michael D. Steger, New York, NY, Co-Chair of the Section’s Solo & Small Firm Committee.
A lawyer’s ethical responsibilities do not end with the use of technology. “The use of AI in practice always implicates a lawyer’s fundamental dual ethical duties to provide competent and diligent representation to their clients, under their state’s corollary to the ABA Model Rules of Professional Conduct 1.1 (Competence) and 1.3 (Diligence),” observes Williams Brewer.
As in this case, the use of AI “can also implicate the duty of candor that lawyers have as officers of the court in communicating with the court under Model Rule 3.3 (Candor Toward the Tribunal),” continues Williams Brewer.
In response to the increased use of AI in legal practice, several courts have issued standing orders ranging from requiring disclosure of the use of programs like ChatGPT to reminding lawyers that they remain responsible for the accuracy of statements made with the aid of AI. “One of the benefits of artificial intelligence is that it can provide some initial research to help guide the attorney on where to look next,” counsels Steger. “But, at this point, AI is not a panacea for legal research and is not going to provide a shortcut that the attorney can use to complete all of their research,” he warns.