For the moment, there is reason to be optimistic about the continued place for human lawyers in our legal system. On May 4, in a personal injury case called Mata v. Avianca, Inc. in the Southern District of New York, the judge issued an order to show cause to the plaintiff’s attorney as to why he should not be sanctioned. The attorney had filed an opposition brief to a motion to dismiss that cited a series of fake cases favorable to his client. The plaintiff’s lawyer even provided the court with a partial copy of a ruling from the Eleventh Circuit that was cited in the brief. This precipitated a bizarre turn of events in which the Southern District, after being unable to find any reference to the case on Westlaw or Lexis, had to contact the Eleventh Circuit to affirmatively determine that the case was fake.
I’ll cut to the chase here: One of the plaintiff’s lawyers conducted research on ChatGPT, even going so far as to ask ChatGPT if the cases it was citing were fake, and received the response that the cases were “real and can be found in reputable legal databases such as LexisNexis and Westlaw.” This is what we now call a hallucination, where the AI chatbot made up something that sounded convincing. I must admit I reviewed the opposition brief and found it to be surprisingly credible.
I’m not the first to point out that the lesson of the Mata case is that the utility of these tools for the legal profession has been overestimated and that ChatGPT and similar products are not going to replace you at your law firm anytime soon. But let me offer a slightly more nuanced answer. The problems of accuracy with ChatGPT and other large language models are well-documented. I would never rely on one of these products in my practice without checking every word for accuracy. Even Allen & Overy, which has been piloting a specialized AI-driven legal tool called Harvey since November 2022, concedes that Harvey’s answers need to be reviewed by a lawyer. But most of the proponents of the use of AI in law talk about how it will bring additional efficiencies to a lawyer’s practice, something likely to lead to a small reduction in head count in the short term. And there is the open question of how the legal industry will be affected if and (more likely) when an AI vendor manages to create a highly accurate generative AI tool aimed at lawyering. Certainly, Allen & Overy is now pursuing that with Harvey.
On the other hand, many litigators are likely to see a boom in AI-related work, ranging from copyright infringement litigation to disparate impact lawsuits under Title VII. For example, in my field, we are watching litigation against Workday, Inc., a company that offers human resource information systems. The suit, which was filed in the Northern District of California earlier this year, alleges that the company’s AI hiring software is discriminatory. Moreover, AI-focused statutes and regulations at the federal, state, and local levels are coming, bringing more work for litigators, at least for now. Though legislators have been proposing AI bills for several years to no avail, Congress finally seems to be paying close attention to the issue collectively, as is evident in the recent Senate hearing featuring Sam Altman, the chief executive officer of OpenAI, and Christina Montgomery, the chief privacy and trust officer of IBM. In addition, several states are considering legislation to regulate the use of AI. Beginning July 5, New York City is enforcing its Local Law 144, which regulates the use of some AI technology in hiring and promotions.
I think the effect of AI on the sheer number of lawyers gainfully employed, at least in the near future, will be a bit of a wash. I don’t worry about remaining relevant today or tomorrow or even next year. But I will be paying close attention to this technology over the next decade to determine how I can continue to add value for my clients and remain vital. I would advise others to do the same.