Lawyers can use artificial intelligence (AI) to improve, and ultimately elevate, their practices in many ways. As AI stands now, its main strengths lie in automating repetitive tasks and managing large amounts of information. AI can drastically reduce the time law firms spend on e-discovery review because, once a model has learned what to look for, predictive coding algorithms can pre-sort and pre-code large productions. AI-driven practice management software can integrate with e-filing systems and automate a tedious, repetitive, yet completely necessary part of the practice of law. In fact, a 2023 Goldman Sachs study by Joseph Briggs and Devesh Kodnani found that 44 percent of law firm tasks are ripe for automation.
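To make the idea concrete, here is a minimal sketch of the kind of text classification that underlies predictive coding, written in Python with the open-source scikit-learn library. The documents and labels are invented for illustration, and commercial e-discovery platforms use far more sophisticated models and review workflows; this is not any vendor’s actual implementation.

```python
# Minimal sketch of predictive coding for document review (illustrative only).
# Assumes scikit-learn is installed; the sample documents and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A handful of documents an attorney has already reviewed and coded.
reviewed_docs = [
    "Email discussing the disputed supply contract and delivery delays",
    "Invoice for catering services at the annual holiday party",
    "Memo analyzing breach of the supply contract's force majeure clause",
    "Newsletter about the company softball league schedule",
]
labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

# Learn which words and phrases signal responsiveness.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X, labels)

# Score the unreviewed production and surface likely-responsive documents first.
unreviewed_docs = [
    "Draft amendment to the supply contract delivery schedule",
    "Reminder to submit parking validation requests",
]
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

The point is simply that once the model has seen enough coded examples, it can rank the rest of the production so reviewers see the likely responsive documents first.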
AI also has the potential to change attorneys’ lives for the better by solving communication problems. Lori Cohen, a trial attorney from Georgia, suddenly lost her ability to speak, yet she can still argue in front of juries with the help of a remarkable AI tool. Cohen discovered an AI voice-cloning tool made by a company called ElevenLabs. This tool, which Cohen named Lola, was built from recordings of Cohen’s own voice, and its analysis of those recordings allows Lola to recreate her pitch, speaking speed, and even her accent. Because Lola is “contextually aware” of what Cohen is speaking about, Cohen can express emotion and connect with jurors.
However, with any new tool comes the danger of misunderstanding its benefits and pitfalls. No one wants to be left behind in the AI arms race, but attorneys need to be aware of AI’s limits and know when to take the grand claims of AI companies with a grain of salt.
This article is in no way meant to dissuade anyone, attorney or otherwise, from finding new ways to leverage AI to improve the practice of law. Rather, this article is meant to highlight areas where AI should be more closely scrutinized so that the legal profession can implement AI in a responsible way.
Hallucination
All attorneys thinking about implementing AI in their practices should be concerned about hallucination. We have all read multiple stories of lawyers and judges submitting briefs containing completely fictitious case law. In response, many legal tech startups and providers are implementing retrieval-augmented generation (RAG). Like ChatGPT, RAG systems are built on large language models (LLMs), so users can search in natural language rather than memorizing specific terms and operators. But where ChatGPT draws on material from across the Internet, a RAG system pulls its information only from a closed set of data, such as Westlaw, Bloomberg, or the lawyer’s own collection of documents, which allows it to ground its answers in real sources and supply accurate citations. This represents a significant improvement. Some providers even claim their RAG tools are “hallucination free.”
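For readers curious about what retrieval-augmented generation actually looks like, the sketch below shows the basic pattern in Python: retrieve the most relevant passages from a closed document set, then hand only those passages to the language model so that its answer can cite them. The memo titles and contents are invented, the similarity search is deliberately simple, and the ask_llm function is a placeholder for whatever model a real product would call; none of this reflects any vendor’s actual system.

```python
# Minimal sketch of retrieval-augmented generation (RAG); illustrative only.
# The "closed set" here is a tiny in-memory corpus, and ask_llm stands in for
# whatever language model a real product would call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Closed document set, e.g., a firm's own research memos (contents invented).
corpus = {
    "Memo 2021-14": "The statute of limitations for breach of a written contract is six years.",
    "Memo 2022-03": "Oral contracts are subject to a four-year limitations period.",
    "Memo 2023-07": "The discovery rule can toll the limitations period in fraud cases.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k passages from the closed set most similar to the question."""
    titles, texts = list(corpus.keys()), list(corpus.values())
    vec = TfidfVectorizer().fit(texts + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(texts))[0]
    top = sims.argsort()[::-1][:k]
    return [(titles[i], texts[i]) for i in top]

def ask_llm(prompt: str) -> str:
    """Placeholder for the generation step; a real system calls an LLM here."""
    return "[model answer grounded in the cited passages goes here]"

question = "How long do we have to sue on a written contract?"
passages = retrieve(question)
context = "\n".join(f"[{title}] {text}" for title, text in passages)
prompt = (
    "Answer using ONLY the passages below and cite them by title.\n"
    f"{context}\n\nQuestion: {question}"
)
print(prompt)
print(ask_llm(prompt))
```

Because the model is instructed to answer only from the retrieved passages and to cite them by title, the citations it returns point to documents that genuinely exist in the closed set, which is what gives RAG its advantage over an unconstrained chatbot.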
However, we as a profession need to verify that these claims deliver what they promise. Researchers at the Stanford RegLab and the Institute for Human-Centered Artificial Intelligence (HAI) tested the error rates of Lexis+ AI, Ask Practical Law AI, and Westlaw AI-Assisted Research. They found that, while these legal AI tools make significantly fewer errors than general-purpose LLMs, they still hallucinate and produce false information at an alarming rate: “the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.” These errors included responses that were simply wrong as well as responses that were incomplete in some way. In some cases, “a response might be misgrounded—the AI tool describes the law correctly, but cites a source which does not in fact support its claims.”
The researchers highlight the need for rigorous, transparent benchmark testing of legal AI tools, which is not currently being done. In the current competitive landscape, that opacity may serve the interests of technology vendors, but it leaves attorneys at risk of trusting the wrong information. While we should not dismiss the usefulness of AI tools, it would be equally irresponsible to trust them blindly, especially when companies promote RAG as the ultimate solution to hallucination. The Stanford researchers call for more transparency around these tools so that lawyers can comply with their rules of ethics and professional responsibility.
Ultimately, attorneys should always remember that nothing absolves them of their own responsibility to check the accuracy of whatever legal arguments they submit to a court or jury.