
GPSolo eReport November 2024

AI and You: When Lawyers Should Think Twice about Using AI

Kevin James Doran

Summary

  • This article highlights areas where artificial intelligence should be more closely scrutinized so that the legal profession can implement AI in a responsible way.
  • Some providers claim their retrieval-augmented generation (RAG) tools are hallucination free, but a recent study provides evidence to the contrary.
  • Using AI-generated images poses the risk of professional embarrassment—either because you present something inaccurate or because you present something that just looks silly.

Lawyers can use artificial intelligence (AI) effectively to improve and ultimately elevate their practices in many ways. As AI stands now, its main strengths lie in automating repetitive tasks and managing large amounts of information. AI can drastically reduce the time law firms spend on e-discovery reviews because, once a model knows what to look for, the predictive coding algorithms can pre-sort and pre-code large productions. AI-driven practice management software can integrate with e-filing systems and effectively automate a tedious and repetitive—yet completely necessary—part of the practice of law. In fact, a 2023 Goldman Sachs study by Joseph Briggs and Devesh Kodnani found that 44 percent of law firm tasks are ripe for automation.
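To make the predictive coding idea concrete, here is a minimal sketch of how such a pre-sorting model might work. It uses scikit-learn's off-the-shelf text classifier as a stand-in for a vendor's proprietary system, and the sample documents, labels, and scoring are entirely hypothetical.

```python
# Minimal, illustrative sketch of predictive coding for e-discovery.
# scikit-learn stands in for a vendor's proprietary model; the documents
# and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Attorneys hand-code a small seed set: responsive (1) or not (0).
seed_docs = [
    "Email re: Q3 contract renegotiation terms",
    "Lunch menu for the office holiday party",
    "Draft indemnification clause, version 4",
    "IT ticket: printer on the sixth floor is jammed",
]
seed_labels = [1, 0, 1, 0]

# The model learns which language patterns correlate with responsiveness.
vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the rest of the production so reviewers see likely-responsive
# documents first instead of reading everything in arbitrary order.
new_docs = ["Signed amendment to the master services agreement"]
scores = classifier.predict_proba(vectorizer.transform(new_docs))[:, 1]
for doc, score in zip(new_docs, scores):
    print(f"{score:.2f}  {doc}")
```

In practice, review platforms iterate this loop: attorneys correct the model's worst predictions, the model retrains, and the ranking improves, which is where the time savings come from.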

AI has the potential to change attorneys’ lives for the better by solving communication problems. Consider Lori Cohen, a trial attorney from Georgia who suddenly lost the ability to speak yet can still argue in front of juries. Cohen discovered an AI voice-cloning tool made by the company ElevenLabs. The tool, which Cohen named Lola, analyzes recordings of Cohen’s own voice to recreate her pitch, speed, and even her accent. And because Lola is “contextually aware” of what Cohen is speaking about, Cohen can still express emotion and connect with jurors.

However, with any new tool, there comes the danger of misunderstanding its benefits and pitfalls. No one wants to be left behind in the AI arms race, but attorneys need to be aware of AI’s limits and when to take the grand claims from AI companies with a grain of salt.

This article is in no way meant to dissuade anyone, attorney or otherwise, from finding new ways to leverage AI to improve the practice of law. Rather, this article is meant to highlight areas where AI should be more closely scrutinized so that the legal profession can implement AI in a responsible way.

Hallucination

All attorneys thinking about implementing AI in their practices should be concerned about hallucination—the tendency of these tools to generate plausible-sounding but false information. We have all read multiple stories of lawyers and judges submitting briefs containing completely fictitious case law. In response, many legal tech startups and providers are implementing retrieval-augmented generation (RAG). Like ChatGPT, RAG systems are built on large language models (LLMs), so users can search in natural language rather than memorizing specialized terms and connectors. But where ChatGPT draws on text from across the Internet, RAG models pull their information only from a closed set of sources, such as Westlaw, Bloomberg, or the lawyer’s own collection of documents, which allows them to cite real sources when answering user prompts. This represents a significant improvement, and some providers now claim their RAG tools are “hallucination free.”
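To see why a closed source set helps, here is a simplified sketch of the retrieval step that gives RAG its name. The tiny corpus, the TF-IDF similarity search, and the placeholder case names are hypothetical stand-ins; commercial legal tools run far richer pipelines over licensed databases.

```python
# Simplified sketch of retrieval-augmented generation (RAG). The corpus and
# case names below are hypothetical; real legal tools retrieve from licensed
# databases with far more sophisticated search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# The closed set of trusted sources (e.g., a firm's own brief bank).
corpus = {
    "Smith v. Jones (hypothetical)": "Opinion discussing the pleading standard ...",
    "Doe v. Roe (hypothetical)": "Opinion on admissibility of expert testimony ...",
}

# Step 1: retrieve the source most similar to the user's question.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())
question = "What is the pleading standard?"
sims = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
best = max(range(len(sims)), key=lambda i: sims[i])
citation, passage = list(corpus.items())[best]

# Step 2: instruct the LLM to answer from the retrieved passage only, so the
# answer can carry a real citation instead of an invented one.
prompt = (
    f"Answer using only the source below.\n"
    f"Source ({citation}): {passage}\n"
    f"Question: {question}"
)
print(prompt)  # this prompt would then be sent to the LLM
```

Because the model is steered toward retrieved text rather than open-ended recall, its citations point to real documents; as the study discussed below shows, however, retrieval narrows hallucination without eliminating it.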

However, we as a profession need to be certain these claims hold up. Researchers at the Stanford RegLab and Institute for Human-Centered Artificial Intelligence (HAI) tested Lexis+ AI, Ask Practical Law AI, and Westlaw AI-Assisted Research for their error rates. The researchers found that, while these legal AI tools significantly reduce errors compared to general-purpose LLMs, they still hallucinate and produce false information at an alarming rate: “the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.” These errors included responses that were simply wrong, as well as responses that were incomplete in some way. In some cases, “a response might be misgrounded—the AI tool describes the law correctly, but cites a source which does not in fact support its claims.”

The researchers highlight the need for rigorous and transparent benchmark testing of legal AI tools, which is not currently being done. Given the competitive landscape of the AI industry, that opacity may serve technology vendors’ interests, but it puts attorneys in danger of trusting the wrong information. While we should not dismiss the usefulness of AI tools, we would be equally irresponsible to trust them blindly, especially when companies promote RAG as the ultimate solution to hallucination. The Stanford researchers call for more transparency around these AI tools so lawyers can comply with their rules of ethics and professional responsibility.

Ultimately, attorneys should always remember that nothing absolves them of their own responsibility to check the accuracy of whatever legal arguments they submit to a court or jury.

Image Generation

Another misstep when using AI in legal proceedings involves relying on AI image generation to create demonstrative exhibits. Image generators such as Stable Diffusion, Midjourney, and DALL-E 2 can instantly create impressive and complex images from word prompts alone. For the legal profession, AI use in this context might seem like a much less serious issue than hallucination because demonstrative exhibits are illustrative rather than substantive.

Why, then, should we care? Aside from the minefield of copyright concerns and ethical issues arising from how AI-generated images are created, using such images on your website or in your own demonstrative exhibits creates a risk of professional embarrassment—either because you present something inaccurate that opposing counsel can call out in front of the judge or jury or because you, to put it frankly, present something that looks silly. Many AI-generated images simply do not look good, whether because of the weird smoothness of what is supposed to be a photograph of a person or, most notoriously, the inability to render hands that don’t look like a Lovecraftian fever dream. None of this belongs in a presentation an attorney shows to a jury in a court of law, especially given the availability of actual artists who can take your thoughts, themes, and story and craft helpful illustrations that truly augment your narrative instead of distracting the jury with strange, cheap-looking pictures.

Similar concerns apply to tools that use generative AI (GenAI) to create slide decks packed with visuals and formatting. While these tools are useful for getting a project started, you should not rely on them to produce the entire slide deck.

The Human Touch

The practice of law, especially trial advocacy, is so much more than the recall of relevant data. On October 15, 2024, at a South by Southwest (SXSW) event in Sydney, Australia, a group of lawyers and researchers staged a full mock trial pitting a human lawyer against a random audience member who “represented themselves” using an AI assistant developed by the company NexLaw. While it would be hard to fully replace an attorney with a robot, the experiment shows that the thought is crossing some minds. The question we should be asking ourselves is not what makes an AI tool special but, instead, what makes me, the human lawyer, special? What do I bring to the table when a person or business can simply buy a temporary license to an AI service that feeds them all the law?

In 2001, Brian J. Foley and Ruth Anne Robbins, professors at Rutgers Law School, wrote, “To learn how to win cases, lawyers need to learn about persuading people, an art that has always dealt more with emotion than with reason.” The practice of law is an art. Every case, no matter how small or how similar to another, has its own set of facts, its own people involved, and its own place in our wider culture.

The trust we have in AI can often be misplaced and lead us to make errors. As lawyers, we have a responsibility to our clients, to our colleagues, and to society to do our due diligence regarding this rapidly evolving technology.
