

Generative AI for Lawyers Part 1: Competence, Professionalism, and Risks

Jeanne M. Huey

Summary

  • GAI is here, and lawyers need to understand how it affects the duty of competence and what risks its use involves.
  • The recent ABA Formal Opinion 512 addresses the key ABA Model Rules that come into play when lawyers use GAI in their practice and offers specific guidance on how to remain compliant while doing so.
  • This first article in a series focuses on the basics of remaining competent in the technology lawyers use—including GAI.

In today’s fast-paced legal world, lawyers face a critical challenge: how to maintain competence amid rapidly evolving technology, specifically generative AI (GAI) tools. The American Bar Association’s Formal Ethics Opinion 512 sheds light on this important issue, guiding us toward ethical practice as we navigate new technologies that help us better serve our clients.

Competence for lawyers in 2024 involves more than just knowledge of the law. Under ABA Model Rule 1.1, lawyers must provide competent representation, which includes having the legal knowledge, skill, thoroughness, and preparation reasonably necessary for their work. Under Comment 8 to Rule 1.1, this also encompasses an understanding of the technology that lawyers use in their practice.

Importantly, competence in the technology lawyers use doesn’t require every lawyer to become a tech expert or AI specialist. As Opinion 512 reminds us, however, it is not enough to simply hire someone else who does understand the risks and benefits of that technology. Lawyers must have a “reasonable understanding” of the capabilities and limitations of the specific technology they use, including GAI. Lawyers can meet this standard either by acquiring a reasonable understanding of the benefits and risks of their GAI tools on their own or by drawing on the expertise of others who can provide guidance about the relevant technology’s capabilities and limitations. Put another way, remaining ignorant about the technology used in a law practice, such as GAI, is not an option. And, of course, this isn’t a one-time task: technology, particularly GAI, is evolving rapidly, and staying competent means keeping pace with these advancements.

Understanding the Risks of Generative AI—What Does GAI Have to Say about It?

When asked about the number one risk to lawyers who use GAI for legal work, the GAI program I use (ChatGPT) told me:

The number one risk associated with lawyers using ChatGPT is providing inaccurate or misleading legal advice, often due to the AI's limitations in understanding legal context and nuances.

[Emphasis added.] We have all heard about lawyers who blindly took flawed or completely unfounded legal analysis or “pretend” caselaw generated by GAI, plugged it into a pleading, and filed it with the court. One would think by now that all lawyers would understand that GAI is not a substitute for actual legal work and analysis—the work that lawyers are trained in and paid to provide to their clients. Nonetheless, at least a few attorneys have recently been referred to their local disciplinary authority for citing a nonexistent case generated by ChatGPT in a legal brief, which the court found violated Federal Rule of Civil Procedure 11 and amounted to the submission of false statements to the court. It also likely violated the attorneys’ duty of candor to the tribunal under ABA Model Rule 3.3 or its equivalent.

The GAI program then went on to remind me:

AI tools can generate convincing-sounding responses that might seem factually or legally correct but can include factual inaccuracies, outdated information, or what’s known as “AI hallucinations” — where the AI confidently produces false or fabricated information.

This statement is true—and is part of the allure of using GAI. The “convincing-sounding” responses are so tempting to simply cut and paste. Consider, however, that soon—if it has not already happened—everyone in the legal system (i.e., judges, lawyers, paraprofessionals, clients, and professors) will be able to recognize the difference between GAI-generated text and actual legal analysis and argument written by a skilled, insightful, and intelligent lawyer. When that happens, anyone who chooses to continue using lightly edited GAI text, even in everyday correspondence, will lose credibility with their colleagues and the courts in which they practice.

Competence in the AI Age

So, what does competence in this AI-driven era require? First and foremost, it involves independent verification of AI output. Simply copy-pasting what the tool produces and sending it off to a client or court is not enough. And trying to mask your AI-generated text by running it through a program that “humanizes” it is not a solution. Try it once and you will see why. There are no shortcuts when it comes to the actual practice of law. Lawyers must apply their legal knowledge and judgment to review the AI’s work and modify it accordingly.

The degree of verification needed will depend on the task. But never forget that clients usually face complex, emotional, and high-stakes situations, and they rely on lawyers for more than just legal knowledge—they need understanding, legal guidance, strategic thinking, and human empathy. And they deserve (and lawyers get paid for) far more than just the output of a machine. No amount of technology can substitute for the tailored advice that comes from years of training, real-world experience, and the trust built through personal relationships. As technology reshapes how lawyers work, understanding how to navigate these changes ethically and responsibly is vital for keeping clients happy, maintaining competence, and complying with our rules of professional conduct.
