
Law Practice Today

December 2024

Generative AI Competence

Daniel J. Siegel

Summary

  • Competence in the legal profession is critical, particularly in the context of using generative AI.
  • Mistakes by attorneys often stem from a lack of understanding and competence rather than the need for new ethical rules.
  • Maintaining competence is crucial for lawyers to avoid ethical pitfalls and ensure credibility with clients, opposing counsel, and the courts.

As someone who has advocated for amendments to the Model Rules of Professional Conduct to include technology-related rules and comments, I don’t believe that new rules are the answer when it comes to generative artificial intelligence (AI). In fact, once you bring generative AI into the discussion, it almost makes me wonder whether the Model Rules need more language or whether it’s the attorneys who just have to pay more attention.

Look at the attorneys who have made mistakes with generative AI. They are attorneys who didn’t read the material generative AI produced for them and submitted briefs, memos, and other documents that were gibberish or unprofessional. Take, for example, Steven A. Schwartz, a New York lawyer who asked ChatGPT to write a brief in response to a motion to dismiss filed against his client. He was answering a motion on a subject he knew little about, in a case he knew little about, in a practice area he knew little about. ChatGPT, which also knew little about that area of law because it is not a legal research tool, produced a brief full of authorities and quotes. It looked good, he thought. But anything with authorities and quotes can look good, even if, as Schwartz later learned, the chatbot made up every quote and every citation. The judges listed as having written the decisions were real judges, but the cases were not real cases. ChatGPT invented them.

When Schwartz submitted his brief to the court and opposing counsel, he was in for a rude awakening. The other attorneys weren’t dabbling; they regularly practiced in that area of law and knew it well. It didn’t take long for them to realize that Schwartz had cited cases that didn’t exist, and they immediately brought that to the judge’s attention. When confronted, the judge asked Schwartz to verify the accuracy of his quotes. He did so by asking ChatGPT whether the cases were real. Not a good idea. ChatGPT told him that all of the quotes were accurate. Of course, they weren’t.

When we talk about ethical rules, we talk about all the different permutations of ethical issues that arise. Most of the quandaries lawyers get into can be traced to a few basic Rules of Professional Conduct. In the end, though, lawyers who make mistakes either slip up honestly, as many of us do, or, as Schwartz did, take cases they aren’t qualified to handle, write briefs about law they don’t know, and suddenly find themselves in trouble. In those cases, the answer is not a new Rule of Professional Conduct but a narrower issue: competence. It’s all about competence.

Model Rule 1.1, the competence rule, provides that lawyers must be competent. I am always amazed that lawyers are required to take continuing legal education courses, but not in the areas in which they practice or may practice. If a criminal law symposium happens to be on the schedule and is convenient to attend, a lawyer who never handles a criminal case can still earn a day’s worth of credits just for showing up and reading the paper.

Lawyers need to be competent. Competence requires a basic level of expertise. It doesn’t mean you have to be the best; it doesn’t mean you have to be superb, but you at least have to know what you’re doing.

Recently, I was presenting a seminar on generative AI, and I asked ChatGPT, “What is a Yellow Freight motion in the context of workers’ compensation law in Pennsylvania?” It replied that it was probably an area of law involving employees of Yellow Freight who were trying to get a result of some kind. That sounds like something my children might have made up when they were younger and didn’t know any better. But lawyers who practice in that area would have laughed at anyone who gave that answer, and so would the judge who heard it. After that, the lawyer would never have a shred of credibility in that courtroom again.

Credibility is the most important thing you can have as a lawyer. You need it for your clients, you need it for opposing counsel, and most of all, you need it for courts, judges, and other entities that you have appeared before. Once you lose your credibility, you have nothing.

In the end, it all comes down to Model Rule 1.1: competence. After all, if you’re not competent, you’re not taking the right courses, you’re not writing the right briefs, you’re not consulting the right resources, and you’re just being lazy. When you’re lazy, the results show.

It’s competence!
