
Voice of Experience: November 2023 | Thanksgiving

Adventures in the Law: In Our Own Image

Norm Tabler


Recent news stories highlight the remarkable extent to which artificial intelligence (AI) and chatbot technology can already assist in the practice of law. If progress continues at the current rate, chatbots may soon be able to function just like real lawyers. That would be both good news and bad news.

A team of University of Minnesota law professors has demonstrated that a chatbot, ChatGPT (Generative Pre-trained Transformer), can pass exams from actual law school courses. The chatbot answered essay and multiple-choice questions in final exams in constitutional law, employee benefits, tax, and torts.

The chatbot passed all four finals, averaging a C+. Performance at that level throughout law school would keep a student on probation but would be sufficient to earn a J.D. See Choi, “Chatbot Goes to Law School,” J. of Legal Education, 2023.

What about actual practice? You might ask New York lawyer Steven Schwartz. He relied on ChatGPT in a personal injury action against Avianca airline. Mata v. Avianca, S.D.N.Y.

When Avianca moved to dismiss, Steve responded with an impressive 10-page brief citing over half a dozen decisions on point, including Martinez v. Delta, Zicherman v. Korean Air, and Varghese v. China Southern Airlines.

So far, so good. ChatGPT can pass law school exams well enough to earn a J.D., and it can conduct legal research and find helpful legal precedents in a real lawsuit.

Now for the bad news. It turns out that chatbots share some of the less attractive characteristics of real lawyers. Maybe we should have suspected as much, knowing that in law school ChatGPT was content to coast along on probation, with a gentleman’s C average.

It turns out that in law practice when chatbots can’t find helpful authorities, they may be tempted to invent them. How do we know that? Because when the opposing lawyers tried to look up the decisions cited in Steve’s brief, they couldn’t find them. And no wonder. The decisions don’t exist. ChatGPT made them up.

Judge Kevin Castel asked Steve why he had submitted a brief chock-full of nonexistent precedents. Steve’s response will not sound unfamiliar to experienced lawyers and judges. He shifted the blame elsewhere, namely to ChatGPT. He lamented, in effect, I relied on ChatGPT, and alas, ChatGPT let me down.

But surely, Judge Castel persisted, Steve hadn't simply taken the chatbot's word at face value, had he? Certainly not, Steve responded. He had asked the chatbot if the cases were real, and the chatbot had answered yes. You read correctly: Steve knew he could believe the chatbot because the chatbot said so.

Asking the chatbot whether the cases were real may sound like a pointless, even foolish, exercise. But wasn’t Steve treating the chatbot precisely as he would treat a lawyer who had brought him the disputed cases? He would ask if the lawyer had read the cases to confirm that they supported the propositions they were cited for.

And if that lawyer had made them up, he would likely respond just as the chatbot did. He wouldn't confess, No, I made it all up. Please forgive me! He'd stick to his story and give Steve the reassurance he sought.

No one doubts that the usefulness of AI in law practice can only increase. If a chatbot can earn a C+ in law school in 2023, surely its performance will only improve over time. If it can already conduct legal research—albeit flawed research—surely its research skills will improve.

But what about the less attractive features of lawyers? Will AI take those on and hone them over time? Already, one chatbot has been sued in California for practicing law without a license. Faridian v. DoNotPay, Calif. Super. And if a chatbot can now fabricate half a dozen precedents in a case like Steve’s, will it someday fabricate a dozen? Two dozen?

Will it proceed to inventing regulations? Statutes? Contracts? Supreme Court opinions? Constitutional provisions?

And what about the personal characteristics of lawyers? We know from Steve’s experience that a chatbot will fabricate cases and then lie about it. Will chatbots also learn to flatter and curry favor with firm leaders in hopes of advancement? Undercut other chatbots and associates to make themselves look better? Brag and tell boring war stories at cocktail parties?

On the business side of law practice, will chatbots pad their hours, exploit expense accounts, disclose confidential client information, represent conflicting client interests?

Chatbots can’t chase ambulances, at least not physically. But what’s to stop them from purchasing endless hours of obnoxious TV ads soliciting personal injury clients?

If the legal profession wants to know the full potential of AI, for good and bad alike, maybe we should look in a mirror.