September 25, 2023

Lawyers must take responsibility for “supervising” AI

Trusting a generative artificial intelligence program like ChatGPT to write legal briefs is like trusting a young law firm associate. Both need close supervision by more senior lawyers.

An ABA webinar explored the ethics of Generative AI, artificial intelligence that creates content like text, images or music using machine learning.

That was the consensus of four panelists — two judges and two lawyers — at a Sept. 21 webinar titled “Uses and Abuses of Generative AI and the Ethics of Its Use by Attorneys and Judges.” The program was co-sponsored by the ABA Judicial Division and Thomson Reuters.

“The supervision thing is, in my mind, kind of the key component to this,” said Jin-Ho King, a partner with the law firm Milligan Rona Duran & King in Boston. “If you are working in a firm as a supervising attorney, you have to supervise. It is not just what you should do, it is ethically what you are required to do.”

The panel discussed two recent cases — one in New York (Mata v. Avianca), one in Texas (Ex Parte Allen Michael Lee) — in which lawyers filed briefs containing fake case citations generated by AI programs. In the New York case, the lawyers were fined $5,000.

Generative AI, or GAI, is the type of artificial intelligence that creates content like text, images or music autonomously, using machine learning. The problem is that GAI programs like ChatGPT emphasize proper grammar and structure, but accuracy of information “is not necessarily the highest priority,” King said.

The panel reviewed orders from three judges who have imposed new responsibilities on lawyers who use GAI. For example, U.S. District Judge Brantley Starr of Dallas requires all lawyers and pro se litigants to file a certificate attesting that no part of any filing was written by GAI, or that any language drafted by GAI was checked for accuracy by a human.

That’s a reasonable requirement, said panelist Kimberly Kim, an administrative law judge with the California Public Utilities Commission in San Francisco. Filings written by GAI “are going to get more and more refined and more humanlike,” and eventually it will be hard to tell whether they were written by AI or humans, Kim said. “So, to create this external burden is a good idea.”

Another judge, Stephen Alexander Vaden of the U.S. Court of International Trade in New York, requires that any filing that includes text written by GAI must have a disclosure that identifies the program used, the portions of text drafted by GAI and certification that there was no disclosure of confidential or proprietary information to an unauthorized party.

A third judge, Michael Baylson of the U.S. District Court in Philadelphia, requires that lawyers and pro se litigants who use AI to prepare written filings must disclose that AI has been used and certify that every citation has been verified as accurate.

Enforcement will be crucial, said moderator Stephanie Domitrovich, a senior state trial judge with the Sixth Judicial District in Erie, Pennsylvania.

“Obviously an attorney who puts themselves on the line may have to face contempt charges and disciplinary charges based on it,” Domitrovich said. “I think there are ways to enforce it or put the fear of sanctions in their heart and their mind.” That will be more difficult for pro se litigants, she added, “but with attorneys who have licenses, yes, I think it can be enforced.”

Even the most traditional lawyers and judges must learn more about GAI, its benefits and its potential perils, said Nicole Lemire-Garlic, a faculty member at the University of Nevada, Reno.

“Having worked with a lot of different judges over the years, I know there can be a tendency to want to put our heads in the sand and think of yesteryear and how we used old technologies and find comfort in that,” Lemire-Garlic said. “But I think there is so much changing right now that it is really important to develop AI literacy. You may not choose to use it yourself, but you have to understand enough so you can tackle these issues and have a voice in terms of how they are governed in the future.”