
GPSolo Magazine

GPSolo May/June 2023: Public Service: How Lawyers Can Help the Community

Hey, ChatGPT, Know Anything about Pro Bono Legal Services?

James Ellis Arden


  • As the technologies we use, such as AI chatbots, grow more complex, so, too, do our ethical risks. New technologies exacerbate privacy risks because we use them without knowing all the ways our information is being used.
  • That the apps, programs, and software we use are stealing our personal information from us is one thing. It is quite another for the apps to obtain our clients’ confidential information.
  • Artificial intelligence offers innovative ways to help provide pro bono legal services, including through the use of virtual reality, collaborative decision-making, knowledge sharing, and augmented reality.


You’ve probably heard by now that ChatGPT, an artificial intelligence (AI) chatbot, was able to pass the Uniform Bar Examination. (Steve Lash, ChatGPT Passes Bar Exam, Md. Daily Rec. (Mar. 20, 2023).) Given its obvious legal acumen, I decided to ask ChatGPT how artificial intelligence could help lawyers provide pro bono services.

First, Ethics Are Ethics

Before getting to ChatGPT’s response to my query, let me address a misconception about pro bono service under which a few attorneys may still labor: Our ethical duties are the same whether we are working for a fee or for free.

Consider attorney Lloyd Martin Segal, who was retained in May 1982 and was paid $275 to incorporate West Coast Ensemble as a nonprofit corporation; he was retained separately in January 1983 and was paid $300 to incorporate the Community Meditation Center of Los Angeles as a nonprofit religious organization. Segal, unfortunately, ignored both of those clients for so long that each complained to the California State Bar. (Segal v. State Bar, 44 Cal.3d 1077, 1082–83 (1988).)

Seeking dismissal of the state bar complaints, Segal argued that he had taken both matters on pro bono and that the $275 and $300 payments to him were not attorney fees but advances for costs. He stated he “did not have the time to handle the pro bono matters in an expedited fashion” because he was spending most of his time on other business. (Id. at 1084.)

The court didn’t buy it and suspended him: “Segal’s argument presupposes that pro bono clients deserve less diligent service than paying clients, a proposition that undermines the integrity of the legal profession. An attorney’s standard of professional conduct to a pro bono client should be no different from his or her responsibility to any other client.” (Id.)

A lawyer should aspire to render at least 50 hours of pro bono services per year, per ABA Model Rule of Professional Conduct 6.1. Conflicts of interest with current or former clients must still be avoided; however, Model Rule 6.5 permits short-term limited legal services to be provided through a nonprofit without concern for potential conflicts. (You’ll never take on an actual conflict, right?)

ChatGPT Can Gather Good Information about Pro Bono Legal Services

Back to ChatGPT. It responded to my query about how to help provide pro bono services with (1) legal research, (2) document review, (3) chatbots (providing basic legal advice and helping clients understand their rights), (4) predictive analytics (to help pro bono lawyers provide better advice and make more informed decisions about which cases to take on), and (5) translation services.

Meh. So, I asked, “What are more innovative ways to use AI to provide pro bono legal services?” This generated a much more interesting response:

  1. Virtual reality. “AI-powered virtual reality technology could be used to create immersive simulations of legal proceedings, allowing clients to experience what it’s like to be in court or negotiate a settlement. This could help clients better understand their rights and options and feel more confident and prepared when dealing with legal issues.”
  2. Collaborative decision-making. “AI could be used to facilitate collaborative decision-making among lawyers and clients. For example, an AI-powered platform could enable clients and lawyers to jointly review legal documents, provide feedback, and make decisions in real time.”
  3. Knowledge sharing. “AI could be used to create knowledge-sharing platforms that allow pro bono lawyers to share legal expertise and best practices with each other. This could help build a stronger community of pro bono lawyers and improve the quality and consistency of pro bono legal services.”
  4. Augmented reality. “AI-powered augmented reality technology could be used to create interactive legal guides that clients can access on their smartphones or other devices. This could provide clients with instant access to legal information and guidance and help them navigate legal processes more easily.”
  5. Emotional intelligence. “AI could be used to enhance emotional intelligence in pro bono legal services. For example, an AI-powered chatbot or virtual assistant could be programmed to detect and respond to emotional cues in client communications, providing compassionate support and guidance to clients who are dealing with difficult legal issues.”

Now these are things I probably would not have thought of without AI. (Note: I’m quoting ChatGPT above. I have no idea if it plagiarized the information.)

As the Technologies We Use Grow More Complex, So, Too, Do Our Ethical Risks

That our personal information has become currency is evident from all those who are trying to take it from us. How many of us don’t even answer our phones anymore? Well, new technologies exacerbate privacy risks because we use them without being able to know all the ways—sometimes even any of the ways—our information is being used. This presents special professional risks for lawyers.

We’re all throwing the dice, playing the game, moving our pieces around the board, but if there’s a problem, the lawyer is the only person that has read the inside of the top of the box. —Jerry Seinfeld

When you sign into ChatGPT, you have to click through a warning that conversations may be reviewed by other people and an advisement not to share any sensitive information. So, let’s say you want to use it to help you create the Best Brief Ever. If what you input to ChatGPT refers to a confidential matter or communication, you might have just waived attorney-client privilege and breached Model Rule 1.6 if you did not have consent from your client to reveal the matter.

Suppose you copy the text of a draft contract into ChatGPT. It finds several issues, you revise the contract, and ChatGPT produces a New and Improved version of it. Well, Lance Eliot of Forbes notes,

Behind the scenes and underneath the hood, the contract might have been swallowed up like a fish into the mouth of a whale. . . . Your prompt as provided to the AI app is now ostensibly a part of the collective in one fashion or another. . . . The outputs of ChatGPT are also a type of content that can be retained or otherwise transformed by the AI app.

(Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023).)

That the apps, programs, and software we use are stealing our personal information from us is one thing. It is quite another for the apps to obtain our clients’ confidential information—from us! Attorneys must remain mindful when using ChatGPT to assist in client representation. As Eliot notes, “Yikes, you might have innocently given away private or confidential information. Not good. Plus, you wouldn’t even be aware that you had done so. No flags were raised. A horn didn’t blast. No flashing lights went off to shock you into reality.” (Id.)

Attorneys, furthermore, must notify their clients when they find out confidential information has been released without consent. (See ABA Form. Op. 483 regarding attorney duties to avoid data breaches and to notify current and former clients when data breaches are discovered.)

Having to inform clients that a data breach occurred because you tried to use AI would be not just potentially costly but also rather embarrassing. Imagine what you would say. Your dog climbed on the keyboard?