
Probate & Property

March/April 2025

Last Word - Let’s Be Careful Out There! Words to Live by When It Comes to Generative AI

Mark R. Parthemer

Summary

  • AI can help attorneys write and even rewrite, but some concerns exist.
  • Lawyers can potentially get into trouble for using tools like ChatGPT, particularly if they use it in ways that violate legal ethics or professional standards.


In the 1980s crime drama Hill Street Blues, Sergeant Phil Esterhaus’s morning roll call catchphrase, “Let’s be careful out there,” rings true in many circumstances and undoubtedly applies to generative artificial intelligence (AI). How is an attorney to approach AI?

Let’s hark back to the good old days of the 1970s. The first hand-held calculator was introduced in the US in 1971, and schools wrestled with how to handle its use: it was sometimes banned, sometimes fully embraced, and occasionally allowed only for limited functions. AI is on a similar glide path. There are five types of AI:

  • Reactive AI: Processes data according to specific rules or algorithms; pattern recognition or mimicry. Examples: Autofill for online forms and chess game apps.
  • Artificial narrow intelligence: Performs a single task well but nothing else. Examples: Siri and Alexa.
  • Limited memory AI: Stores and uses past data to make decisions and improve performance over time. Example: Self-driving cars.
  • Predictive AI: Uses statistical analysis and machine learning to predict future events. Examples: Amazon and Netflix recommendation engines.
  • Generative AI: Came to public prominence with the release of ChatGPT on November 30, 2022. Uses generative models to create new content, including text, images, videos, and audio, and is often trained on vast amounts of data, as with large language models. Examples: In Excel, when I err in a formula, it offers, “May I fix that for you?” When asked, some software will create content, such as writing an article or comparing items.

I was recently asked to find similarities between wealth management (the sponsoring organization’s field) and how nutrition can ward off diminishing mental capacity (the speaker’s topic). Within seconds, AI provided five examples, three of which I adopted and expounded upon. AI can help write and even rewrite, but what are some concerns?

My prompt to ChatGPT, “Do you hallucinate?” produced the following:

Yes, AI models like me can “hallucinate” in the sense that we sometimes generate information that is inaccurate, misleading, or entirely fabricated, even though it might sound plausible. This happens because my responses are based on patterns in the data I was trained on, and I don’t “know” things like humans do. Instead, I generate responses based on probabilities derived from a vast amount of text.

My prompt to ChatGPT, “Can lawyers get in trouble for using ChatGPT?” produced the following:

Yes, lawyers can potentially get into trouble for using tools like ChatGPT, particularly if they use it in ways that violate legal ethics or professional standards. Some of the main risks include:

1. Confidentiality Concerns: Lawyers are bound by strict rules regarding client confidentiality. They risk violating this duty if they input sensitive client information into an AI tool because unauthorized parties could access or store the data.

2. Accuracy of Legal Advice: ChatGPT can provide information but is not a substitute for legal advice. A lawyer relying on ChatGPT for legal research or advice and making decisions based on incorrect or incomplete information could harm their client and result in professional consequences.

3. Unauthorized Practice of Law: Lawyers could face issues if they use ChatGPT to assist in legal work but do not properly oversee the process. If the tool provides advice that the lawyer doesn’t verify or understand, it could be considered delegating legal work to a non-lawyer, which is problematic.

4. Ethical Obligations: Legal professionals must exercise independent judgment, and relying too heavily on an AI tool may be seen as compromising this duty. Lawyers must also provide competent representation, which includes using the right tools and understanding the associated risks.

5. Malpractice Liability: If a lawyer uses ChatGPT in a way that results in bad advice or errors, they could be exposed to malpractice claims, especially if their client suffers damages as a result.

Those five AI-generated items are written with confidence but may not be accurate. Horror stories abound of attorneys landing in hot water for relying on AI without verifying its outputs. In response, some courts now require affidavits affirming that AI wasn’t used or that a person thoroughly reviewed its outputs before filing. Much like the advent of calculators decades ago, AI is here to stay as a transformative tool. Rather than ignoring it or wishing it away, the savviest approach is to embrace its potential, learning to harness AI to enhance the efficiency and effectiveness of your practice. And with that, “AI rest my case.”
