
Law Practice Magazine

The Finance Issue

Ethical Issues When Incorporating AI Into Law Firm Marketing

Micah U Buchdahl

Summary

  • Learn the ethical considerations for using AI in law firm marketing.
  • Understand which rules of professional conduct might apply when using AI.
  • Learn some tips for avoiding missteps when incorporating AI into a law marketing effort.

Many believe that ethical considerations will serve as the compass for the artificial intelligence (AI) evolution in the practice of law. While we’ve all been inundated with content on the subject, my focus here is on how those issues affect law firm marketing and business development efforts.

There are opportunities to use AI for nearly all types of content creation, from blog posts and RFP responses to article drafts and press releases. I was somewhat shocked when I asked Microsoft Copilot to give me a biography of myself—it was not only 100 percent accurate, but better than my handcrafted version.

Many U.S. courts have already addressed AI use in various forms and fashions. Some don’t want to discourage use; others have given stark warnings about disclosures. In an April 2024 Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence, no fewer than 15 Rules of Professional Conduct (RPC) are cited, including those related to advertising and solicitation. State bars are mixed thus far in whether lawyers should be required to disclose their use of AI to clients and obtain their consent.

Consider, from the New York State Bar Association AI guidelines alone, this checklist of RPC rules that can come into play simply in the marketing space:

  • 1.1       Competence—know how AI works
  • 1.2       Scope of representation—include in engagement letter?
  • 1.3       Diligence—does it aid in representing your client?
  • 1.4       Communication—don’t just rely on AI to communicate
  • 1.5       Fees—excessive fees if you would/could use AI; add a surcharge?
  • 1.6       Confidentiality
  • 1.7       Conflicts
  • 5.1       Supervisory responsibilities; oversight
  • 5.2       Subordinate lawyers
  • 5.3       Responsibility for nonlawyers
  • 5.4       Professional independence/judgment—can’t just rely on tools
  • 5.5       UPL (unauthorized practice of law)—are you “aiding” a nonlawyer through AI use?
  • 6.1       Pro bono—can AI really enhance these services?
  • 7.1       Advertising—false, deceptive or misleading; or violates an ethics rule
  • 7.3       Solicitation—auto calls, chat board posts
  • 8.4       Misconduct—using others to engage in conduct you can’t

In a Florida Bar advisory opinion issued earlier this year, Opinion 24-1 points to four ethical caveats in utilizing generative AI: confidentiality, lawyer oversight, legal fees and costs, and advertising.

I don’t personally know a single law firm that is not investing in generative AI in some capacity. Most simply choose not to talk about it—and in some cases for good reason, because it can shine a spotlight for clients, judges and opposing counsel as to what you may or may not be doing with it. Most of the proprietary programs I’ve seen from firms are tweaked versions of products such as ChatGPT. From the marketing perspective, you need to be careful—as one general counsel told me recently, “Why have our bills not decreased if you’ve got this magical stuff at your disposal?” He told me to call him when an invoice that might’ve once been $75,000 is now $50,000 because of AI.

Who Am I Speaking With?

As is often the case with law marketing, plaintiffs’ firms are ahead of the curve. Many have been using these AI tools for years on the marketing side of the practice.

The Florida advisory opinion reminds attorneys to be careful when using generative AI chatbots for advertising and intake purposes, as the lawyer will be ultimately responsible should the chatbot provide misleading information to prospective clients or communicate in a manner that is inappropriately intrusive or coercive. It reminds law firms to inform prospective clients that they are communicating with a chatbot.

A well-structured chatbot is tough to distinguish from a living human. I sometimes find myself asking the bot whether it is a real person. Regardless, it is up to the law firm to make sure the bot asks screening questions that protect the firm from disclosing things it should not and weed out inquirers already represented by counsel.

If you are using chatbots, consider doing the following:

  • Disclose that you are using AI, and that the end user is not talking to a real person.
  • Be careful not to provide “legal information” or “legal advice.” It is important to “hand off” your intake to a professional at the proper time, without saying too much or failing to disclaim properly.
  • Be sure to build in hard stops when an inquiry becomes too complex.
  • Test conversations from time to time to ensure all of the above is properly covered. It is amazing how many firms skip the occasional quality assurance check to make sure everything is working smoothly.
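For firms building or configuring their own intake bots, the guardrails above can be sketched in a few lines of code. This is a minimal illustration only, with hypothetical names and trigger phrases throughout; a real deployment would sit in front of a commercial chatbot platform and be reviewed by counsel.

```python
# Sketch of intake-chatbot guardrails: disclosure up front, screening
# questions, and a hard stop when an inquiry gets too complex.
# All names and trigger phrases here are illustrative assumptions.

DISCLOSURE = (
    "You are chatting with an automated assistant, not a lawyer or staff "
    "member. Nothing in this chat is legal advice."
)

SCREENING_QUESTIONS = [
    "Are you currently represented by another attorney in this matter?",
    "In which state did the matter arise?",
]

# Phrases that should trigger a hand-off to a human professional.
HARD_STOP_TRIGGERS = ["deadline", "statute of limitations", "court date"]


def needs_handoff(message: str) -> bool:
    """Return True when the inquiry is too complex for the bot to handle."""
    text = message.lower()
    return any(trigger in text for trigger in HARD_STOP_TRIGGERS)


def start_session() -> list:
    """Open every conversation with the AI disclosure, then screen."""
    return [DISCLOSURE] + SCREENING_QUESTIONS


def respond(message: str) -> str:
    """Either hand off to a human or continue gathering basic intake info."""
    if needs_handoff(message):
        return ("That question needs a member of our team. "
                "May we have a staff member contact you?")
    return "Thanks. Can you briefly describe what brings you to the firm?"
```

The point of the sketch is that the disclosure and the hand-off logic are enforced by the firm's own wrapper, not left to whatever the underlying model happens to say, which is also what makes the periodic QA testing above straightforward to automate.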

If your marketing team is using generative AI to draft content (and let’s face it, at this point who isn’t?), what are some of the issues you might need to concern yourself with?

Am I Seeing Things?

Like a mirage of water in the desert, AI can produce hallucinations. To put it simply, AI just makes things up, with little rhyme or reason to explain it. Some of the content might rely on case law or statutes that do not actually exist. The onus is on you to ensure that the content you publish is truthful and accurate.

In addition, AI might kick back inappropriate or outdated terminology. There are deepfakes, and simply fake citations. If you are using AI to create a blog post, you may need to pay attention to intellectual property and copyright infringement issues, in addition to potential plagiarism or duplicative content.

Don’t Cross That (State) Line

In my marketing ethics compliance practice, one of my primary concerns in advertising campaigns revolves around UPL. Does that AI-generated content perhaps put you into an unlicensed jurisdiction, or subject you to a state’s advertising rules? I don’t see AI identifying this important concern. You can expect some states to require disclosure when advertising uses AI-generated people, images and voices.

Understand Your Own Firm’s AI Guidelines

Like email, social media and so many things before it, law firms are often all over the place when it comes to figuring out their own internal rules and regulations. It is important to look at your own policies.

  • What is the firm’s general policy on AI?
  • How does your firm’s AI platform distinguish between internal and external data? In other words, might your AI results draw directly from privileged work product in your system?
  • What do you disclose to clients when it comes to the use of AI?
  • Does your firm require client consent for AI usage?

Be sure to review all AI-generated content for possible marketing ethics missteps, such as misleading (and/or deceptive) statements, predictions of success, improper comparisons, a need for particular disclaimer language or crossing the line into providing “legal advice.”

Outside of the ethical considerations, if you are interested in learning more about how to put AI into use, read Ken Chan’s article, “Transforming Legal Marketing With AI” in our sister publication, Law Practice Today. The ABA website offers a significant library of programs and publications to guide you through the next “big thing” on the internet frontier.
