Litigation News | 2024

Artificial Intelligence: Meet the Practice of Law

Steven Bennett Chaneles

Summary

  • State bar issues an ethics opinion to guide attorneys on the use of generative AI.
  • The opinion touches on issues such as confidentiality, competency, billing, and advertising.
  • Section leaders offer suggestions on how to use the new technology ethically.

Spurred by the introduction of ChatGPT in 2022, generative artificial intelligence (AI) presents new ethical challenges for lawyers. To meet those challenges, a state bar governing body has issued an ethics advisory opinion to guide lawyers regarding some of the ethical implications of using AI, touching on issues such as confidentiality, competency, billing, and advertising. ABA Litigation Section leaders offer suggestions for how to ethically use this evolving technology.

In a nod to the growing use of AI, the Florida Bar published Ethics Opinion 24-1. According to the opinion, “generative AI can create original images, analyze documents, and draft briefs based on written prompts.” This new guidance, though advisory only, permits lawyers to use AI and applies current ethical rules that already require lawyers “to protect the confidentiality of client information, provide accurate and competent services, avoid improper billing practices, and comply with applicable restrictions on lawyer advertising.” Rather than promulgating new rules, the opinion relies on existing ethics opinions regarding cloud computing, electronic storage disposal, remote paralegal services, metadata, and lawyer competence as analogues.

Meet the New Problems; Same as the Old Problems

Lawyers already have a duty to take reasonable measures to protect the confidentiality of client information and obtain client consent to disclose it. Pursuant to the opinion, that duty now includes maintaining sufficient technological competence to understand the risks of using generative AI and how to ethically use it. Moreover, AI programs may be viewed as non-lawyer assistants, akin to paralegals. Accordingly, lawyers must consider what functions may be ethically delegated to AI and remain responsible for the work product, including reviewing it for accuracy and sufficiency.

A lawyer must ensure that the charges associated with AI are reasonable and should communicate to a client, preferably in writing, the basis of the lawyer’s fees and costs, including the lawyer’s intent to use generative AI and charge for it. Further, lawyers must not engage in deceptive or misleading advertising. AI chatbots that communicate with clients or third parties must comply with restrictions on lawyer advertising and conflicts of interest and must include a disclaimer indicating that the chatbot is an AI program and not a lawyer or employee of the law firm.

Does AI Really Help?

“AI allows quick and effective issue identification, document review and assessment for relevance and privilege determinations, chronology building, and summarizing testimony and other large volumes of information,” explain Francelina M. Perdomo Klukosky, New York, NY, and Matthew D. Kohel, Baltimore, MD, Co-Chairs of the AI Subcommittee of the Litigation Section’s Intellectual Property Litigation Committee.

Despite these benefits, there are drawbacks. “I would be concerned about using AI to prepare documents that necessarily involve account numbers and balances that are meant to be kept confidential and not publicly filed with the court, such as demand letters or settlement agreements, and I have similar concerns about potential HIPAA violations,” remarks Naomi M. Berry, Miami, FL, Co-Chair of the Section’s Corporate Counsel Committee.

Other Section leaders echo similar sentiments. “AI can deliver significant efficiencies to clients with proper guardrails, but too many jump in feet first without evaluating the issues,” observes Emily Westridge Black, Austin, TX, Co-Chair of the Section’s Privacy & Data Security Committee. “Part of the problem is asking AI to check itself, so if you get unsatisfactory answers, you need to have checks outside the loop instead of relying on AI to assure correctness,” she cautions.

Section leaders offer cautions about the use of AI across a broad range of issues, even before an attorney-client relationship is formed, and encourage transparency with clients. “The use of AI tools for client intake could lead to the creation of an attorney-client relationship without the lawyer’s knowledge or the GenAI tool using abusive, discriminatory, or otherwise inappropriate language,” Klukosky and Kohel warn. “Use of self-learning generative AI tools trained on client data raises the possibility that confidential client information may be stored within the program and revealed in response to inquiries by third parties,” they add.

In that regard, the opinion directs lawyers to understand an applicable AI program’s policies on data retention, data sharing, and self-learning, and consider obtaining the affected client’s informed consent prior to utilizing a third-party generative AI program. “Lawyers should be transparent with how bills are derived and have an upfront conversation with clients about why and how AI is being used and what portion of the bill is attributable to that use,” notes Black.

Will Generative AI Impair Professional Skills Development?

Generative AI presents a new landscape with new challenges for young lawyers. “How do you get analytic chops without doing the grunt work?” asks Black. “We will find ways to deal with it as we did in the transition from book research to online research and paper to paperless, but we will need to be intentional about assisting young lawyers,” she suggests. “New, as-of-yet unidentified skills will be learned, as was the case with e-discovery, and a new skill set will develop that will permit developing and querying AI, creating a new way people can add value,” concludes Black.
