The fulcrum for our discussion will be the American Bar Association's recent Formal Opinion 512, issued on July 29, 2024 (“Opinion 512”). Opinion 512 is practically grounded in the present capabilities of GenAI. To that end, it focuses on three core issues:
- lawyers remain fully accountable for all work product, regardless of how it is generated;
- the existing rules of professional conduct are sufficient to govern AI use in legal practice; and
- AI is here to stay and is not going away.
We will also explore formal guidance provided by several other bars—including California, Florida, Kentucky, New York, New Jersey, Pennsylvania, and the District of Columbia (note: some of these links may download a PDF)—and the varying opinions and points of focus presented by each.
Our presentation will delve into specific ABA Model Rules of Professional Conduct and their implications for AI use, generally following the order in which they are discussed in Opinion 512.
- Rule 1.1 (Competence) requires lawyers to maintain technological competence. This necessitates a “trust but verify” approach to GenAI outputs that never compromises accountability. Competency with GenAI also means that lawyers need to understand its capabilities and limitations, not in some abstract technical way, but in ways sufficient to comprehend how it could impact their duties as lawyers. To that end, we will discuss how GenAI is not actually intelligent, but instead is simply “applied statistics”; how to leverage the power that this miracle of math provides; and, perhaps most importantly, how to avoid being deceived by AI creators into thinking that an AI tool is somehow a thinking, feeling person just like you.
- Rule 1.6 (Confidentiality) mandates vigilance in protecting client information when using AI tools. Lawyers using GenAI need to understand whether the GenAI systems that they are using are “self-learning” and will thus send information—including confidential client information—as feedback to the system’s main database. Because the vast majority of such systems are self-learning, a healthy skepticism about disclosing any client information to GenAI is critical.
- Rule 1.4 (Communication) may require client consultation about AI use in their matters, particularly when confidentiality concerns arise.
- Rules 3.1, 3.3(a)(1), and 8.4(c) (Meritorious Claims, Candor to the Tribunal, and Misconduct) prohibit advancing false or frivolous claims generated by AI. This once again implicates our first core issue: As the lawyer, you are the one who is accountable, and “I trusted the AI (but forgot to verify)” is not going to be acceptable.
- Rules 5.1 and 5.3 (Supervision of Lawyers and Nonlawyers) may one day raise complex questions of how human-level AI must be properly supervised. But for now, the New York Bar Association’s guidance provides the best set of guidelines (leveraging ABA Resolution 122 from 2019) to avoid letting a GenAI tool supplant the lawyer as the final decision-maker.
- Rule 1.5 (Fees) presents challenges in balancing efficiency gains from AI with ethical billing practices.
- Rule 5.5 (Unauthorized Practice of Law) necessitates vigilance to ensure AI tools do not cross into providing legal advice or exercising legal judgment without appropriate lawyer oversight.
Finally, we will look to the future, beyond the present-focused Opinion 512. As AI capabilities expand, we must all remain vigilant as lawyers in upholding our ethical duties, which are fundamentally rooted in human knowledge, judgment, and accountability. Because, until AI can credibly match those human qualities, it cannot, and should not, lay claim to such ethical responsibilities as, inter alia, attorney-client privilege.