ABA Formal Opinion 512 addresses the risks of using generative AI (GAI) in legal practice, with a key concern being the confidentiality of client information. Under ABA Model Rule 1.6, lawyers must protect all information relating to the representation of a client, including by making reasonable efforts to prevent inadvertent or unauthorized disclosure of, or unauthorized access to, that information. ABA Model Rule 1.9(c) extends this duty to former clients, and ABA Model Rule 1.18(b) extends it to prospective clients.
Unauthorized Disclosure of Confidential Information: What Is the Risk with GAI?
Self-learning GAI poses a greater risk to client confidentiality than other technology used in a modern law practice because it can retain and reuse input data (prompts), increasing the chance of inadvertent disclosure or cross-use in other matters. This is true whether the information is used within a firm's closed system, where the stored data is used only internally, or outside the firm in an open system, where data is shared with external sources.
Why do lawyers need to be concerned about inputting confidential information into an internal firm or "closed" GAI system? The answer lies in the distinction between access to confidential information and the use of that information within a firm. While lawyers and staff typically have access to the confidential information of all of the firm's clients, using that information to prompt the firm's self-learning GAI system creates a real risk that one client's information will surface in work for other clients. That use may breach the confidentiality obligations owed to the first client, and it could occur without either the lawyer who entered the information or the lawyer whose output later draws on it realizing that a violation has taken place.
This risk is not merely hypothetical. Multiple ethics opinions, including Opinion 512 and those issued by the Florida Bar and the Pennsylvania and Philadelphia Bars, emphasize that self-learning GAI tools may inadvertently disclose client information even in a closed system used exclusively within a single law firm.