

Ethical Considerations of Integrating Generative AI into the Practice of Law

Aleschia Hyde

Summary

  • Law firms are exploring generative AI to enhance efficiency, reduce costs, and improve legal services, but must consider confidentiality, data security, and ethical obligations.
  • Firms must differentiate between internal and external platforms and ensure compliance with client disclosure requirements.
  • Law firms must fairly allocate AI-related costs, avoid unethical billing practices, and disclose AI usage where necessary to maintain transparency and client trust.

Many lawyers and law firms are exploring the potential of incorporating generative artificial intelligence (GAI) into their practices, hoping to increase efficiency, reduce costs, enhance legal research, and improve client services. Unlike traditional artificial intelligence, which primarily analyzes and classifies information, GAI creates new content, such as text, images, code, or music, based on patterns learned from existing data, producing original outputs that mimic human creativity. While the efficiency and innovation brought by GAI tools are undeniable, their use in law firms raises important considerations about confidentiality, data security, and transparency. This article explores what law firms entrusted with highly sensitive client information should consider when integrating GAI systems into their practices without compromising their ethical and legal obligations.

Deciding Whether to Use GAI Platforms

Lawyers must decide whether to incorporate GAI-based platforms into their practices. GAI platforms are artificial intelligence systems designed to generate text, analyze legal documents, draft contracts, conduct legal research, and automate various aspects of legal work. These platforms use advanced machine-learning models, such as large language models, to assist lawyers in tasks that have traditionally required significant manual effort. A competent lawyer under Rule 1.1 of the Model Rules of Professional Conduct is not required to use GAI platforms, but comment 8 of Rule 1.1 emphasizes that lawyers must stay informed about technological advancements such as GAI. In an ever-changing legal landscape, the definition of a competent lawyer may soon hinge on the lawyer’s ability to leverage GAI platforms effectively.

What does that mean for law firms, both big and small? Model Rule 1.3 demands that law firms and the lawyers within them act with diligence and promptness in representation, including conducting the cost-benefit analysis of using GAI platforms. Beyond a law firm’s internal assessment, under Model Rule 1.4, law firms should determine whether it is necessary to communicate with clients about their decision not to use GAI, particularly if competitors are using it. Because Model Rule 1.2(a) requires lawyers to consult clients on the means used to achieve legal objectives, should clients have a say in the use of GAI platforms if it enhances the law firm’s representation?

There is no right answer—yet. Law firms should understand the requirements of their jurisdictions and make a specific, fact-based assessment of whether to disclose to clients their reasons for choosing not to use GAI.

Considerations When Integrating GAI Platforms

If a law firm decides to integrate GAI into its practice, it must consider whether the GAI platform is secure. For instance, for GAI platforms requiring client data input, lawyers must understand how that information is processed and whether third parties can access it. The ABA Standing Committee on Ethics and Professional Responsibility's Formal Opinion 512, Generative Artificial Intelligence Tools, dated July 29, 2024, underscores the need for lawyers to evaluate risks, including potential disclosure, unauthorized access, and the adequacy of security safeguards.

Law firms use two types of GAI platforms: external and internal. External GAI platforms are AI tools operated by third-party providers, often cloud-based services such as OpenAI’s ChatGPT, Google’s Gemini, or other commercial AI programs. Internal GAI platforms, on the other hand, are AI tools developed or hosted within the law firm’s secure information technology infrastructure, meaning the data remain under the firm’s control. Each poses its own set of issues when integrated into a law firm.

When lawyers input client information into an external AI platform, the data may be processed, stored, or even used to improve the AI model, potentially exposing confidential client information. Sharing client data with an external GAI platform risks violating Model Rule 1.6(a), which prohibits revealing client information without informed consent, and Model Rule 1.6(c), which mandates reasonable efforts to prevent unauthorized disclosure.

To mitigate these risks, the Standing Committee on Ethics and Professional Responsibility cautions lawyers to assess AI platform security before inputting client data. This includes reviewing the GAI platform’s terms of use, privacy policies, and specific contract provisions, and consulting cybersecurity professionals if necessary. However, if a GAI platform retains or uses client data to inform future responses, there is a risk of exposing sensitive client information and violating the rules of ethics and professional responsibility.

The safer approach is using an AI tool restricted to the law firm’s internal network (an internal GAI platform). However, even within the firm, lawyers must ensure that unauthorized colleagues cannot access confidential client information, especially when ethical screens are in place. An ethical screen is a procedural safeguard used to prevent conflicts of interest by restricting access to confidential information by lawyers and staff members who have a conflict of interest in a particular case or client matter. Therefore, when ethical screens are required under Model Rules 1.7, 1.9, and 1.10, firms must prevent GAI platforms from exposing restricted client data to lawyers who should not have access to it.

Next, a law firm must consider whether it is necessary to inform a client before using a GAI platform and determine whether client consent is required. According to Formal Opinion 512, disclosure is required in the following circumstances:

  • A client explicitly asks whether AI was used.
  • The engagement agreement or client guidelines require transparency.
  • AI influences a significant decision, such as litigation strategy.
  • The use of AI affects the reasonableness of legal fees.
  • A client hires a lawyer based on the lawyer’s unique expertise, and undisclosed AI use may conflict with the client’s expectations.

Even when disclosure is not expressly required, informing clients about GAI platform usage may be in the best interest of all parties and can foster trust. The retention agreement is the ideal place to outline the firm’s GAI policies and incorporate client preferences.

Factoring GAI Platform Costs into Client Billing

Law firms must be transparent, fair, and reasonable in how they charge for AI-assisted services. Model Rule 1.5 states that any expense charged to a client must be reasonable. Although reasonableness, like much of the law, is a fact-specific inquiry, here are a few considerations:

  • Is the GAI platform part of the firm’s general operational infrastructure, like case management software? If so, the cost may be considered an overhead expense, making a separate charge to clients inappropriate.
  • What is the purpose of the software? If the AI platform is more than a routine software tool (e.g., a tool to check spelling) and more akin to a contract analysis service that incurs per-use costs, passing on fees to the client may be justified.
  • Any costs passed on to the client should be based on direct costs or fair market value, not an arbitrary fee.

Firms must also consider whether the GAI charge should be applied per client or per matter. The charge may become unreasonable if clients are billed multiple times for the same GAI service. Likewise, if the law firm continues to profit from the charge after recouping its investment in the GAI service, it should be transparent about this, informing clients whether the fee reflects development costs or generates additional revenue for the firm. Under ABA Formal Ethics Opinion 93-379, a firm may charge clients only direct costs plus a reasonable allocation of expenses unless otherwise agreed. Duplicate charges for the same service across multiple clients are unethical.

While GAI platforms present exciting opportunities for efficiency in legal practice, ethical billing requires firms to justify fees, avoid duplicative charges, and clearly disclose AI-related costs to clients.

Conclusion

As generative AI platforms become more embedded in the practice of law, firms must strike a delicate balance between innovation and professional responsibility. Lawyers can harness AI’s benefits while proactively addressing confidentiality risks, ensuring fair billing, and upholding their ethical and fiduciary duties to clients.

This article is based on a panel discussion, Ethics Plenary: The Ethics of Artificial Intelligence, held at the ABA’s 2025 Joint CLE Program sponsored by the Litigation Section’s Environmental & Energy, Mass Torts, and Products Liability Litigation Committees in Snowmass, Colorado, on January 23, 2025, and the forthcoming article “Legal Ethics in the Use of Artificial Intelligence: An Old Dog Learning New Tricks” by John M. Barkett.
