Bar Association Guidance on Application of Ethical Rules to GenAI
Bar associations are gradually issuing guidance on how the ethical rules should be applied to the use of GenAI:
- California was the first, issuing “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law” on November 16, 2023.
- The Florida Bar issued Ethics Opinion 24-1 on January 19, 2024; and, on August 29, 2024, the Supreme Court of Florida amended comments to Rules 4-1.1, 4-1.6, 4-5.1, and 4-5.3, “adding a warning about the necessity to take care in using generative artificial intelligence.”
- New Jersey issued its “Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers” on January 24, 2024.
- The New York State Bar Association issued “Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence” on April 6, 2024, a portion of which discussed ethical considerations.
- The D.C. Bar issued Ethics Opinion 388, “Attorneys’ Use of Generative Artificial Intelligence in Client Matters,” in April 2024.
- The Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee issued Joint Formal Opinion 2024-200, “Ethical Issues Regarding the Use of Artificial Intelligence,” in May 2024.
- And on July 29, 2024, the American Bar Association issued Formal Opinion 512 on Generative Artificial Intelligence Tools, identifying ethical issues involving the use of GenAI and offering general guidance “for lawyers attempting to navigate this emerging landscape.”
While GenAI technology is new and uncharted, a consensus is forming that, from an ethical perspective, we already have the tools we need to deal with GenAI in the practice of law, and our core ethical responsibilities are unchanged. New Jersey’s Preliminary Guidelines, for example, state that “[t]he core ethical responsibilities of lawyers . . . are unchanged by the integration of AI in legal practice, as was true with the introduction of computers and the internet. AI tools must be employed with the same commitment to diligence, confidentiality, honesty, and client advocacy as traditional methods of legal practice.”
ABA Model Rule 1.1: Competence
The starting point for any ethical analysis relating to the use of new technologies is, of course, competence. Under ABA Model Rule 1.1, “[a] lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Comment 8 was added in 2012 to state that the duty of competence requires lawyers to keep abreast of “the benefits and risks associated with relevant technology.”
The ABA states in Formal Opinion 512: “To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use.” The California State Bar’s Practical Guidance also fleshes out what competence means as it relates to the use of GenAI: “Before using GenAI, a lawyer should understand to a reasonable degree how the technology works, its limitations, and the applicable terms of use and other policies governing the use and exploitation of client data by the product.” The ABA reinforces that “[t]his is not a static undertaking. Given the fast-paced evolution of GAI tools, technological competence presupposes that lawyers remain vigilant about the tools’ benefits and risks.”
The majority of state bars’ guidance makes clear that the human element remains essential to the ethical practice of law and that a lawyer cannot delegate his or her professional judgment to a GenAI tool. California states, “Overreliance on AI tools is inconsistent with the active practice of law and application of trained judgment by the lawyer,” and “[a] lawyer’s judgment cannot be delegated to GenAI and remains the lawyer’s responsibility at all times.” New York asserts, “AI programs that do not involve a human-lawyer in the loop in providing legal advice arguably violate the rules and may be considered [the unauthorized practice of law].” And Florida concurs: “First and foremost, a lawyer may not delegate to GenAI any act that could constitute the practice of law such as the negotiation of claims or any other function that requires a lawyer’s personal judgment and participation.” The ABA agrees:
While GAI may be used as a springboard or foundation for legal work—for example, by generating an analysis on which a lawyer bases legal advice, or by generating a draft from which a lawyer produces a legal document—lawyers may not abdicate their responsibilities by relying solely on a GAI tool to perform tasks that call for the exercise of professional judgment.
GenAI and the Duty of Competence
So, what does the duty of competence mean when it comes to GenAI?
Is This GenAI Tool the Right Tool for the Job?
The first level of competence relates to whether it makes sense to use a GenAI tool in the first place. The old adage “When you have a hammer, everything starts to look like a nail” is apt here. Before using a particular GenAI tool, you need to ask yourself what task you are trying to complete and whether this GenAI tool is the right tool for the job. Some pitfalls for the unwary include the following:
- Perhaps the most important thing to understand is that GenAI tools are not search engines like Lexis, Westlaw, or even Google. They generate new content in response to queries (the toy sketch following this list illustrates the point). As the D.C. Bar explains,
Lawyers should understand that GAI products are not search engines that accurately report hits on existing data in a constantly updated database. The information available to a GAI product is confined to the dataset on which the GAI has been trained. That dataset may be incomplete as to the relevant topic, out of date, or biased in some way. More fundamentally, GAI is not programmed to accurately report the content of existing information in its dataset. Instead, GAI is attempting to create new content.
- Are you looking for existing case law bearing on a particular legal issue? If so, a GenAI tool is not the appropriate tool for your job, and you are better served by turning to a search engine such as Westlaw or Lexis.
- GenAI tools generate new content that is similar, but not identical, to things they have seen before. Are you dealing with a novel issue or fact pattern? If so, then GenAI may not be the right tool for your purpose.
- GenAI is currently best at low-complexity tasks where precision is less important than creativity. Does that describe the task you want your tool to perform? If not, GenAI may not be the best option for you.
- If you are working on a litigation task, is this GenAI tool specifically trained for litigation? If not, the output it generates may not be relevant, and it will take considerably more work to get something useful.
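To build intuition for why a generative model differs from a search engine, consider the following toy sketch in Python. It is a deliberately simplistic, hypothetical illustration (the corpus, function names, and outputs are all invented for this example; real LLMs are vastly more sophisticated): a tiny next-word predictor that assembles text from statistical patterns rather than retrieving documents.

```python
import random

# Toy "language model": learn which word tends to follow which from a tiny
# corpus, then generate text by repeatedly sampling a plausible next word.
corpus = ("the court held that the motion was denied because "
          "the claim was barred and the court denied the appeal").split()

# Build a bigram table: word -> list of words observed to follow it.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, max_words=10):
    word, output = start, [start]
    for _ in range(max_words):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # sample a plausible continuation
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible outputs: "the court denied the appeal" or
# "the motion was denied because the claim was barred"
```

Each run produces fluent, lawyerly-sounding text, but nothing in the process ties that text to any real case or document. Scaled up by many orders of magnitude, the same dynamic explains both the power of GenAI tools and the hallucination risk discussed below.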
What Risks Accompany the Use of This GenAI Tool?
An additional level of competence relates to your understanding of the GenAI tool you propose to use, and in particular the risks it poses to you or your client. A plethora of federal, state, and international regulations bear on the use of GenAI, and the regulatory landscape is constantly evolving as the world’s leading nations battle for influence in the AI sector. The details of those regulations are far beyond the scope of this article, but two general themes run through most of them:
- How deeply do you understand the technology before you deploy it?
- Are you using the technology responsibly?
A sampling of the risks to be considered in the use of GenAI tools includes the following:
- Privacy. This includes not only personally identifiable information (PII) and information protected by the Health Insurance Portability and Accountability Act (HIPAA) but also confidential client information (see ABA Model Rule 1.6), all of which can have repercussions regarding the need for disclosure and/or informed consent. There are three broad components to any GenAI model, any of which could implicate confidentiality and privacy concerns: (1) the inputs, which are the data and information fed into the AI tool; (2) the prompts, which are the queries or assignments you give the AI tool; and (3) the outputs, which are what the AI tool generates in response to your queries. (The short sketch following this list of risks illustrates these three components.) A host of state, federal, and international privacy regulations (including, but not limited to, the EU General Data Protection Regulation (GDPR)) could be implicated, depending on the data at issue. Use of a GenAI tool in litigation could have repercussions for confidential information covered by protective orders as well. Before feeding any kind of confidential information or attorney work product into a GenAI tool, it is wise to understand where your data is going and the uses to which it can be put once it enters the GenAI model.
- Intellectual Property. The risks here include whether the proposed GenAI tool was trained on databases containing, for example, works that are entitled to copyright protection and that cannot be used without the author’s permission; and, if no permission was obtained, whether the use of the tool qualifies as “fair use.” In addition, what intellectual property protection (if any) will apply to the output generated by the GenAI tool, and who will own the rights to it?
- Inadvertent Inaccuracy Due to Bias and Hallucinations. As a number of lawyers have learned to their dismay and professional embarrassment (e.g., Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. 2023)), GenAI tools can generate outputs that contain wrong answers. Any number of factors could contribute to getting the wrong answer: (1) the dataset used to train the GenAI tool could be incomplete or inaccurate; (2) the prompt, or query, could be flawed, in that you did not provide sufficient information, criteria, or guardrails for the GenAI tool to understand what you were looking for; or (3) because the GenAI tool is predicting the answer rather than providing the answer (remember, it is not a search engine), it can make up facts that appear to be real.
The ABA describes this inherent risk eloquently in Formal Opinion 512:
The large language models underlying GAI tools use complex algorithms to create fluent text, yet GAI tools are only as good as their data and related infrastructure. If the quality, breadth, and sources of the underlying data on which a GAI tool is trained are limited or outdated or reflect biased content, the tool might produce unreliable, incomplete, or discriminatory results. In addition, the GAI tools lack the ability to understand the meaning of the text they generate or evaluate its context. Thus, they may combine otherwise accurate information in unexpected ways to yield false or inaccurate results. Some GAI tools are also prone to “hallucinations,” providing ostensibly plausible responses that have no basis in fact or reality.
In the case of Mata v. Avianca, counsel used ChatGPT to draft a brief, complete with cited “cases” that turned out to be fabrications. The lawyer explained that his prompts had included such directions as “provide case law,” “show me specific holdings,” “show me more cases,” and “give me some cases,” and he had not realized that the chatbot’s cited cases were made up. But the court took the view that ignorance of the error is no excuse, noting that Federal Rule of Civil Procedure 11 imposes an affirmative duty on counsel to read the cited cases and conduct a reasonable inquiry into the viability of a filing before it is signed. Had counsel done so, he would have seen that his cited cases did not exist.
The ABA’s Formal Opinion 512 reinforces this warning:
Because GAI tools are subject to mistakes, lawyers’ uncritical reliance on content created by a GAI tool can result in inaccurate legal advice to clients or misleading representations to courts and third parties. Therefore, a lawyer’s reliance on, or submission of, a GAI tool’s output—without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation as required by Model Rule 1.1.
Joint Formal Opinion 2024-200 states succinctly, “[W]hether a baseless argument is made with the assistance of AI or not is irrelevant; the lawyer is responsible.”
- Deepfakes and Other Intentional Frauds. Unfortunately, GenAI tools can be used to create outputs that are deliberately misleading. From an evidentiary perspective, deepfakes create challenges of authentication of evidence: is this exhibit what it purports to be? From an ethical perspective, deepfakes raise the issue of an attorney’s duty of candor to the tribunal (ABA Model Rule 3.3) and duty of fairness to the opposing party and counsel (ABA Model Rule 3.4).
- Vendor Contracts and Terms of Use. It is useful to think of the GenAI tool you are considering as a third party; you are outsourcing when you use it, so it is wise to treat it as such, and to conduct due diligence into both the vendor and the technology. It will not be enough to say that the GenAI tool is a “black box.” You will be expected to drill down beyond sales puffery around the GenAI tool to understand, at least to a reasonable degree, what the tool does and does not do, what protections it does and does not offer, and what your reasonable expectations of this tool might be.
- Insurance Policies. Are there any exclusions in your insurance policies that could lead to unfortunate surprises if something goes wrong with your GenAI tool?
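To make the inputs, prompts, and outputs distinction from the Privacy discussion above concrete, here is a minimal, hypothetical sketch using the OpenAI Python SDK (the model name, document text, and task are placeholders, and any comparable vendor API would raise the same issues). The key point: everything in the request leaves your environment and is governed by the vendor’s terms of use.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# (1) INPUT: client data fed into the tool. Once transmitted, its handling
# is governed by the vendor's terms of use and data-retention policies.
client_document = "Confidential settlement memo for [hypothetical matter] ..."

# (2) PROMPT: the query or assignment you give the tool.
prompt = (
    "Summarize the key settlement terms in the following memo:\n\n"
    + client_document
)

# The full request (prompt plus embedded client data) is sent to the
# vendor's servers, not processed locally.
response = client.chat.completions.create(
    model="gpt-4o",  # example model name only
    messages=[{"role": "user", "content": prompt}],
)

# (3) OUTPUT: what the tool generates in response. It must be independently
# verified before use, and it, too, may be retained by the vendor.
print(response.choices[0].message.content)
```

Before running anything like this on real client material, a lawyer would need to confirm, under the vendor contract, whether requests are retained, logged, or used for model training, which is precisely the due diligence described in the Vendor Contracts and Terms of Use discussion above.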
Any one of these questions and risks could itself be the subject of an article. This is intended only as an overview of the many considerations that lawyers should take into account when contemplating the use of a GenAI tool, either by or on behalf of a client. As the ABA’s Formal Opinion 512 states, “With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected.”