Confidentiality Implications and Disclosure of AI Use
Among a lawyer’s primary ethical obligations is the duty to maintain client confidentiality. Rule 1.6 of the ABA Model Rules of Professional Conduct prohibits lawyers from revealing “information relating to the representation of a client unless the client gives informed consent.” The use of AI poses the risk of “inadvertently waiving attorney-client and attorney work product privileges.”
Generative AI uses the personal information and data entered into it to refine its algorithm and “learn” to give ever more accurate answers. Gai Sher & Ariela Benchlouch, “The privacy paradox with AI,” Reuters, Oct. 31, 2023. AI raises many questions: Where is the information provided to the AI program stored, and who can access it? Can hackers access the information and sell it or use it for nefarious purposes? Does confidential or proprietary information lose its protected status when uploaded to an AI platform? “Disclosing” such information to an AI platform poses an ethical dilemma: if the information can be reverse-engineered to reveal what was disclosed to the AI program, the upload itself may constitute a waiver of confidentiality.
These questions have prompted some state bar associations to consider whether lawyers should be required to disclose their use of AI to clients and obtain their consent. The Florida Bar issued a proposed advisory opinion recommending that lawyers obtain client consent before using AI tools; it also considered whether lawyers should reduce their fees for tasks performed with AI.
Whether consent should be required, and in what form, remains undecided. Is a general statement in an engagement agreement that AI may be used sufficient? Or must lawyers obtain specific consent for each task involving AI?
Whether lawyers must disclose their use of AI to courts has been answered in the affirmative by several federal judges. In June 2023, Judge Stephen Vaden of the U.S. Court of International Trade began requiring lawyers who use AI to file a notice disclosing which AI platform they used and, for any brief or other document submitted to the court, which portions were drafted with AI. In addition to this disclosure notice, Judge Vaden requires lawyers to certify that, in using the AI technology, they did not disclose any confidential information.
Similarly, Judge Brantley Starr, a federal judge in Texas, requires lawyers to certify that, if AI was used to draft a document submitted to the court, a human verified the document’s accuracy. Judge Starr recognized that AI platforms can be helpful and powerful legal tools but warned that they are not yet developed enough to be trusted with drafting legal briefs. Further, the judge contrasted work produced by a human with work produced by a computer, pointing out that lawyers are held to a set of ethical standards and rules, while AI platforms are not.
Ensuring Accuracy in Legal Research
A major concern about AI is inaccuracy. As the technology stands today, human review is essential to ensure that AI-generated information is accurate.
In June 2023, a cautionary tale of overreliance on AI surfaced when two New York lawyers were ordered to pay a $5,000 fine after submitting a brief that included citations generated by AI. Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. June 22, 2023). The court discovered that the cases the attorneys cited were not merely inaccurate but illusory; they did not exist at all. The AI tool they used, ChatGPT, had fabricated the cases altogether to fit the questions it was being asked. While the lawyers argued that they acted in good faith, trusting too much in the new technology, the court disagreed, finding that the lawyers committed “acts of conscious avoidance” in failing to ensure the accuracy of the statements made to the court.
New York Rule of Professional Conduct 3.3 governs attorneys’ conduct before a New York tribunal. The rule states that “a lawyer shall not knowingly make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer.” Here, the lawyers argued that they did not knowingly make a false statement of law to the court in their brief. The court rejected the argument, finding that the lawyers had an independent responsibility to ensure that what they submit to a court is true and accurate. This appears to be the beginning of a trend; courts will likely hold that lawyers using AI should know the risk that it will generate inaccurate information and must always verify the accuracy of any statements made to the court.
On January 30, 2024, the Second Circuit referred to the court’s Grievance Panel an attorney who, in a reply brief, had cited a nonexistent case generated by ChatGPT. Park v. Kim, No. 22-2057 (2d Cir. Jan. 30, 2024). The court found that the attorney’s conduct fell “below the basic obligations of counsel” and violated New York Rule of Professional Conduct 3.3(a), which prohibits lawyers from knowingly making a false statement of law to a tribunal. Further, the court found that the attorney had violated Rule 11 of the Federal Rules of Civil Procedure because she did not “confirm the existence and validity of . . . the legal authorities on which [she] relied.” The court went so far as to say that while many courts and bar associations have enacted local rules to address the use of AI in the practice of law, such rules should not be necessary “to inform a licensed attorney” that she has a duty to “ensure her submissions to the Court are accurate.”
With the misuse of AI growing, the New York State Bar Association (NYSBA) created an AI task force to develop rules ensuring the ethical use of AI technology in the legal profession. David Alexander, “New York State Bar Association Task Force To Address Emerging Policy Challenges Related to Artificial Intelligence,” NYSBA News Center, July 17, 2023. The NYSBA noted that not all use of AI is harmful; it can provide significant benefits to the legal community, helping lawyers assist clients with legal questions more efficiently and conduct research more quickly.
In California, lawyers may look for guidance on the responsible use of AI to a newly added comment to Rule 1.1 of the Rules of Professional Conduct. The comment states that lawyers are required “to keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology.” It underscores a lawyer’s duty of competence and imposes a duty to understand developing AI technology before using it in practice.
Fees for Using AI
In general, AI has been touted as a time-saving legal tool when used responsibly. It can lead a lawyer directly to the answer to a research question, finding in seconds what might otherwise take hours of reading and analyzing precedent and statutes, or of scouring LexisNexis or Westlaw with different combinations of keywords to come up with the right case. It can also generate the first draft of a document that might take a human several hours to complete. That speed saves time, and thus money, for the client, because the hours billed will naturally decrease with the use of AI.
Most state bars have ethical rules regarding how lawyers may charge clients. Rule 1.5 of the ABA Model Rules of Professional Conduct prohibits lawyers from charging “an unreasonable fee or an unreasonable amount for expenses.” Reasonableness depends on “the time and labor required, the novelty and difficulty of the questions involved, and the skill requisite to perform the legal service properly.”
The use of AI might prompt clients to wonder whether they should be charged the same amount for a task completed by AI as for the same task completed by their lawyer, because presumably less professional skill is needed for research or drafting when AI does the work. It is also possible that a lawyer who personally performs work that AI could do, and bills it at the lawyer’s hourly rate, might be acting unethically by charging clients the higher rate. Brad Hise & Jenny Dao, “Ethical considerations in the use of AI,” Reuters, Oct. 2, 2023. Should tasks that could be performed by an AI tool be billed at an adjusted hourly rate, much as a lawyer’s time may be billed at a lower rate for tasks that would ordinarily be completed by a paralegal?
While courts and state bar associations have not yet weighed in on how law firms should bill clients for work involving AI, billing is likely to become a hot topic as AI use becomes commonplace.
Training in AI
The continued use of AI in the legal industry is sure to bring with it the need for lawyers to be trained to use the technology competently. Some states already offer optional continuing legal education (CLE) courses on AI-related topics. As of this article’s publication, California is the only state considering whether to require lawyers to undergo training in AI before using it in practice.
The State Bar of California’s ethics committee recommended that lawyers undergo enhanced training before using AI in their practice. Recommendations include implementing a mandatory one-hour CLE requirement on AI, having the legislature and the bar develop statutes and rules governing the use of AI, adopting rules on the use of AI in law schools and during the bar exam, and publishing a guidebook. Amy Jane Longo, Shannon Capone Kirk & Isaac Sommers, “Bar Associations Begin to Tackle AI & the Practice of Law,” Bloomberg Law, Dec. 2023.
An AI Perspective
But what do the generative AI platforms have to say for themselves about the risks of using their platforms? We asked the machine and offer a truncated version of its response, for illustrative purposes only.
Q: In light of the above digest and analysis of several potentially applicable model rules, do you think it’s too risky for a lawyer to use generative AI?
ChatGPT:
The decision for a lawyer to use generative AI should be approached with careful consideration and risk assessment. While generative AI can offer valuable assistance in legal tasks, its adoption comes with inherent challenges that need to be addressed to mitigate risks. . . . Lawyers considering the adoption of generative AI should invest in training, stay informed about advancements, and implement robust safeguards to protect the interests of their clients and uphold the standards of the legal profession. . . .
The future of legal practice may well be shaped by the synergy between legal professionals and AI tools, with ethical considerations serving as the compass guiding this evolution. In embracing generative AI, lawyers have an opportunity to enhance efficiency, provided they navigate the ethical dimensions with vigilance and a commitment to upholding the highest standards of the legal profession.