I shouldn’t have been trying to answer these questions during that exam. However, as the former co-president of both OUTLaws and the Black Law Student Association, I was involved in many conversations about the direction of legal education. I felt a responsibility to think carefully about the tricky issues they raised. In my legal ethics course, we were told AI was advancing rapidly, yet the ABA Model Rules felt outdated compared to the AI-driven changes in law firms. Knowing that the legal world may change in ways people can’t fully predict can be exciting. But it can also be terrifying.
Law School Response to AI
Certainly, law schools and law firms are aware of our new reality. The ABA’s Task Force on Artificial Intelligence recently surveyed 29 law schools and found that “more than half, or 55 percent, of the schools offer AI classes, with 62 percent starting the tech classes in the first year of law school.” Last year, I was enrolled in a course called AI and the Law, which explored areas such as autonomous vehicles, medical diagnostic algorithms, criminal sentencing, predictive policing, welfare distribution, consumer manipulation, and content moderation. Currently, I am enrolled in Digital Lawyering: Advocacy and AI, which takes a skill-building approach to advocacy in the age of artificial intelligence.
Nevertheless, uncertainty remains. Northwestern Pritzker School of Law published its generative AI (genAI) policy, which states, “Unless expressly permitted by the instructor, students are prohibited from using genAI to produce, derive, or assist in creating any materials or content that is submitted to the instructor.” It is hard to imagine a policy saying the same thing about a search engine, even though an increasingly popular way to use genAI is to search the web.
Law Firm Response to AI
According to the 2023 Wolters Kluwer Future Ready Lawyer Report, 73 percent of lawyers expect to incorporate generative AI into their work within the next year, yet 25 percent still see AI as a threat. In January 2024, The American Lawyer released findings from a three-part study of more than 30 firms, examining how they use generative AI, what policies govern it, and whether they develop tools in-house or purchase them from third-party vendors. The study revealed that “many leaders in the Am Law 100 restricted early use of generative AI to functions that don’t require the input of client-specific information.”
Firm leaders recognized that “training is as important as policies due to generative AI's propensity to hallucinate.” Firms also focus on prompt engineering and on being responsible users of genAI, said Lyndsay Capeder, chief client and innovation officer at Taft Stettinius & Hollister. Hallucinations are another area of focus: Dentons’ legal AI adoption manager, Sam Chen, noted that the firm “trains its lawyers on prompt guidance as well as recognizing hallucinations and identifying prompt errors.”
Where Are the ABA Model Rules Today?
In July 2024, the ABA issued Formal Opinion 512, its first ethics guidance on a lawyer’s use of AI. The opinion acknowledges the new world we are in and represents a significant first step toward clarity and consistency in how lawyers approach the ethical use of AI. However, more work needs to be done, especially on supervisory responsibilities.
An update to the ABA Model Rules should focus attention on Rule 5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers), Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance), and Rule 8.4(g) (Misconduct).
A Law Firm Hypothetical
Imagine, for example, a law firm adopting a genAI tool to streamline legal research and draft documents. Partner A, known for leading high-stakes employment discrimination cases, instructs First Year Associate B to rely on the AI tool to generate initial drafts of a motion for summary judgment. The AI tool pulls from vast amounts of past legal documents. However, due to flawed training data, implicit biases are reflected in its summaries and recommendations.
The tool uses gendered language to draft arguments that subtly downplay women’s claims of workplace harassment. In a racial discrimination case, the AI-generated document incorporates disproportionately harsh case law interpretations involving minority plaintiffs.
Neither the Partner nor the First Year Associate notices these biased outputs. The motions are submitted to the court, and the biases they carry affect the outcomes to the detriment of minority and female clients.
How Do the Rules Apply to the Hypothetical?
- Rule 1.1 (Competence): The firm lacked competence in understanding AI limitations and addressing bias.
- Rule 5.1 (Responsibilities of Supervisors): The supervising partner was required to set appropriate guidelines and review the AI’s work but failed to do so.
- Rule 5.3 (Nonlawyer Assistance): The firm failed to supervise the AI tool as it would a nonlawyer assistant, as the rule requires.
- Rule 8.4(g) (Misconduct): Using biased AI led to discriminatory outcomes, violating ethical rules against discrimination.
Everyone Must Know How to Use genAI Properly
There are myriad ways in which genAI could still lead us in the wrong direction. The consequences could range from a misplaced comma that changes the meaning of a contract provision to racial or gender bias in court filings. Without adequate law school education, supervisory training at all levels, and attention to detail, the impact could harm lives or cost millions in litigation. Everyone, including summer associates, must know how to use genAI properly and guard against its downsides.
Which Model Rules Can Be Updated?
In her 2018 article “Artificially Intelligent Lawyers: Updating the Model Rules of Professional Conduct in Accordance with the New Technological Era,” which I relied on for my previous paper on this topic, Katherine Medianik, managing associate at Sidley Austin LLP, discussed ways the ABA can update its Model Rules to meet the current moment.
Model Rule 5.3 mandates that supervising attorneys oversee the work of nonlawyer assistants. Medianik recommended adding “nonlawyer assistant” to Model Rule 1.0’s terminology section. The proposed definition would read: “‘Nonlawyer assistant’ refers to an individual or artificial intelligence tool, operating under a lawyer's supervision, who is qualified through education, training, or essential programming to carry out substantive legal tasks that necessitate an understanding of legal concepts.”
As she proposed, new language would be added to Model Rule 5.3: a lawyer with direct supervisory responsibility over a nonlawyer assistant, including an AI tool, must supervise, monitor, and examine the nonlawyer’s work before it is finalized. The suggested wording incorporates AI technology into the conventional definition of nonlawyer assistant and guides supervising attorneys, stressing the importance of managing AI technology as they would human nonlawyer assistants.
Finally, Model Rule 1.1’s Comment 8 (“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology [. . .]”) is the only direct note on technological competence. The Comment stands alone without a more encompassing approach to ensure lawyers update themselves on evolving legal technology.
As Medianik proposed, the ABA could recommend lawyers attend mandatory CLE training to earn “specialty credits” in technology, following the Florida and Oklahoma models she cites. This would help ensure that lawyers comply with Model Rule 1.1 duties regarding technology.
How Law Schools and the ABA Can Help
We do not need to reinvent the wheel. The Model Rules, as they exist now, can meet our current needs so long as they are tweaked around the edges. In addition, law schools would be wise to incorporate the lessons of the ABA’s Formal Opinion 512 into their curricula and further define what they mean by “use [of] any AI-generated text” in the classroom and for exam preparation.