
Jurimetrics Journal

Jurimetrics: Winter 2024

Teaching Law in the Age of Generative AI

John Bliss

Summary

  • Practical suggestions for AI-integrated assignments, exercises, and course policies.
  • A look at the current state and projected future of generative AI in the legal profession.
  • Generative AI is poised to enter the mainstream of legal practice with applications like Lexis+ AI.
  • Legal educators should continually update their views as the technology develops and new insights are gained from the experience of teaching with generative AI.

Abstract: With the rise of large language models capable of passing law school exams and the Uniform Bar Exam, how should legal educators prepare their students for an age of transformative technological change? As text-generating AI is being integrated into legal research platforms and word processing software, which automate the drafting of legal documents based on human prompts, lawyers are increasingly adopting this technology as a standard tool of legal research and writing. This Article explores the implications of these developments for legal education, focusing on pedagogy, curriculum, and assessment.

The Article draws from four key perspectives relevant to the use of generative AI in legal education: a survey of law students who participated in an AI-integrated course; a national survey of law faculty; an overview of the current state and projected future of AI in the legal profession; and a summary of lessons from the global literature exploring the use of ChatGPT in different teaching contexts. These perspectives tend to support the development of an AI-integrated legal education. Yet, most surveyed law faculty, even those who strongly agreed that students should be prepared to use and critically evaluate generative AI, emphasized that they were uninformed about this technology and unsure how to proceed.

This Article provides guidance, recommending that legal educators begin teaching with emerging AI tools, while exploring how implementation might vary across the legal curriculum. These recommendations are based on an analysis of how AI-integrated teaching may affect emerging technological competencies, traditional learning goals, academic integrity, and equity among students. The Article concludes by providing practical suggestions for AI-integrated assignments, exercises, and course policies. This includes detailed examples of exercises where students learn to collaborate with generative AI, evaluate AI-produced research and writing, create their own AI tutors and debate partners, role-play with chatbots in classroom simulations, and reflect on the responsible use of generative AI in the legal profession.

Citation: John Bliss, Teaching Law in the Age of Generative AI, 64 Jurimetrics J. 111–61 (2024).

Today’s law students face an unfamiliar world. For the first time in history, a nonhuman appears to have passed, by a comfortable margin, law school exams and the Uniform Bar Exam. That nonhuman was GPT-4, a large language model released in March 2023. These exam performances are all the more impressive given that GPT-4 is a “horizontal” chatbot, which has not been fine-tuned for “vertical,” industry-specific applications such as legal analysis. Legal research platforms (including LexisNexis and Westlaw) and a growing field of legal tech start-ups have given GPT-4 and similar models access to legal data and training, enabling these models to produce well-cited legal documents of increasing quality. Although any forecasting of the future of AI is deeply uncertain, it is likely that the years to come will see significant and possibly quite dramatic growth in the capabilities of “generative AI” systems like GPT-4 that produce text and other outputs in response to human prompts. In the legal field, new law-specific applications will continue to be built on top of these increasingly sophisticated foundation models.

For law students, who will inhabit the legal profession of the coming decades, these technological advances are already raising profound questions about the changing nature of legal practice and the lawyer role. These questions include whether AI will replace or augment lawyers in different capacities and how this shifting landscape will impact access to justice and the ethical dimensions of legal practice. Some students are beginning to ask whether a traditional legal education will adequately prepare them for careers in a profession that is increasingly embracing generative AI as an everyday collaborative tool in research, writing, and other tasks.

These questions are complex and largely contingent on how AI develops in the years to come. Nevertheless, law faculty and administrators already face pressing decisions about whether to permit students to use text-generating AI and whether and how to incorporate this technology into the teaching of law. This Article provides guidance drawing from surveys of law students and law teachers, scholarship on AI and the future of the legal profession, and the global educational literature exploring the use of ChatGPT in teaching.

Part I provides an overview of the current state and projected future of generative AI in the legal profession. Legal scholars have long suggested that AI may eventually grow proficient at most or all tasks that lawyers traditionally perform. But this literature has never suggested that we were already in a world of AI tools capable of conducting legal research and drafting legal documents—and where law students might use these tools in their coursework and exams. Yet perhaps we are now, or will soon be, entering such a world. Elite law firms are flocking to legal tech start-ups with cutting-edge tools based on GPT-4 and other large language models. There is some evidence that using generative AI can improve the quality of legal work, and strong evidence that it can reduce the time required. This technology is poised to enter the mainstream of legal practice with the release of applications like Lexis+ AI, which use large language models to answer legal research questions and produce legal documents with hyperlinked citations. Although the future is uncertain, the trajectory decisively points toward increasing AI capabilities. Such advancements may suggest the need for legal educators to start teaching students how to effectively collaborate with generative AI tools and critically evaluate their outputs.

Part II details findings from a survey of students in an AI-integrated first-year Property course taught by the author of this Article. This course featured several assignments and exercises in which students assessed AI-generated content or used emerging AI tools themselves. Most students emphasized that these exercises were helpful both for learning how to use the technology and sharpening their understanding of the underlying legal materials. On their first impression of generative AI, these students tended to express a mix of fear and excitement about the implications of this technology for the legal profession, including the possibility that AI would hurt their job prospects. But these initial perceptions tended to change significantly once students gained first-hand experience with AI applications and learned more about their capabilities and limitations. The surveyed students overwhelmingly expressed a demand for further opportunities to learn with generative AI during law school.

Part III describes faculty perspectives based on a national survey of law teachers. Similar to the students, most of the surveyed faculty agreed that legal education should begin incorporating generative AI, and many expressed an interest in bringing AI into their own courses. Yet, the surveyed faculty tended to acknowledge that they were uninformed about this technology and unsure about how to design AI-integrated class exercises and assignments. Unlike the students, who generally reported that using generative AI was helpful in their legal learning, the faculty respondents emphasized grave concerns that students might “outsource” their studies to AI and thereby miss important learning opportunities. The faculty also described uncertainties about how to approach traditional assessments, such as setting AI policies for exams.

Part IV summarizes the vast educational literature on the use of ChatGPT in various teaching contexts. This literature consists of case studies of early pedagogical experiments as well as theoretical analyses applying educational theory to AI-integrated teaching applications. Collectively, these studies present a favorable view of generative AI in education, which can facilitate personalized, active, self-paced, student-centered learning experiences. However, the literature also explores challenges relating to misinformation, overreliance on AI outputs, and academic dishonesty. A central focus throughout this literature is the indispensable role of (human) instructors in guiding students’ effective and ethical use of generative AI. This Part also considers how these educational recommendations, which are largely rooted in undergraduate learning, might apply to the distinctive context of legal education.

Part V outlines a set of key factors that tend to support the integration of generative AI in law teaching, along with counterarguments on each point. These factors include the need to prepare practice-ready graduates, the importance of helping students critically evaluate AI outputs, the rising student demand to learn about AI tools, the need for equitable access to leading legal technology among students, the learning benefits associated with AI-integrated teaching, the role that tomorrow’s lawyers can play in shaping AI policy, the potential benefits for academic integrity, the potential benefits to the quality of legal instruction, and the feasibility of teaching with AI. Legal educators may fall on different points on the spectrum of AI-integration, from those who favor an immersive approach throughout the curriculum to those who favor comprehensive bans on student use of AI. This Part recommends keeping an open mind on this issue. It emphasizes the importance of revisiting these factors and updating one’s views as the technology advances and new insights are gained from the experience of teaching with emerging AI tools.

Part VI discusses the implementation of an AI-integrated legal education. This includes questions about whether to bring AI tools into the first-year curriculum, which could send an important signal to students about the value of AI technology. However, some law faculty suggest that students should first learn “by hand” during the 1L year before they supplement their legal skillset with text-generating AI. This Part also discusses how this technology might fit in doctrinal, research and writing, clinical, and experiential courses, in addition to the growing field of seminars focused on law and AI.

Part VI also offers suggestions for specific assignments and approaches to assessment (e.g., exam policies). It provides examples of exercises where students learn to collaborate with generative AI, evaluate AI-generated documents, create their own AI tutors and debate partners to assist in their studies, role-play with AI in classroom simulations, and reflect on the responsible use of generative AI in the legal profession.

I. The Current State and Projected Futures of Generative AI in the Legal Profession

“Generative AI” is a catchall term referring to artificial intelligence that produces original content in response to human prompting. Since the November 2022 release of ChatGPT, generative AI applications have shown remarkable proficiency at textual tasks, including writing essays, news copy, articles for law reviews and other journals, and novels. These same applications have also shown proficiency at writing code and creating mobile applications and websites. Text-generating systems like ChatGPT rely on large language models trained on datasets drawn from a variety of sources. Developers then fine-tune the models through different methods, including reinforcement learning with human feedback, a process of labeling and ranking AI outputs. Much of the recent progress in AI capabilities is owed to the advent of transformers and neural networks over the past decade, as well as earlier AI developments and theory dating back nearly a century.
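
To make the prompt-response loop concrete, the following minimal sketch shows how a developer might query a large language model of the kind described above through a hosted API. It is illustrative only: the model identifier, prompt wording, and settings are assumptions rather than details drawn from this Article, and it presumes the openai Python package and an API key.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A single prompt-response exchange. The system message steers the model's
# behavior; the user message carries the actual legal question.
response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    temperature=0.2,  # lower values favor conservative, repeatable answers
    messages=[
        {"role": "system",
         "content": "You are a careful legal research assistant. "
                    "Flag any authority you are not certain exists."},
        {"role": "user",
         "content": "Briefly state the elements of a private nuisance claim "
                    "and apply them to a neighbor's floodlight aimed at a "
                    "bedroom window."},
    ],
)

print(response.choices[0].message.content)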

Generative AI applications are rapidly proliferating in the legal profession. A recent survey found that seventy-three percent of lawyers plan to incorporate these tools in their practice over the next year. Legal tech start-ups with generative AI applications have claimed waitlists of thousands of law firms. One of these start-ups, Harvey, aspires to build “legal AGI” (legal artificial general intelligence) and currently uses GPT-4 to conduct legal research and analysis. Some firms have already deployed tools like Harvey in their practices and their training of associates. LexisNexis has offered a limited number of law firms access to a generative AI system (Lexis+ AI) that answers legal research questions and produces legal documents with hyperlinked citations. Westlaw is developing similar technology, including its integration of Casetext’s Co-Counsel, which uses GPT-4 to answer legal research questions at, according to Casetext, roughly the quality of an entry-level associate. In addition to these law-specific applications, lawyers are using ChatGPT, which can be effective at some legal tasks. When provided with legal source material (e.g., cases, statutes, and class notes) and model documents (e.g., a sample memo or exam answer), it appears that GPT-4 can outperform most law students on exams. A recent study showed that using GPT-4 significantly reduced the time required to complete a range of real-world legal tasks.

It is unknown and likely unknowable whether generative AI applications will plateau at the level of competent legal assistants or whether they will ascend closer to full-fledged robot lawyers. The most transformative AI advances in the near future might come from the ongoing efforts to provide existing large language models with legal data, training, and guardrails. These emerging tools will almost certainly go beyond what AI systems have already done for lawyers in the past, for example, the systems used in technology-assisted review, e-discovery, and due diligence. Some commentators have suggested that the advent of generative AI may mark the “beginning of the revolution” for the legal profession, as machines increasingly carry lawyers’ heavily linguistic workload. Generative AI systems arguably show their greatest proficiency in the realm of text. As commentators have noted, this would seem to overlap with the work of lawyers, who are “word merchants” within a field where “language is the coin of the realm” and “every road leads to a document.” Two recent studies examining how different occupations might be disrupted by generative AI place the legal industry at or very near the top of the list.

Legal scholars have long contemplated whether AI will one day be capable of replacing lawyers in different capacities, and whether this replacement would be a good thing. Much of this literature suggests that we should not rule out transformative futures of superintelligent legal AI, but neither should we overstate certainty about such advances in the near term. Some scholars have contemplated a world of robot lawyers, noting that if AI grows capable of performing legal tasks at the level of a human lawyer, such systems may also be capable of improving well beyond the human level, approaching a “legal singularity.” But other scholars argue that AI may be limited to only the most routine and repetitive legal work, since AI systems have, at least in the past, lacked problem-solving ability and an appreciation of new issues and the indeterminacy of legal standards.

Even if AI eventually grows capable of competently performing most legal work, some scholars suggest that the organized bar might slow or prevent the replacement of human lawyers. The ABA has formed a task force on generative AI, and a number of state bar associations have formed committees, issued ethical guidelines, and proposed ethics opinions. Already lawyers have run into ethical issues, including lawyers who submitted court filings with false citations based on AI hallucinations. It is likely that many lawyers have used ChatGPT in the course of their legal work, and thus may have inputted confidential information into an AI system that does not offer privacy protections. These ethical concerns, combined with lawyers’ long history of jealously guarding their monopoly over legal services, may suggest that bar regulators will attempt to prevent the rise of robot lawyering. In Richard Susskind’s terms, lawyers who oppose legal AI may tend to “enclose” rather than “liberate” legal expertise. Moreover, lawyers might be slow to adopt AI because of the legal profession’s conservative orientation and general skepticism toward emerging technology.

Separate from the question of AI replacing the work of human lawyers, perhaps the more pressing and practical issue for lawyers today is how they can collaborate with AI to enhance their efficiency. As already noted, recent research shows that lawyers can complete some tasks much more quickly when using generative AI. Some commentators suggest that the legal profession is approaching a point when lawyers will need to use generative AI to satisfy their duties of competence and diligence, as these tools will make lawyers more effective and efficient.

What are the implications of this emerging technology for legal education? Given the growing significance of generative AI in legal practice, it may be time for at least some adaptation in legal education. Reasonable minds can disagree about how this adaptation should look—as explored through the surveys and recommendations in the remainder of this Article. But, even if one only considers the state of the technology as reviewed in this Part, there may be a compelling case for AI-integrated legal education. Students should be equipped with an understanding of both the efficiencies and ethical pitfalls presented by AI tools that are poised to become an everyday part of legal work. More generally, students are the future of the legal profession. They are the lawyers who will help determine the proper role of generative AI in legal practice, including questions of legal ethics, automated legal services, and the boundaries of the professional monopoly. Although the future is uncertain, students should be prepared to enter a profession where the use of generative AI is already widespread and the trajectory is clearly toward greater legal AI capabilities.

II. Student Perspectives: Reflections on an AI-Integrated Course

This Part describes the perspectives of students who participated in an AI-integrated first-year Property course taught by the author of this Article. Section A describes the class exercises and Section B reports student responses to a survey following these exercises. It is worth noting that the integration of generative AI is not unique to this course, as other U.S. legal educators have begun to adopt this technology in their teaching. The faculty survey discussed in Part III offers several examples of courses where students assessed ChatGPT outputs, most commonly in upper-level seminars focused on technology. What is perhaps uncommon about the pedagogical experiment described in this Part is that it took place in a first-year doctrinal course and the students not only commented on AI outputs but also used generative AI tools to complete their assignments. Although it is not discussed here, the author of this Article also used a generative AI application to demonstrate a chatbot legal aid interaction for a group of students in the University of Denver’s 1L Public Good Program.

A. An Early Pedagogical Experiment with Generative AI in a 1L Course

This Part focuses on a Property course taught at the University of Denver Sturm College of Law in the Spring 2023 semester. The AI-based materials had low stakes, with most students receiving full credit, and made up only a small fraction of class time and assignments. Students were not permitted to use AI tools for the midterm or the final exam. The exercises involving generative AI were designed to serve a traditional pedagogical purpose, aiming to teach doctrine and skills of legal analysis and writing, while also helping students understand and assess the capabilities of emerging legal AI tools.

The first assignment asked students to assess ChatGPT’s answers to a short-essay quiz on trespass law. Students were asked to identify errors and omissions in the AI responses. The instructions read as follows:

This quiz contains the questions from the first quiz administered to last year’s property class. I have now asked each of these questions to ChatGPT. For each of the AI answers, very briefly describe what (if anything) you think ChatGPT is missing.

The quiz contained several questions paired with ChatGPT responses. An example is reproduced below:

#2. Using Magadini as precedent, briefly apply the elements of the necessity defense to Stanley’s situation. Assume that there is no statute that forbids the necessity defense.

ChatGPT’s answer:

The elements of the necessity defense as established in Commonwealth v. Magadini are:

  • A defendant must prove that the harm they sought to prevent was greater than the harm caused by their illegal action.
  • A defendant must prove that they reasonably believed that the conduct was necessary to prevent the harm.
  • The defendant must prove that no legal alternative was available to prevent the harm.

Applying these elements to Stanley’s situation:

  • Stanley may be able to prove that the harm of starvation was greater than the harm caused by entering the store without permission.
  • He may be able to prove that he reasonably believed that entering the store was necessary to prevent starvation.
  • He may be able to prove that no legal alternative was available to him as the store was the only one within 5 miles.

It’s important to note that the application of the necessity defense will depend on the specific laws of the state and the specific facts of the case.

Students received credit if they identified that ChatGPT omitted the “clear and imminent danger” standard. Students would also receive credit for noting that the ChatGPT answer, in requiring the defendant to prove that “no legal alternative was available,” omits the word “reasonably,” as in “no reasonably available legal alternative.”

The next assignment was a short-essay quiz that asked students to submit verbatim answers they obtained from ChatGPT or similar AI applications. Students were encouraged to iterate with the chatbot to improve the essay it produced. The instructions read, in part: “Your assignment is to make the [generative AI] give the best answer.” Students revised their prompts at least five times, while checking their notes and readings to identify inaccuracies in the AI outputs. Students had the option to complete an alternative to this assignment, and they were informed of ethical concerns about OpenAI and the process of creating large language models (e.g., issues of privacy and intellectual property and charges of exploiting workers who contributed to reinforcement learning with human feedback). None of the students (at least openly) opted out of using AI tools for this assignment. For another writing assignment later in the semester, students were given the option to use AI to assist in their writing and many students reported that they took this option.
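
The workflow behind this assignment can be pictured as a short refinement loop. The sketch below is only an illustration, assuming the openai Python package and a hypothetical sequence of prompt drafts; in the actual exercise, the students themselves judged each answer against their notes and readings and rewrote the next prompt accordingly.

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical successive drafts of one quiz question, each adding the
# context and doctrinal detail the previous answer was missing.
drafts = [
    "Explain the necessity defense to trespass.",
    "Explain the necessity defense to trespass under Commonwealth v. "
    "Magadini, listing each element.",
    "Using Commonwealth v. Magadini, list each element of the necessity "
    "defense, including the 'clear and imminent danger' standard and the "
    "requirement of no reasonably available legal alternative, and apply "
    "each element to Stanley's situation.",
]

for draft in drafts:
    print("PROMPT:", draft)
    print("ANSWER:", ask(draft))
    # The student, not the code, evaluates each answer against the casebook
    # and decides how to revise the next prompt.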

The instructor also brought generative AI into the classroom for brief simulation exercises, where, for example, the AI played a client, and the class collectively played a lawyer asking the client about the factual basis for the client’s nuisance claim. At the start of this dialogue, the instructor prompted the AI system in a way that explained the AI’s role and asked the AI to adopt an “informal and slightly skeptical and humorous tone without using swear words or inappropriate language.” Designing this exercise, including this introductory prompt, took the instructor roughly two hours of trial and error. As reflected in the below excerpts, the students asked their hypothetical client discerning questions reflecting the key points of the doctrine they had been studying, and the AI client responded with plausible factual details. The students’ questions reflect factors summarized in their property casebook relating to the character of the harm (affecting health rather than aesthetic preferences), the social value of the use, the cost and fairness to either party to lessen the harm, whether the plaintiff is unduly sensitive, and whether the plaintiff came to the nuisance.

Dialogue:
Lawyer: Have you had any physical reactions to the light such as not being able to sleep?
Client: Yeah, it's like trying to sleep with the sun shining in my face. Insomnia has become my middle name.
Lawyer: Do you have blinds in your windows?
Client: Of course I do, but it's like trying to block out the sun with tissue paper. The light still seeps through.
Lawyer: Has the light actually provided security in the area?
Client: I highly doubt it, last time I checked the light is pointed straight at my bedroom window. No criminal is going to be deterred by that.
Lawyer: Do you feel like your privacy is violated?
Client: I feel like I'm living in a fishbowl. I can't even walk around my own house without feeling like I’m being watched.
Lawyer: Do you need perfect dark for sleep?
Client: I don’t need perfect darkness but this light is like sleeping next to a lighthouse.
Lawyer: Was the light there when you moved in?
Client: Nope, it was installed right after I moved in. Coincidence? I think not.
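
A classroom role-play of this kind can be staged with a short script. The following minimal sketch, again assuming the openai Python package, shows one way the setup might look; the system message paraphrases the instructor’s introductory prompt quoted above, while the model identifier and loop structure are assumptions.

from openai import OpenAI

client = OpenAI()

# The running message history keeps the role-play coherent across turns.
history = [
    {"role": "system",
     "content": ("You are playing a client in a law school simulation. "
                 "You believe a neighbor's floodlight is a private "
                 "nuisance. Answer the class's questions with plausible "
                 "factual detail, in an informal and slightly skeptical "
                 "and humorous tone, without using swear words or "
                 "inappropriate language.")},
]

while True:
    question = input("Lawyer: ")
    if not question:  # an empty line ends the simulation
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Client:", answer)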

B. Survey Findings

Below is a summary of survey responses from students who experienced this AI-integrated Property class. Participation in the survey was voluntary and anonymous. In keeping with the ethical practices of research involving one’s own students, the instructor informed students that they should not feel any obligation to complete the survey and that the instructor would not know whether they completed the survey. Roughly one fourth of the class (20 out of 78 students) completed the survey. This would generally be considered a good response rate for social scientific surveys, although it raises important questions about nonresponse bias (i.e., how the nonresponders differ from the responders). The survey elicited responses across a wide spectrum, from those who were very satisfied with the AI experiments to those who were very dissatisfied. Yet, it is important to note that this survey should not serve as the basis for statistical inferences about law students in general. The primary purpose of the survey was to collect student voices through open-ended questions and to report multiple-choice questions that provide an overall picture of this sample.

Most of the surveyed students (80%) had not used generative AI systems like ChatGPT before the Spring 2023 semester. This introduction to text-generating AI was something that we, as a class, largely experienced together at a remarkable moment of technological advancement. In a sense, the AI systems were improving in their legal abilities just as the students were improving. As already noted, at the beginning of the semester, ChatGPT (powered by GPT-3.5) had not yet surpassed the 1st percentile on bar exam performance. By the end of the semester, an improved version of ChatGPT (powered by GPT-4) had reached the 68th percentile. Many of the students first encountered generative AI systems through the instructor’s brief demonstration on the first day of class. In response to the survey question about their first impressions, the students described ChatGPT with amazement, as “scary,” “freaky,” “exciting,” and “unsettling,” as they were “surprised by what it knew and how it operated.” One student reported feeling “shock at how well the AI systems were able to answer complex questions.” Another student described generative AI as “half exciting, half terrifying!” This astonishment extended to students’ first impressions when they used the AI themselves. As one student noted: “I downloaded ChatGPT for the first time . . . and was blown away.”

After gaining more experience with these AI systems through our quizzes, and after some brief class discussions of current AI limitations, students reported that they grew far less impressed with the AI capabilities. As one student explained, “I am not as awestruck as I was. I have been slightly disappointed.” Another student was initially impressed with ChatGPT but “quickly learned its limits.” Some students mentioned that their initial awe was mixed with fear about AI replacing lawyers and having untold effects on society, but learning about ChatGPT’s limitations ameliorated these concerns. For example, one student noted: “While I was intimidated by their capabilities at first, we have also discussed their flaws aplenty which made me feel better over time.”

This transition from being “blown away” to being “slightly disappointed” was, for many students, followed by a third step in the timeline when they learned the “art and skill” of prompting the system to get better answers. These students found that the AI served as a powerful legal assistant even if it was not quite the omniscient legal expert it had seemed on first impression.

On the question of whether AI could threaten students’ future employment prospects in the legal profession, students had a mix of concerns. One student suggested that AI systems may become “so advanced that people will just be able to search their own problems and lawyers will be obsolete.” Others worried that AI’s writing quality was improving to the point that these systems would soon be able to “write briefs without assistance,” leaving far less room for the labor of human lawyers. Some expressed concern that the work of junior associates would soon be largely taken over by AI, which would limit the opportunities to obtain entry-level jobs. However, as already noted, these concerns somewhat abated as students used generative AI themselves and saw its limitations. As one student noted, the AI seemed to write in a “robotic” fashion. Others noted that some aspects of legal practice might not be easily replaced by even highly advanced legal AI, as these systems “cannot go to court or replace human connection necessary for client interactions.” While some students suggested the possibility that robot legal advisors would undermine important aspects of the lawyer-client relationship, other students saw automated legal advising as a major opportunity to enhance access to justice. These sophisticated reflections by the students mirror the professional responsibility literature, where scholars have contemplated AI and the future of the legal profession.

On the question of how AI might serve as an aid for human lawyers, some students were hopeful for more efficient and rewarding legal careers, where AI applications “help craft documents and save time.” When reflecting on their use of AI for class assignments, students emphasized that the AI was most helpful as a sounding board and writing assistant, a source of endless creative suggestions. One student described the AI as a “resource to bounce ideas off of when there are limited resources for collaboration” and as “another brain to help construct new thoughts that may have been overlooked.” Another student expressed hope that AI would enhance the quality of legal services, where AI systems might “improve a lawyer's writing and even their overall ability to help their client.” If AI is able to take over some of the “boring brainless mundane tasks,” as one student noted, this might lead to more “fun” legal careers with a focus on the “intellectually challenging aspects.” These sentiments echo the literature on “tech-enhanced lawyers,” who might be more efficient and more satisfied with their careers.

 

Figure 1. Student Survey

As represented in Figure 1, most of the surveyed students responded favorably to the integration of AI in their Property course. Most found that the use of AI was helpful for their learning (70%). Most reported that it was useful to have brief in-class discussions about emerging AI technology (80%). And most agreed that “all law students should be aware of and learn to use the kind of AI systems we used this semester” (80%). Students noted that the AI-based assignments forced them to be “more precise” in their legal analysis, where the AI raised novel arguments, counterarguments, and interpretations of law and facts, or where the student had to carefully rephrase questions and parse the AI’s responses in order to improve the AI output. Students emphasized the creativity of the AI—a perhaps surprising theme given that creativity is often thought to be a quintessentially human domain—where it helped with “brainstorming how to approach a problem,” “brainstorming creative solutions for issue-spotting,” and “generating ideas.”

The dissenting views here are instructive as well. Some students were frustrated by using these AI tools for assignments because they felt the technology was “not there yet,” such that it required too much effort to obtain quality outputs. Other students expressed the opposite view, noting that the AI system exceeded the students’ own legal abilities, which seemed to dishearten them and de-motivate their efforts. Two respondents expressed the concern that AI may function as a “crutch,” where students rely on the AI to do work for them and fail to learn how to reach an answer themselves.

In sum, these students are already thinking deeply about AI and the future of the legal profession, echoing themes explored by legal scholars. They are pondering how AI might grow in legal capabilities, possibly rendering lawyers “obsolete” or perhaps providing a powerful aid in legal practice and expanding access to justice. Students revealed that generative AI tools can assist in their legal learning, although some dissented from this view. Most students voiced a demand for more AI integration in their legal studies. They generally seemed eager for an education that prepares them for the legal world of the near future, which will be the legal world of their careers. One student explained: “I think it's important that law schools introduce AI to students, even if the ramifications have not been fully explored yet.” Another argued that forbidding generative AI tools in law school would be “stubborn and old fashion[ed]” and a sign that law schools are failing to “move forward with the times.” Some students emphasized their fear of being left behind in a rapidly changing legal profession. As one student concluded: “[I]t’s not that AI will be replacing lawyers, but it’s that lawyers who use AI will replace lawyers who don’t.”

III. Faculty Perspectives: A National Survey of Law Teachers

The faculty survey asked a national sample of law teachers whether and how generative AI might be integrated in legal education, as well as questions concerning the future of AI and the legal profession. The 149 responding faculty members were recruited through faculty listservs. This approach is meant to produce a relatively large sample size in order to collect a variety of faculty voices. But it does not yield a clear response rate, and so the findings reported here are limited in their generalizability. The participants represented a mix of teaching areas, with roughly half teaching doctrinal courses (49%), and a significant number teaching in research and writing (19%) and clinical programs (13%). Most respondents were not particularly tech oriented, with seventy-four percent of the sample indicating that they were only “slightly” or “moderately” familiar with emerging AI technology.

When asked about the future of AI in the legal profession, faculty respondents tended to anticipate that generative AI would soon grow pervasive in legal practice. These respondents estimated, on average, a seventy-one percent chance that within the next decade “the majority of U.S. lawyers will use AI systems to assist them with the majority of their professional tasks.” These respondents were less confident about the rise of “robot lawyers,” AI that reaches human-level proficiency at legal tasks. Respondents estimated, on average, a thirty-two percent probability that AI would achieve human-level capabilities over the next decade. On the one hand, a thirty-two percent estimate suggests substantial doubt about the rise of robot lawyers. On the other hand, these faculty are far from dismissive of this radically transformative future. It is striking that the average response suggests a nearly one-in-three chance that human lawyers will be joined by proficient robot lawyers within just the next decade—relatively early in the careers of today’s law students. Moreover, twenty-three percent of respondents believed that this impending arrival of human-level legal AI is more likely than not. Only 5.5 percent of respondents entirely dismissed the possibility of robot lawyers in the next decade by offering a zero percent probability.

In response to open-ended questions on the future of AI, the faculty emphasized their uncertainty. As one respondent explained: “I just don’t know where this [will] go but there is no doubt that it will come at us faster than we can react.” Given this uncertainty, faculty emphasized the need for a “balanced response” such that law schools neither “overreact” nor “ignore AI’s influence.” As one respondent noted, legal educators must “thread the needle between dismissiveness and hype.” Some faculty doubted whether powerful AI systems would be able to perform much of the work of lawyers, because AI lacks the “wisdom” and “inescapably human” dimension of lawyering. And some expressed concerns about AI replacing lawyers in ways that spread legal misinformation and diminish the quality of legal services.

 

Figure 2. Faculty Responses on AI in Legal Education

“How important do you think it is for law schools to offer opportunities for students to use text-generating AI systems in their coursework?” (Percentage of responses)

“How important do you think it is for law schools to offer courses focusing on emerging AI?” (Percentage of responses)

 

In light of their general sense that powerful AI is coming to the legal profession, and in spite of uncertainty about whether this will be a good thing, most faculty suggested that law schools should begin integrating AI into the legal curriculum. As represented in Figure 2, a majority of respondents agreed that it is important for “law schools to offer opportunities for students to use text-generating AI systems in their coursework.” Consistent with this finding, most faculty respondents also agreed that it is important for law schools to offer courses focusing on emerging AI.

In their open-ended responses, some faculty suggested that we “wait and see,” as the current moment of “exploding” technological innovation may “settle in a few years.” However, most faculty respondents expressed urgency for legal educators to “adapt” and “start teaching AI appropriately now,” noting that generative AI “is here, and we cannot ignore it,” and that we are already “beyond the point of wait and see.” As one respondent summed up the point: “don't wait. get going.” For some faculty, this urgency is driven by the concern that law firms are already moving quickly to adopt generative AI tools and these firms will “want associates to know how to use AI.” Thus, students should learn to use emerging tools so that they are “practice ready.” As one faculty respondent explained, if AI is “coming to the profession” then it should be “coming to the law school classroom.” Some faculty tied their sense of urgency to the potential for automated legal aid and the role that students could play in exploring “how AI could help bridge the justice gap.”

Some of these comments were framed as a concern about leaving students behind the times. As one respondent noted, “Students should learn the technology, just as we wanted them to learn Lexis and Westlaw, rather than flipping through paper copy books.” Some respondents explained that students are already using the technology enough that it would be “silly for us to say ‘don’t use it.’” However, others suggested that students are woefully unfamiliar with emerging AI, and therefore law schools should “make sure that students understand what [emerging AI] really is, how it works, and how they can keep up with it.”

One of the most salient themes among faculty respondents was the need for students to learn the limitations and risks of using legal AI tools. Some faculty worried that students’ naïve use of these tools could lead to substandard work product and risks of professional discipline. The key, as framed by one respondent, is that students “need to learn what AI is best at and what it is not good at.” Thus, students should appreciate how AI is helpful in, for example, “generating ideas, identifying sources, synthesizing text,” while also learning that AI-generated documents should not be trusted without a careful validation process. The student must “guide, verify, and craft” AI outputs, and should not “just cut and paste without understanding.”

Although most of the surveyed faculty agreed that legal education should incorporate generative AI, many of these respondents also raised concerns that AI might “hinder” the teaching of law and “compromise the learning process.” Some worried that students would become “heavily reliant on AI” rather than developing their own skills of legal writing, research, and analysis. As one faculty respondent explained, it is vital that “students learn the hard way,” and that they master basic legal skills before they begin to use AI as a “supplement.” Otherwise, students might use AI as a “shortcut” or a “crutch.” Some faculty described these concerns as part of a long-running trend where students turn in assignments based on the “first result from Google, Westlaw, [or] Lexis.” As one respondent concluded, “I teach them writing, not use of a program that writes for them.”

This faculty emphasis on learning impediments contrasts sharply with the surveyed students, most of whom found generative AI helpful for their legal learning. The faculty perspective may be well founded, given their greater pedagogical expertise and experience. However, the students may have greater insight based on their direct experience with AI-integrated teaching; the surveyed faculty generally had not used ChatGPT or other AI tools in their teaching.

The surveyed faculty expressed a wide range of ideas about how generative AI could be useful in class exercises, class preparation, exam writing, and providing feedback and assessment. For example, a clinical instructor described plans to have students use AI applications to “help draft routine legal documents, conduct legal research, or summarize lengthy legal texts.” But these ideas had not yet been put into practice. The faculty overwhelmingly emphasized that they were uninformed about emerging AI and unsure about how to proceed with an AI-integrated curriculum. Their responses often began with statements like “I am still exploring [but] I am not at all sure” and “[I am] on the fence” or “I don’t know enough.” Many acknowledged that they are just beginning to consider generative AI and are “not informed enough about the technology” and “need to have a better understanding” before changing their approach to teaching.

Some faculty acknowledged that transformative legal AI capabilities may be on the horizon but suggested that, rather than teaching with AI, law schools should “pivot” toward the “soft skills” that AI is less likely to replace. This was framed by one respondent as a recommendation for a “more human-centric, skills-based training,” emphasizing client interaction, negotiation, problem-solving, strategic advising, and the ability to analyze complex fact patterns with indeterminate legal standards—tasks that “AI will not be able to do.” Another participant suggested that law schools should convey to students the notion that legal practice is a human endeavor and lawyers are more than “memo-producers and issue spotters.” This respondent noted that medical fields and other professions are already ahead of legal education in developing “human-centered” and “practical” education.

The faculty seemed especially concerned with exam policies and the use of text-generating AI in law school assignments. When asked about this topic, the top response (39%) was that faculty had not yet established any policy as of the Spring 2023 semester. Of those who had established a policy, the most common approach was to forbid text-generating AI (23%). Only fourteen percent had a policy permitting the use of text-generating AI. Although some respondents noted that they were required to follow an institutional policy about generative AI, most were allowed to set their own policies. Some permitted text-generating AI in limited ways, such as allowing it to be used for quizzes but not for final exams. Others permitted AI tools if they were cited and attributed.

In sum, the surveyed faculty tended to predict that transformative change is coming to the legal profession amid advances in generative AI. Most surveyed faculty and students agreed that law schools should begin providing an AI-integrated curriculum. Students will need to acquire new technological competencies, including an appreciation of the risks and limitations of emerging legal AI tools. Nevertheless, faculty diverged from students in their concern about traditional legal learning goals. Most students emphasized that using generative AI applications aided in their legal learning, while the faculty tended to worry that students would rely too heavily on AI and thus fail to develop foundational legal skills. Many of the faculty respondents acknowledged that they were uninformed about generative AI and deeply uncertain about how law schools should adapt.

IV. Educational Literature on Teaching with ChatGPT

Since ChatGPT’s November 2022 release, there has been an explosion of educational literature on the use of generative AI in teaching. In the English language, thousands of articles have been published on this topic from around the globe. Already there are systematic reviews of this literature. Much of this work is theoretical, but the numerous empirical studies of early adopters examine how students and faculty perceive AI-integrated teaching. In general, this literature paints a positive picture of ChatGPT’s potential to aid in student learning, suggesting that the personalized nature of chatbot interactions can foster critical thinking, active learning, and a new era of pedagogical innovation. However, these benefits are tempered by concerns about potential overreliance on AI, misinformation, and academic integrity. This literature almost unanimously recommends that instructors work with ChatGPT, rather than suggesting a world where emerging AI replaces human teachers.

The primary educational advantage highlighted in this literature is that ChatGPT enables a more personalized, self-paced, and self-directed educational experience with instant feedback tailored to individual learning styles.

Chatbots present a sharp contrast to traditional classroom environments, especially large classrooms, where instructors have a limited capacity to provide individualized attention. For example, students using ChatGPT can ask their own follow-up questions and take a deeper dive into topics that spark their interest or clear up aspects they find confusing. When students use ChatGPT to assist in writing assignments, they can receive immediate responses to substantive questions as well as feedback on their writing.

Much of this literature explores how these self-directed learning experiences are supported by educational theory, including the “student-centered learning” model, which aims to empower students to control their own learning process. Interacting with a chatbot engages students in “active learning,” in contrast to the passivity that students experience in many classroom settings. Instructors can guide students in the effective use of ChatGPT by emphasizing the quality of their inquiries and the importance of validating AI outputs. When ChatGPT is brought into the classroom for full-class exercises, such as simulations, role-playing dialogues, and group analysis of exemplar writing, researchers note that this can entertain students in ways that spark engagement and curiosity. Some of this literature looks to a potential future of student-centered learning where generative AI tools provide tailored assignments and feedback to each student based on the student’s prior writing, learning style, and individualized learning goals. This technology could be especially beneficial for neurodiverse students and those who lack fluency in the language of instruction.

When ChatGPT is used effectively, the emerging educational studies suggest that it can enhance knowledge retention and the understanding of complex concepts. Thus, teaching with generative AI might help address the long-running concern that students too often acquire a superficial understanding of material, even as they give the impression that their understanding is more comprehensive and durable—that is, “the illusion of explanatory depth.” To attain a deeper level of comprehension, students must engage in “interleaving,” which involves revisiting previously covered material in different ways over time, and “transfer,” the application of knowledge to new contexts. ChatGPT can facilitate these educational strategies by generating a variety of examples that review past lessons and ask students to explore concepts in analogous contexts. Provided that the AI’s responses are sufficiently accurate, these exercises can help clarify students’ conceptual understandings.
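
To illustrate, a prompt along the following lines could put interleaving and transfer into practice. This is a hypothetical template, not one drawn from the literature reviewed here; the topic list is an assumption, and the resulting text could be pasted into ChatGPT directly or sent through an API.

# A hypothetical prompt template for interleaved review and transfer practice.
INTERLEAVING_PROMPT = """\
We previously covered these topics: {topics}.
Write three short review questions that mix these topics together, each set
in a fact pattern different from any we used in class. After each question,
give a model answer of no more than four sentences."""

print(INTERLEAVING_PROMPT.format(
    topics="trespass, the necessity defense, and private nuisance"))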

Based on these optimistic findings and theoretical arguments about the use of ChatGPT in education, this literature suggests that instructors who teach with generative AI can pivot toward higher-order learning objectives. For example, instructors might spend less time teaching the mechanics of writing, such as grammar, paragraph structure, and citation, and instead concentrate on fostering critical thinking, persuasiveness, and the development of engaged, informed citizens. The ubiquitous historical analogy is to the introduction of calculators (along with analogies to the advent of word processors, internet search, Wikipedia, and other technological developments). For instance, calculators were initially prohibited by many educators but later embraced under the notion that students should focus less on arithmetic to accelerate their learning in higher-level mathematics and real-world applications in sciences and engineering. By analogy, encouraging students to use generative AI in the writing process may help them focus on more advanced materials and applications.

Many articles acknowledge the challenges in evaluating student work when students use generative AI heavily for writing and other assignments. Instructors may find it difficult to determine the extent to which a student’s work reflects the student’s understanding and ability. To address this issue in the context of writing assignments, instructors can place a greater emphasis on assessing the writing process rather than focusing exclusively on the final product. Thus, students can be asked to detail the steps they took in interaction with generative AI throughout the process of outlining and drafting, along with the final writing product. Additionally, some articles recommend a shift toward in-person assessment through oral debate, presentations, experiential tasks, and collaborative group work. These methods could enable instructors to more effectively assess students’ comprehension and their ability to transfer their knowledge to new contexts, offering a more accurate perspective on student performance than a paper produced with AI assistance.

This body of literature, while generally optimistic about the incorporation of ChatGPT in educational practices, also underscores the importance of cautious implementation. Some recent articles express concerns about inhibiting critical thinking, especially where students use ChatGPT in a way that “simplifies the acquisition of answers or information.” To address this, researchers recommend instruction in “critical AI literacy,” so students are taught to scrutinize the validity and quality of AI outputs. Some recommend limiting students’ use of generative AI to preliminary stages in assignments, such as brainstorming and exploring different viewpoints, while discouraging its use in a manner that seems to produce “generative answers.” This literature also places a great emphasis on upholding academic honesty, in light of the dramatic rise in students submitting AI-generated work when they are not permitted to do so.

Although many educators may have reservations about permitting the use of ChatGPT, banning it could prove impractical. Students are increasingly incorporating generative AI into various academic tasks, such as automatic notetaking, creating tables and charts, receiving writing feedback, brainstorming, preparing for class, and simplifying explanations. Large language models are already integrated in essential tools for student work, such as word processing software and internet search. The prevailing scholarly view suggests that this pervasive technology cannot be ignored. Rather than shunning generative AI, much of the literature advocates for a new era of pedagogical innovation with a focus on optimizing the learning environment.

These insights from educational literature might have limited application in the context of law teaching. Legal education in the U.S. is highly distinctive in its focus and methodology, aiming to instill professional training, identity formation, and a particular form of reasoning, that is, learning to think like a lawyer. In doctrinal courses, students generally read cases, attend large classes, participate in Socratic cold calling, and are assessed largely by scenario-based essay exams. In some respects, these characteristics distinguish legal education from the undergraduate learning environments, which are the main focus of the educational literature reviewed above. It may be the case that generative AI is especially useful at lower levels of education, such as high school and undergraduate courses. However, it may be less effective at higher levels of education, such as graduate programs, which deal with more complex information and require more sophisticated reasoning skills. Recall that the research on the use of GPT-4 in law exams shows strong performance on multiple choice questions but weaker performance on essays and analysis of complex fact patterns.

Yet, it is possible that this notion of legal exceptionalism is overstated. Legal education places a strong emphasis on developing writing abilities, which is a central focus in the literature on teaching with ChatGPT. Moreover, generative AI’s legal capabilities are advancing, which may help reduce concerns about errors and low-quality legal outputs.

V. Recommendations: The Case for AI-Integrated Legal Education

The perspectives reviewed in Parts I through IV collectively point toward the same conclusion: the time has come to begin incorporating generative AI in law teaching. Part I suggests that generative AI is already being adopted widely in the legal profession and transformative change is likely on the horizon. Most of the surveyed law students and faculty highlighted the need to develop an AI-integrated curriculum, although with important caveats and limitations. The educational literature cited in Part IV paints a highly favorable portrait of teaching with ChatGPT and describes AI-integrated exercises that enhance traditional learning goals and develop technological competencies.

This Part includes nine key considerations and counterarguments that are decision-relevant for legal educators across teaching contexts. Legal educators may hold diverse views about these considerations, and some may feel that ad­ditional relevant considerations have been omitted. Moreover, analysis of these considerations should evolve as the technology advances in unpredictable ways. Thus, this Part may be best understood as a starting place for discussions of whether to incorporate generative AI for specific educational uses. These con­siderations may also serve as a helpful reference for instructors explaining AI policies to students.

A. Emerging Technological Competencies of Practice-Ready Graduates

As reviewed in Parts I through IV, one of the leading arguments for incor­porating generative AI in legal education is that lawyers are adopting this tech­nology in practice. LexisNexis has already released a system that uses large language models (including GPT-4) to assist in legal research and automate the drafting of legal documents. Lawyers are eagerly subscribing to new genera­tive AI applications, with some start-ups claiming to have thousands of law firms on their waitlists. As these legal AI applications improve, they may soon lead to a world where using generative AI becomes a standard part of legal re­search and writing tasks. The surveyed faculty estimated an average seventy-one percent likelihood that the next decade will see generative AI assisting in most of what lawyers do. Some surveyed faculty emphasized that teaching students to use generative AI is now crucial to the goal of preparing practice-ready law graduates.

This notion that legal educators should teach AI literacy as a core profes­sional competency may align with the trend toward experiential learning and calls for a more practical legal education. Some legal employers are already beginning to expect that students know how to use generative AI tools. Senior attorneys may look to their young associates in hopes that they, owing to their youth, have special insight into emerging technology. From the student perspec­tive, learning to use AI tools may enhance job security and competitiveness in a changing profession. As one surveyed student noted, “[I]t’s not that AI will be replacing lawyers, but it’s that lawyers who use AI will replace lawyers who don't.” Faculty respondents offered nearly the exact same line, which is becom­ing a trope throughout the legal profession. An AI-integrated curriculum might help produce lawyers who are less likely to be replaced. This notion that teaching with generative AI can help to future-proof students is being raised across educational contexts.

Preparing practice-ready graduates requires not only instructing students in the proficient use of AI tools but also instilling in them a commitment to use these tools responsibly. Already, there have been several instances of lawyers submitting AI-generated legal documents that contained hallucinated case citations. Instructors can help students identify ethical pitfalls in using generative AI before students encounter the pressures of legal practice. Students can be taught to evaluate AI outputs for the accuracy and quality of information, a crucial skill as generative AI is incorporated in the legal research platforms that lawyers use on a regular basis. Students can also learn how AI tools enhance efficiency and reduce the tedious aspects of legal practice. This could lead to more effective lawyering and possibly enhanced well-being in a profession afflicted with mental health struggles.

Counterarguments. Some of the surveyed faculty were skeptical that generative AI will soon have a transformative impact on the legal profession, and thus disagreed that teaching with AI is an important way to prepare students for their careers. Although most lawyers say they plan to use generative AI in their practice over the next year, a recent survey found that many have not yet done so. Perhaps we should wait and see how the technology develops, particularly as we are only beginning to encounter the new wave of text-generating AI systems. We may be caught up in a moment of hype. As one student put it in an emotional response to the release of ChatGPT, the technology is “half exciting, half terrifying!” It may be prudent to prepare students for the legal profession of today, rather than trying (and perhaps failing) to anticipate where the profession may be headed tomorrow.

Other surveyed faculty who believe that transformative AI is coming to the legal profession nevertheless disagreed with the notion that AI-integrated teach­ing is the best way to prepare practice-ready graduates. Some suggested that legal education should instead shift to an emphasis on the more “human-centric” aspects of legal practice—those aspects that AI is less likely to replace. These respondents framed this shift as an emphasis on “soft skills,” including negoti­ation, strategy, ethics, judgment, client collaboration, leadership, interaction, cultural competence, problem solving, and dispute resolution. Thus, legal edu­cators should help students “move up the skills ladder,” focusing on higher-level strategic and interpersonal aspects of legal practice, while generative AI increas­ingly assists in the technical aspects of legal research and writing. As one faculty respondent noted, limiting the focus of legal education to the efficient production of AI-assisted legal documents may convey the notion that lawyers are “memo-producers and issue spotters,” rather than protectors of clients, the legal system, and the public interest. As Professor Milan Markovic has observed: “We can’t act like robots and not expect to be replaced by robots.”

B. Inviting Students into Discussions About the Future of the Profession

Students are the future of the legal profession and should be invited into discussions about the emerging integration of machine and human intelligence in the practice of law. As the technologists in Part I suggest, we can expect con­tinued progress in AI’s legal capabilities. At a minimum, the likely near future includes powerful legal AI assistants, which are already being deployed. The upper bound of possible futures includes robot lawyers who perform most legal tasks at or above the proficiency of human lawyers. As discussed in Part II, students are already asking what it means to be a lawyer when core legal skills may be on the verge of AI replacement. Many students feel disoriented by the past year, when AI went from scoring below the first percentile on the bar exam to passing it comfortably.

Students should be prepared for their role in shaping how generative AI is used in the legal profession. This includes questions of ethics, the possibility of automated legal services, threats to the rule of law, and the changing nature of the lawyer role. Some students may embrace the notion of “liberating profes­sional expertise” (to quote Susskind) through automated legal agents, although this could lead to a sense of self-contradiction. Students’ support for the devel­opment of powerful legal AI tools might be driven by public-interest ideals, but they may also perceive these tools as a threat to their livelihoods and their self-conceptions as lawyers. Professionals in other fields have already experi­enced significant disruption from generative AI (e.g., artists, journalists, and programmers) and have reported feeling dejected and somewhat mystified when AI accomplishes much of what they understand to be their craft. Students are hungry for guidance and informed conversations about these issues.

Counterarguments. It may be prudent to wait until there is more certainty that generative AI will play a major role in legal practice before engaging students in conversations about future worlds of transformative AI. As discussed in Part I, some legal scholars are skeptical that AI will be capable of taking over most of what lawyers currently do. It is also possible that the use of powerful legal AI will be constrained by bar regulators. Moreover, many students are focused on more immediate objectives, such as seeking employment and passing the bar exam. These students might prefer to limit discussions about the future in the legal curriculum.

C. Legal Learning

A key finding from the pedagogical experiment described in Part II is that AI-integrated teaching can support traditional legal learning goals. For example, students tasked with using generative AI in their writing can be encouraged to converse with the AI rather than relying on the AI’s first response to a prompt. This iterative process requires students to carefully reformulate their prompts, ask follow-up questions, and identify the often subtle errors and omissions in seemingly polished AI responses. Some of the surveyed students noted that this process required greater attention to detail, knowledge, and discernment when compared to their approach to traditional law school quizzes and assignments.

The new educational literature on teaching with ChatGPT has emphasized similar learning benefits, as reviewed in Part IV. By iterating with generative AI, students can receive personalized feedback through an active learning pro­cess. This contrasts with law school classrooms, where each student generally has only occasional opportunities to engage in dialogue with the instructor. Students can ask AI applications to provide hypotheticals that help “transfer” their knowledge to new contexts, a crucial step in the process of understanding difficult concepts. By permitting students to use these tools, educators may be able to elevate their learning goals with a greater focus on critical thinking and more advanced legal concepts. Like a calculator in a mathematics class, generative AI can help with some technical aspects of a student’s legal work so that the class can move on to deeper understandings and real-world applications.

Counterarguments. A few of the surveyed students and many of the sur­veyed faculty expressed concerns about how AI-integrated teaching might in­hibit traditional learning goals, such as lessons in legal reasoning, language, writing, and advocacy. Current generative AI applications might not (yet) be good enough to provide law students with high-quality tutoring and sufficiently accurate legal information and analysis. Or the AI might be too good. Students might use generative AI as a shortcut to complete assignments rather than learn­ing the material themselves. For example, a student could use AI to distill key points from a difficult judicial opinion in the casebook, rather than reading the case. This could undermine the learning benefits that come from struggling through the complexity of legal texts—which reflects the complexity of legal disputes. Another example would be students relying on AI-generated writing uncritically without putting in their own work to challenge, validate, and revise (or reject) the AI output. If AI serves as a “crutch,” to cite the term used by surveyed students and faculty, students might be left unprepared for the bar exam and for competent legal practice.

D. Equality Among Students

To quote a surveyed law teacher, incorporating generative AI in the legal curriculum may help to “level the playing field” among students. If law schools do not offer instruction in the use of emerging legal AI tools, some students might be left behind. A subset of students who are tech-oriented and have available time and resources will gain a head start in using legal AI—before they meet with potential employers and start their legal careers. This inequality might be mitigated if law schools ensure that all students have a basic familiarity with the leading AI tools.

The use of generative AI in legal pedagogy could also help address other issues of equality in the learning process. The educational literature on teaching with ChatGPT suggests that the personalized learning experience of interacting with conversational AI can help students who learn in different styles and at a different pace than the majority of their peers—and in a way that differs from the instructor’s approach. Moreover, recall that the research on student use of GPT-4 in law school exams found that students who are generally the lowest academic performers experience the greatest benefit from using AI. These students rose, on average, forty-five percentile points when they used GPT-4. The authors of the study suggest that generative AI may have an “equalizing effect on the legal profession,” with a more uniform distribution of legal profi­ciency. It is not clear what implications this finding has for learning. It is pos­sible that collaborating with generative AI may have revealed latent potential in otherwise low-performing students, who might have a deep understanding of legal concepts yet struggle to adapt to the format of legal instruction and assess­ment.

Counterarguments. While an AI-integrated curriculum would ensure that all students have some exposure to emerging AI, it could conceivably widen inequalities by prioritizing technological abilities that favor some students over others. If students are permitted to use generative AI in graded assessments, the students who are most proficient with AI applications may gain a grading ad­vantage. Considering that the use of generative AI is emerging as a valued pro­fessional skill, perhaps it is fair for this professional competency to factor into grading criteria. However, it is also plausible that technological abilities would play an outsize role in academic performance, leading to assessments that fail to accurately measure students’ legal knowledge and capabilities.

These equity concerns may be exacerbated in the context of rapid change and innovation in the field of legal AI. Instructors might struggle to keep up with the AI applications that are available to students, making it difficult to ensure that students have access to the same technology. An instructor could conduct a thorough review of the current legal AI offerings, although this task took the author of this Article, along with a dedicated research assistant, several months of attending trainings and demonstrations and experimenting with various applications. This scan of current technology may need to be updated over the course of an academic term as legal AI applications continue to be released and improved. Thus, it is difficult for an instructor to establish a consistent policy about the use of specific applications at the beginning of the term. For example, at the time this Article was written, just before the Spring 2024 semester, it was not clear when Lexis+ AI and Microsoft 365 Copilot would offer a wide release of the powerful new generative AI tools that they had developed. Should these tools become available midterm, disparities may arise if some students gain access earlier than others due to prohibitive cost or differing levels of engagement with emerging technology. In light of these uncertainties about student access to the technology, it may be wise to prohibit the use of generative AI in certain aspects of the legal curriculum and assessments.

E. Academic Integrity

Another argument for permitting students to use generative AI is that, in some contexts, perhaps no feasible alternative exists. Students have physical access to generative AI tools in many traditional law school assessments, such as take-home writing assignments and open-internet exams. Even if educators attempt to ban the use of emerging AI tools, students may continue to submit AI-generated work in violation of course policies.

Students have always had means to cheat in law school, including talking or messaging with classmates, paying someone to do the work, copying and pasting materials without citation, sharing exam questions with students who have not yet taken the exam, or consulting with other acquaintances who have legal expertise (e.g., lawyers in the family). However, such methods of cheat­ing necessitate the involvement of co-conspirators, a requirement that not only introduces significant difficulties and risks but may also provoke particularly intense ethical qualms for the student engaging in the cheating.

In contrast, cheating with AI is as easy as opening an application that many students use on a daily basis. This does not require conspiring with another person, who might decline the request and shame or even report the attempted cheater. The temptation to use generative AI, even when prohibited, may grow as students become increasingly accustomed to using this technology in internet searches (e.g., Bing powered by GPT-4), legal research (e.g., Lexis+ AI powered by GPT-4 and other large language models), and word processing (e.g., Microsoft 365 Copilot powered by GPT-4). The ubiquity and normalcy of using these tools may diminish the student’s ethical imperative to comply with a course policy prohibiting the use of generative AI. Moreover, cheating with AI carries minimal risk of being caught, as AI-detection software is far from perfect, producing significant rates of false positives and false negatives. Even if AI-generated text is “watermarked,” students can evade detection by using “rewriting” software, which replaces some text with synonyms. Or students can use AI outputs as a reference without verbatim copying and pasting.

If many students use generative AI in spite of prohibitions, a blanket ban could prove ineffective—essentially rewarding those students who ignore the ban. To uphold academic integrity, it may be more practical to allow all students to use generative AI. Under this permissive approach, instructors may wish to provide students with instructions on the use of generative AI so that they are on a more equal footing with the technology. As discussed further in Part VI, a permissive policy can be accompanied by explicit guidelines about how students should cite their use of AI applications.

Counterarguments. If one considers AI-facilitated cheating a grave threat to academic integrity, one might favor assessments where students cannot phys­ically access generative AI applications. This can be accomplished through in-person, closed-network exams. This approach reinforces the notion that in­tegrity matters and that dishonesty will not be tolerated in legal education, just as it is not tolerated in the profession.

Interactive assessments, such as debates and presentations, also limit the direct application of AI-generated content. Although students may use AI to assist in their preparation, they must still demonstrate their own knowledge and abilities in responding to questions and opposing arguments.

F. Student Demand for AI-Integrated Instruction

The survey presented in Part II indicates strong student interest in learning about emerging legal AI applications. The majority (80%) of surveyed students agreed that “all law students should be aware of and learn to use” generative AI applications. As this technology becomes more deeply embedded in the legal research and writing workflow, the student demand for AI-integrated instruction is likely to grow. For example, if students become accustomed to prompting Lexis+ AI and Microsoft Word to generate drafts—and if these students know that lawyers are using this technology—instructors who ban these applications may seem out of touch with current practices. Already, educational literature suggests that high school and undergraduate students are using generative AI in their class preparation, notetaking, and essay writing. This trend of “self-initiated adoption” among students, increasingly supported by instructors, may lead to a new generation of law students who expect to use these tools in their academic and professional lives.

If students feel that their instructors are failing to meet demand for AI-integrated curriculum, this could add a new layer to the long-running criticism of U.S. legal education for its outdated and untested curriculum, grading system, and pedagogy, as well as its globally unprecedented tuition rates. Moreover, free and low-cost Socratic chatbot tutors are being developed in partnership with leading AI laboratories. As already noted, these tools have received significant support in the educational literature for providing personalized instruction and adapting to individual learning needs. If generative AI is already, or will soon be, effective at teaching legal doctrine and skills, it may be reasonable for students to expect these technologies to be integrated in their legal education.

Counterarguments. The enthusiasm for AI-integrated legal education was not universal among the surveyed students, with some reporting that the use of generative AI was distracting and unhelpful. Moreover, the scope of the student survey was confined to a select group of students from a particular law course, which limits the generalizability of the findings. Further empirical research is needed to accurately gauge the extent of student demand.

Even if it can be shown that most students desire AI-integrated learning, this demand does not necessarily resolve the question of whether legal educators should bring generative AI significantly into their teaching. Although educa­tional theory has emphasized collaboration with students, who are cast as “part­ners” or “clients” of instructors, this literature does not recommend that instructors acquiesce to every student demand, “succumbing to all of [students’] wishes and wants.” Surrendering too many pedagogical decisions to students can risk reducing education to a “consumer model,” undermining the notion that educators impart new information and broaden perspectives. Teachers may have greater insights into educational goals and methods relative to students, who are encountering the course content for the first time and generally lack pedagogical expertise. An educator who, upon consideration of all relevant fac­tors, decides not to incorporate generative AI may be justified in overriding stu­dent demand.

G. Engaging Students in the Law and Regulation of AI

Many of the technologists, students, scholars, and teachers cited in this Ar­ticle suggest that we are entering a transformative age of AI innovation. Some argue that generative AI will have greater implications for society than the ad­vent of the internet or perhaps any other technological development in human history. Even the notion that AI might surpass human intelligence or pose a threat of human extinction has moved from science fiction to mainstream polit­ical discourse. These concerns for the future are inherently speculative, but generative AI also raises immediate regulatory challenges, including concerns about privacy, intellectual property, bias and discrimination, misinformation (including legal misinformation), the integrity of elections and other political processes, and safety considerations relating to use in weaponry and other haz­ardous applications. If generative AI is as important and potentially perilous as many believe it to be, perhaps law schools have an obligation—under the longstanding commitment to “train cross-disciplinary societal problem solv­ers”—to engage students with cutting-edge AI so that these future lawyers can play important roles in shaping the regulation of this powerful and ever-evolving technology.

Counterarguments. Some of the surveyed faculty expressed doubts regard­ing the assertion that we are living in a time of transformative AI development. Consequently, they argued that AI should have little influence on the nature of legal education. In contrast, others acknowledged the advent of transformative AI but suggested that there will only be a small subset of self-selecting law stu­dents whose careers will focus on AI law and policy. This subset might not re­quire AI-integration across law courses in order to pursue impactful careers dedicated to AI regulation. Moreover, reorienting the curriculum to support this small subset might fail to serve the majority of students in the pursuit of their diverse interests.

H. The Feasibility of Teaching with Generative AI

Most of the surveyed faculty concluded that legal education should start incorporating generative AI, although they acknowledged the need for training to familiarize themselves with the technology and its educational applications. The task of teaching with generative AI might be less daunting than initially perceived. A number of academic fields have already demonstrated the feasibil­ity of teaching with AI. One of the striking features of current generative AI applications is their accessibility. Rather than command lines, programming lan­guages, and complex graphical user interfaces, generative AI operates through natural language conversations with the user. The students in the AI-integrated course described in Part II seemed to require minimal instruction to begin using these tools effectively. Similarly, faculty can gain proficiency in these tools, and brainstorm educational uses, through first-hand experimentation.

Counterarguments. Despite the relative ease of using current AI applica­tions, some legal educators may be hesitant to adopt this technology in their teaching, owing to tech aversion or specific reservations about generative AI. Many of the surveyed faculty emphasized that they had only just begun thinking about this technology and were not familiar with the tools available. Some sug­gested that it would be better for students to explore AI on their own, since stu­dents are likely to be more adept with emerging technology relative to their instructors. It may be unrealistic to expect faculty to keep up with their students on the latest AI advances. If instructors tend to lag behind students, it may be difficult for them to teach about the technology and to assess student use of AI in graded assignments.

Even if legal educators are willing to learn about generative AI, some may be unwilling to change their course plans, which have solidified over years of careful consideration and refinement. Legal education is notoriously slow to change and steeped in pedagogical tradition—although recent decades have shown movement toward more inclusive, formative, experiential, and tech-integrated teaching methods.

I. Secondary Benefits for Effective Pedagogy

Incorporating generative AI in their teaching might lead instructors to find other beneficial uses of the technology across different aspects of their profes­sional duties. As the surveyed faculty emphasized, generative AI can reduce an instructor’s “rote activities,” such as creating slides, writing teaching notes, and formulating exam questions. It has the potential to assist with administrative duties, from composing recommendation letters to crafting remarks for events and organizing notes for committee meetings. The adoption of AI tools for some teaching purposes may inspire other ideas for the use of educational tech­nology, such as AI-based feedback mechanisms, interactive educational games, debate exercises, and simulated dialogues with virtual characters.

Counterarguments. Some of the surveyed faculty suggested that AI appli­cations are not yet capable of aiding law teachers effectively. Current generative AI applications may provide flawed teaching materials and substandard feed­back on student work. Moreover, perhaps the question of whether to bring AI into coursework should be kept separate from the question of how AI might be useful to teachers in other respects. Law schools can, for instance, provide reg­ular training to update instructors on new educational tools, as well as the use of generative AI for some administrative work, without necessarily integrating AI throughout the legal curriculum.

VI. Implementation: Designing an AI-Integrated Legal Curriculum

This Part discusses the integration of generative AI in specific law school courses, exercises, and assignments. If one concludes that the factors presented in Part V make an especially strong case for AI-integrated legal education, one might favor an “immersion” approach, in which students use AI tools through­out the curriculum. Some legal scholars have long called for a “coupling” of legal learning and emerging technology. But Part V also raised significant downsides associated with bringing AI into legal education, including concerns about learning, equity, and feasibility, as well as a general sense that we should wait and see how the technology develops. These considerations might support AI integration in some but not all law courses. Section A takes up this question of where in the legal curriculum generative AI might be most effectively inte­grated. Section B then discusses specific assignments and course policies.

A. Where Should Generative AI Be Incorporated in the Legal Curriculum?

1. First-Year Doctrinal Courses

The surveyed faculty were split on whether the use of generative AI is ap­propriate in the first-year doctrinal curriculum. Some argued for a traditional approach, forbidding AI assistance as students learn their foundational lessons in “thinking like a lawyer.” As one faculty respondent put it, first-year students must acquire the “critical thinking skills necessary for crafting well-structured arguments and analyses,” which would be undermined if students were to rely on AI systems to summarize readings, draft papers, and write exams.

Moreover, some faculty suggested that first-year doctrinal courses should be taught without AI because these courses cover material tested on the bar exam, which is unlikely to allow AI access for the foreseeable future. Others emphasized academic integrity concerns, which might be heightened in the first year when grades are especially determinative of students’ job prospects, schol­arships, and other opportunities. As already noted, research suggests that some students, especially those who would otherwise score near the bottom of the class, gain an enormous grading advantage when using generative AI on ex­ams. On these grounds, one suggestion that emerged from the faculty survey was to save AI-integrated teaching for upper-division courses, where students would supplement the legal skillsets acquired during a more traditional 1L year.

Yet, other faculty respondents favored bringing generative AI into the first year because this would signal to students the importance of this technology for legal practice. Moreover, teaching with generative AI can support first-year learning goals, as reflected in the Property course described in Part II and the broader educational literature on teaching with ChatGPT described in Part IV.

2. Legal Research and Writing

Some surveyed faculty suggested that generative AI should play a significant role in first-year legal research and writing (LRW) courses. Large language models are already being integrated in legal research platforms, such as LexisNexis, and word-processing software, such as Microsoft Word. As lawyers gain access to these tools, the standard process of legal research may increasingly shift to a question-and-answer format—where the user asks the AI application (e.g., Lexis+ AI) to answer a question or produce a legal document. Lawyers using this technology may tend to start their writing process with an AI-produced draft rather than a blank page. For example, Microsoft Word with Copilot can produce a draft memo based on prompts and background research materials. These applications can also be helpful in the revision process, suggesting edits and providing alternate phrasings. Given these implications for research and writing, perhaps LRW is an appropriate place in the legal curriculum for students to learn about this technology.

In an LRW course, generative AI could be utilized throughout the writing process, from research to outlining, drafting, and revising, as well as in prepa­ration for oral arguments. Students could receive some training in the effective use of legal AI tools, including the framing of prompts and the importance of thoroughly validating and improving upon AI outputs. To reduce overreliance on AI, students could begin each step of the process on their own before turning to the AI for additional insights. Alternatively, the instructor could assess not only the final product, such as a legal memo, but also the students’ process of interacting with and responding to generative AI. Students can be required to “show their work,” providing the instructor with a record of prompts and a re­flection on how they incorporated AI-generated materials in their final product.

LRW instructors could also advise students to seek writing feedback from a generative AI application. While the expert comments and guidance from instructors may be irreplaceable, instructors have limited time. Generative AI can serve as a useful supplement owing to its indefatigable ability to provide immediate responses to student questions. In the terminology of educational scholarship, AI can play a crucial role as an educational “interlocutor,” which in the writing process could help students overcome writer’s block, develop ideas through conversation, and consider different ways of framing and structuring arguments.

Even if an instructor is not convinced of these learning benefits, banning generative AI might prove infeasible in the context of LRW. Unless these courses shift to in-person writing assignments (i.e., exams), it may be difficult to prevent students from using AI applications. As discussed in Part V.E above, take-home assignments present students with a great temptation to use this tech­nology, which is nearly ubiquitous and difficult to detect. If instructors cannot effectively prevent the use of AI in critical LRW assignments, perhaps they should permit it as a matter of policy—to avoid rewarding AI-facilitated cheat­ers. And if instructors permit this technology, perhaps they should train students in the use of emerging AI to ensure that all students have at least a basic com­petency.

3. Courses on Law and AI

It may be unrealistic to expect a great number of law teachers to incorporate an ever-evolving field of legal AI tools within their often firmly established teaching plans. The educational literature recommends providing teacher train­ings on generative AI, which could help foster a culture of pedagogical innova­tion among faculty. These efforts are important but may yield limited returns when faculty lack technological proficiency, are reluctant to alter their teaching methods, or have reservations about the role of AI in legal education.

A more practical approach may be for a small, specialized group of instruc­tors to teach courses focused on emerging AI. The surveyed faculty overwhelm­ingly supported the notion that law schools should offer such courses. A growing number of law teachers across the country (including the author of this Article) are beginning to teach seminars on AI in the legal profession, as well as curriculum for practicing lawyers, and are building a cohort of faculty to share advice and information for teaching in this area. These efforts are finding sup­port among law schools developing new initiatives, centers, and programs on artificial intelligence. If the ability to use generative AI continues to grow into an important legal competency, law schools and even the ABA might consider mandating that all students complete a certain number of credit hours with a focus on legal AI applications.

AI-focused courses offer an opportunity for students to gain familiarity and proficiency with leading legal AI applications. For instance, students who learn effective prompt engineering techniques may be able to enhance the efficiency and quality of their legal work. Researchers have shown that, when prompted well, GPT-4 could achieve grades as high as an A- or an A on law exams. To further explore the ceiling of AI’s legal capabilities, students in an advanced AI seminar can even develop their own AI applications for specific legal use cases through highly accessible APIs and customization platforms. To take an example at the time of this writing, OpenAI’s custom GPTs can be created without any coding.
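For students or instructors comfortable with a small amount of code, the API route mentioned above can be made concrete. The following is a minimal sketch of a seminar-style legal assistant, assuming access to OpenAI’s Python SDK; the model name, system prompt, and function name are illustrative choices for this Article, not features of any particular course or product.

```python
# Minimal sketch of a custom legal AI application of the kind an advanced
# seminar might build. Assumes OpenAI's Python SDK (pip install openai);
# the model name and system prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a legal research assistant for a law school seminar. "
    "Identify the governing rule, apply it to the facts provided, and "
    "flag any point on which you are uncertain rather than guessing. "
    "Do not invent case citations."
)

def ask_legal_assistant(question: str) -> str:
    """Send a student's question to the model and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; substitute whatever model is available
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # lower temperature for more conservative answers
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_legal_assistant(
        "What are the common-law elements of a private nuisance claim?"
    ))
```

Even a toy exercise of this kind can illustrate for students how much of a legal AI application’s behavior is determined by its system prompt rather than the underlying model.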

These courses can also teach students how legal AI tools operate, which may enhance their understanding and appreciation of the technology’s strengths and weaknesses. The students in Part II, who took a (lightly) AI-integrated course, were able to disabuse themselves of early misunderstandings, including the impression that ChatGPT was a near-omniscient legal mind. Teaching students about the inner workings of emerging AI has important implications for legal ethics, including concerns about competency (e.g., awareness of hallucinated materials) and confidentiality (e.g., inputting client information).

Equipped with a deeper understanding of the technology, students will be better prepared for discussions about how generative AI can and should be used in legal practice, and how this technology may affect the future of the legal pro­fession. These courses can provide a venue for informed discussions on these topics. The instructor may wish to contextualize this material with coverage of related issues in AI law and policy, including intellectual property, emerging regulation, discrimination, and use in the judiciary, criminal law, and other as­pects of the legal system.

4. Clinics and Experiential Programs

Generative AI could play a significant role in law school clinics and expe­riential programs, where students can gain firsthand experience with the efficacy and ethics of these tools as applied in practice. Students could use generative AI for a wide array of clinical tasks, including legal research, drafting pleadings, motions, and other documents, communicating with clients, and planning for meetings, depositions, and court proceedings. Instructors can create AI-based simulation exercises to prepare students for client interactions. For example, the AI can play the role of a client in a typical clinic practice scenario, and the stu­dent can rehearse asking questions and providing advice and information. Stu­dents could also use generative AI during client meetings, where the AI’s efficiency can help students respond in a timely fashion while taking the time to carefully validate AI outputs. Clients could also be encouraged to create their own AI-generated documents, which could be refined and improved by clinic students.

Clinics may also be well positioned to help develop legal AI in a way that enhances access to justice. To date, most legal AI is aimed at elite law firms. If the market is not producing applications that explore AI’s potential to make legal services more accessible, law school clinics can help to address this gap. For example, students in Suffolk’s Legal Innovation & Technology Lab have already been instrumental in developing technology for nonprofit organizations and other Suffolk clinics, including tools that assist in pro se representation and eviction defense. Clinics can help ensure that these tools align with the stand­ards of legal ethics. The most prominent automated legal aid platform currently available, DoNotPay, has been the subject of ethical controversy, highlighting the need for cautious implementation. Clinics can provide a public-interest voice in the development of legal AI and in weighing its benefits and risks. They can also advocate with bar regulators for the appropriate regulatory innovations that may be needed to facilitate (and perhaps limit in some respects) the rise of automated legal aid applications.

B. How Should Generative AI Be Incorporated in Legal Education?

1. Assignments and Exercises

Some legal educators have taken an early step toward AI-integrated teach­ing by having students evaluate AI-generated legal writing. Typically, the in­structor would bring to class a ChatGPT-generated response to a legal question relevant to the course content. Students would then be tasked with identifying errors, omissions, and other weaknesses in the AI’s writing. As a variation on this exercise, the AI output can be produced as a live demonstration. For exam­ple, an instructor at the University of Wisconsin Law School led a class where ChatGPT was first asked to list the elements of a claim for age discrimination under Wisconsin law, then cite leading cases and suggest defenses, and then draft a complaint to the Wisconsin Equal Rights Division. At each step of the process, students assessed and discussed the AI outputs. For a variation of this exercise, students can be asked to compare their own writing to an AI-generated response to the same question. Or students can be asked to rank different AI responses to a question, for example, from most incorrect to most correct.

Educational scholars emphasize that having students assess AI-generated writing can offer significant learning benefits. This literature recommends that teachers introduce these exercises with transparent communication about learning objectives, followed by student feedback and reflection on the use of generative AI and its limitations. Students can also be tasked with taking the AI-generated writing as a first draft and then making it their own through revi­sion. This workflow reflects what may soon become a typical use of generative AI in legal practice, where lawyers may begin by asking AI to produce a draft and then working to validate and refine it.

An instructor can also bring generative AI into the classroom through law­yer-client simulations, as discussed in Part II, where the AI played a client and the class collectively played the lawyer inquiring about the factual basis of the client’s nuisance claim. The class can then ask the AI to produce variations on this fact pattern or suggest a similar hypothetical in a very different factual con­text, such as moving from rural neighbors with a bright security light to apart­ment neighbors with loud music. By producing immediate and detailed changes to the fact pattern, the AI can facilitate the key educational goal of training “transfer,” that is, it can help students apply knowledge to analogous contexts to better understand the underlying concepts.

An alternative to this simulation exercise would be to have the students prompt the AI to write a story, perhaps in the style of an essay exam question. The students can volunteer different facts to be used in the story, while the in­structor inputs the prompts in full view of the class. The facts should reflect key points of legal doctrine. For example, under nuisance law, this would include the extent and character of the harm, the social value of the uses, and the cost and fairness to either party of lessening the harm. The class can then analyze the story and ask the AI to regenerate it using modified or additional facts.

For a more creative version of this exercise, the AI can be tasked with pro­ducing a poem, song, image, or video based on the facts suggested by students. These artistic outputs may open up different questions or different ways of thinking about, synthesizing, and remembering the doctrine under study. The interactive nature of these exercises can be highly engaging, bringing the class together to collaborate in the creation of prompts and then reacting collectively to the AI’s often humorous and unexpected artistic renditions. To maximize learning benefits, instructors should actively guide discussion around core doc­trinal lessons and other learning goals, avoiding the potential for entertaining but off-topic digressions.

The exercises proposed thus far, where the instructor controls the AI inter­action, offer considerable advantages. In their preparation for class, instructors can experiment with different prompts to understand how the exercise might best fulfill learning objectives and avoid tangential or low-quality AI outputs. They can design initial prompts to be inputted before the class exercise begins, which direct the AI to keep a focus on learning objectives and avoid inappropri­ate language. This instructor-led approach also minimizes any burden on stu­dents, eliminating the need for them to learn new software. For instructors who favor a minimal integration of generative AI, these exercises can offer an occa­sional and brief departure from usual teaching methods.

But the greatest value from an AI-integrated curriculum may come from putting emerging legal AI tools directly in the hands of students. One of the central findings of the pedagogical experiment discussed in Part II was that first-hand experience is crucial for helping students understand the practical value of this technology and correcting their misunderstandings about AI’s legal capa­bilities. As students employed generative AI in their class assignments, they de­scribed moving from initial awe to disappointment, followed by a more pragmatic appreciation for how these tools can assist in their work.

Instructors who mandate the use of generative AI may wish to provide a brief training on legal AI tools, particularly if students have not previously re­ceived such instruction. This training could be conducted by the instructor or an outside speaker, supplemented by resources for self-study. Some educators have provided students with guidelines to enhance the effective use of AI tools. An example is a “ChatGPT learning code,” which encourages students to hy­pothesize about how AI might respond to a prompt, reframe prompts, and then validate outputs.

For writing assignments, instructors can direct students to use generative AI at distinct stages in the writing process, including outlining, drafting, and revision. At any of these stages, students should be encouraged to iterate with the AI application rather than accepting its first response. Instructors can ensure that students engage in meaningful dialogue with the AI by requiring students to summarize this dialogue or even submit their full record of prompts and AI responses.

One approach to writing assignments would be to have students create their own outlines, to ensure that they think through legal issues on their own before being assisted by AI. Students could then turn to the AI application for sugges­tions for improving the outline, including help with brainstorming arguments and counterarguments. The students could then ask the generative AI to create a draft based on the outline.

Generative AI can also be a powerful aid in the revision process. Students can request feedback on their writing, asking the AI to identify issues with grammar, structure, and argumentation. Perhaps the greatest benefit to student writing, based on the capabilities of current generative AI, comes from the AI’s capacity for instant brainstorming and proposing alternate phrasings. Students can prompt the AI to refine rough drafts of sentences, paragraphs, or even full papers. The students can ask for several options, selecting the most suitable AI-generated text or creating their own variant based on the AI’s suggestions.

Or this feedback process could be reversed. Some educators have described an exercise where the student plays the role of the teacher providing feedback to the AI (playing the role of a student) on the AI’s writing. This approach aligns with research indicating that teaching others is an effective way to solid­ify one’s own understanding. For example, an instructor might give students a standard initial prompt to submit to the AI application such as the following: “Write an essay for me about [class concept].” This prompt could be accompa­nied by guidelines about the style and structure of the essay. Students, in their role as the AI’s teacher, would then offer the AI detailed feedback and suggested revisions. For assessment of the exercise, students would submit to the instruc­tor their prompts, drafts, feedback, and final AI-generated essay.

These exercises that put AI tools in the hands of students may help foster more personalized learning. Educational research on ChatGPT recommends that students engage with generative AI as an “interlocutor” in their studies, for ex­ample as a debate partner or tutor who provides feedback and illustrations of concepts. If students are able to use generative AI effectively in their preparation for class, this might aid in fostering a “flipped classroom” dynamic, where stu­dents come to class with a firm understanding of the basic course materials so that class time can be spent on more advanced topics. Generative AI can help overcome a key obstacle to flipped classrooms, which is that pre-class learning is traditionally difficult for students to accomplish on their own.

In the near future, a promising avenue for AI-integrated legal education may involve “adaptive learning systems,” which adjust to an individual student’s learning pace and style. For example, Khan Academy has taken the lead in developing these systems, including a chatbot tutor (“Khanmigo”) that covers a wide range of subjects, although not (yet) typical law school subjects. OpenAI also provides instructions for users to create their own tailored AI tutors powered by GPT-4. These instructions advise the AI tutor to ask students what they want to learn, provide “explanations, examples, analogies” and other responses tailored to the student’s “learning level and prior knowledge,” and to take a Socratic questioning approach to “help students generate their own answers.” Law students can further adjust how their AI tutor functions, for example by using OpenAI’s customizable GPTs.
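As a rough illustration of how simple such a tutor can be to assemble, consider the following minimal sketch, assuming OpenAI’s Python SDK; the tutor instructions merely paraphrase the advice quoted above and are this Article’s illustration, not OpenAI’s own prompt.

```python
# Minimal sketch of a self-built Socratic law tutor. Assumes OpenAI's
# Python SDK; the instructions paraphrase the tutoring advice described
# in the text and are illustrative.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "You are a Socratic tutor for a law student. First ask what topic the "
    "student wants to study and what they already know. Tailor explanations, "
    "examples, and analogies to the student's stated learning level. Prefer "
    "guiding questions that help the student generate their own answers "
    "over simply stating conclusions."
)

def run_tutor() -> None:
    """Run a simple multi-turn tutoring session in the terminal."""
    history = [{"role": "system", "content": TUTOR_PROMPT}]
    print("Tutor session started. Type 'quit' to end.")
    while True:
        user_input = input("You: ")
        if user_input.strip().lower() == "quit":
            break
        history.append({"role": "user", "content": user_input})
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model choice
            messages=history,  # full history preserves the tutoring dialogue
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print(f"Tutor: {reply}")

if __name__ == "__main__":
    run_tutor()
```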

AI tutors may hold great educational potential, though empirical research is needed to examine their ability to provide sufficiently accurate legal infor­mation for instructional purposes. In the near future, we may see the develop­ment of commercial AI tutoring applications designed specifically for legal education. Students could be encouraged or even required to use such tools in their legal studies. This could prove a powerful supplement to traditional peda­gogy. It is also possible that the potential affordability and personalized instruc­tion offered by a legal AI tutor could disrupt and threaten the conventional model of human-led J.D. education, which is often costly and less personalized. Consequently, it may be prudent for legal educators to teach in collaboration with AI tutors, and to demonstrate that this hybrid (or human-machine-cyborg) teaching model provides the highest quality legal education.

Integrating generative AI tools in legal education, through the methods out­lined above or others, can serve as a springboard for class discussions about AI’s role in the legal profession. Students can engage in critical reflections on the technology’s capabilities and limitations, as well as ethical pitfalls. Instructors can assign readings or bring in guest speakers to discuss how lawyers are using emerging AI tools. These conversations can also provoke deeper contemplation about the future of AI, addressing the widespread student uncertainty and anxi­ety about how AI might alter or displace the work of lawyers. As discussed in Part I, legal educators should “teach to the uncertainty,” encouraging students to adopt the forward-looking but uncertain mindset of a forecaster, while avoid­ing the tendency to dismiss or overhype future AI advances.

2. Assessments

One of the most pressing issues for legal educators is whether to permit generative AI in assessments, such as exams, papers, and other assignments. Instructors can take a wide range of approaches to this issue, including efforts to find a golden mean between a fully permissive policy and a fully prohibitive policy. For example, some instructors may permit the use of generative AI under specific circumstances and for specific assignments but not others. Under this approach, the syllabus informs students when and how AI tools can be used. Instructors might take a segmented approach, authorizing the use of generative AI for only some segments of an assignment. For instance, AI might be permitted during the brainstorming process when creating an initial outline, but expressly prohibited during drafting and revision. Or the policy may permit consulting with AI but not copying text directly from an AI output into the student’s own work.

Some educators have permitted students to use generative AI on the condi­tion that it is properly acknowledged and cited. This approach roughly mirrors the policies of some judges, who require lawyers appearing in their courts to disclose the use of text-generating AI. A parallel trend can be seen with aca­demic journals that have required explicit acknowledgement of applications like ChatGPT. Class policies under this framework could ask students to share their interaction with the AI application, including their prompts, the AI’s out­puts, and how the students incorporated those outputs in their writing. For some applications (including ChatGPT), this can be accomplished very conveniently by exporting the full user-AI dialogue. Instructors can also offer guidance on the citation format, such as a requirement that citations include the name of the AI tool, the date of the AI interaction, and the wording of prompts.

Requiring students to document their use of generative AI could enhance academic integrity by providing instructors with a clear delineation of the AI’s contribution and the student’s contribution to a graded assignment. This ap­proach may help foster a more process-based approach to assessment, as the instructor would gain a window into the student’s thinking process along the way to the final product. This form of assessment may also help to ensure that students iterate with the AI, rather than relying on an initial AI output, because the students would face the accountability of submitting their AI dialogues to the instructor. Absent a requirement that students detail their AI interactions, an instructor might unwittingly assess, grade, and provide feedback on works that were entirely AI-generated with little student input.

Although requiring students to acknowledge the use of AI may hold great potential benefits, practical challenges will arise as generative AI becomes increasingly intertwined with everyday technological tools, including legal research, internet search, and word-processing applications. As this technology becomes more ubiquitous, it may be more analogous to the air that students breathe than to a single application (e.g., ChatGPT) that students might consult from time to time. Asking students to cite every step in their process that involved generative AI, as this technology permeates routine functions like typing in Microsoft Word or searching for case law in LexisNexis, might prove impractical. For example, if AI tools provide advice or automate revisions without any prompting from the student (e.g., Grammarly, which currently operates as an add-in in Microsoft Word), it may grow difficult or tedious for the student to document these interactions.

Since the release of ChatGPT, some instructors have proposed the idea of “AI proofing” their exams. One such strategy involves devising exam questions that are tailored to the specific course content presented by the instructor. For example, an exam question could ask about a particular class discussion or a distinctive theoretical framework used by the instructor, which is not in the large language model’s training data. However, this approach may inadvertently play into the strengths of generative AI. As already noted, researchers have shown that GPT-4’s highest performance on law exams (in the A- to A range) comes from “grounded prompting,” where the AI is given teaching notes as a reference when answering questions. Given the increasing context length available in consumer generative AI applications, students can input an extensive portion of class notes, including verbatim transcripts of what was said in class, or a full outline of a course. Grounded in these background materials, a generative AI application may give especially strong exam answers.
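Instructors who want to stress-test a draft exam question against this technique can approximate grounded prompting in a few lines of code. The following is a minimal sketch, assuming OpenAI’s Python SDK; the file name, prompt wording, and model choice are illustrative assumptions rather than the researchers’ actual protocol.

```python
# Minimal sketch of "grounded prompting": supplying course notes as
# reference material alongside an exam question. Assumes OpenAI's Python
# SDK; file name, prompt wording, and model choice are illustrative.
from openai import OpenAI

client = OpenAI()

def grounded_answer(exam_question: str, notes_path: str) -> str:
    """Answer an exam-style question with course notes supplied as context."""
    with open(notes_path, encoding="utf-8") as f:
        course_notes = f.read()  # e.g., a full course outline or transcript
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; longer-context models fit more notes
        messages=[
            {
                "role": "system",
                "content": "Answer the exam question using the course notes "
                           "provided, and indicate which portion of the notes "
                           "supports each point you make.",
            },
            {
                "role": "user",
                "content": f"Course notes:\n{course_notes}\n\n"
                           f"Exam question:\n{exam_question}",
            },
        ],
    )
    return response.choices[0].message.content
```

Running a draft question through a sketch like this, grounded in the instructor’s own outline, offers a quick empirical check on whether the question truly resists AI assistance.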

Another AI-proofing strategy would be to exploit current limitations of gen­erative AI, such as its potential lack of access to legal research and its struggles with complex legal reasoning. But these efforts aim at a moving target due to the rapidly evolving nature of legal AI technology. Already, some suggestions for this form of AI-proofing appear to be obsolete, such as framing exam ques­tions around recent or relatively obscure cases that are not in the AI training data. This approach may be undermined by the development of applications that provide large language models access to world-leading legal research data­bases (e.g., Casetext’s CoCounsel and Lexis+ AI).

Perhaps a more promising approach is to craft exam questions that surpass the legal reasoning capabilities of generative AI, although instructors relying on this approach should verify that, in an ever-changing field of legal technology, the leading legal AI applications cannot provide a high-quality exam answer with minimal student contribution. This verification process would be greatly aided by continued empirical research on generative AI’s performance on law exams, in the exemplary manner of the studies led by Professors Jon Choi and Daniel Schwarcz in the first year of ChatGPT.

For instructors who deeply integrate generative AI in their teaching, designing more difficult exams may serve another function in addition to AI-proofing. The educational literature reviewed in Part IV suggests that teaching with generative AI can lead to more sophisticated class discussions covering more advanced material. Recalling the analogy to the initial reception of calculators in mathematics education, law teachers might consider allowing students to use generative AI as an aid in basic reading, research, and writing tasks, thereby elevating the in-class focus to the equivalent of more advanced calculus. It remains to be seen whether these findings from educational research apply in the context of legal education. If so, law teachers can shift their grading criteria upward, with a greater focus on the factors that traditionally distinguish the top exams in a law school class, such as quality of argumentation and nuanced understanding of complex doctrines.

Instructors who permit generative AI in exams and other assessments should establish a clear AI policy. Ideally, this policy should be promulgated in the course syllabus from the beginning of the term, detailing the conditions under which AI is permitted and the manner in which the use of AI will be assessed for grading purposes. It is essential to define the contexts in which these tools can be used. Students should be reminded that they are responsible for the accuracy of their work and given guidelines on citing AI applications. Instructors may also wish to caution students against sharing identifiable information or sensitive data in their prompts. Students can be encouraged to review and understand an AI application’s terms of use, especially regarding data privacy, to prevent unintended storage or misuse of their information.

Instructors who opt to ban student use of generative AI for some or all assessments may need to adopt a traditional in-person, closed-network exam format. As discussed in Section V.E, any format that stops short of blocking internet access may present students with a great temptation to use generative AI in violation of the restrictions. Further measures, such as confiscating students’ phones, could help prevent AI access where a student might otherwise surreptitiously run an AI application on a phone or use the phone as a “hotspot” to provide internet access to a computer. An even more comprehensive safeguard would involve video proctoring.

These protective measures may prove inadequate as the technology advances and students gain new ways to access generative AI, including, for example, future developments in augmented reality embedded in smart glasses or smart contact lenses, new means of connecting student computers to the internet despite restrictions, or AI that does not rely on cloud computing. Moreover, extensive policing of student behavior during exams may promote an atmosphere of mistrust among students. Perhaps the only unassailable method of preventing AI use, without resorting to extreme surveillance, would be to give handwritten or oral exams. Most faculty are unlikely to desire a return to handwritten exams, given the illegibility of much student handwriting. Oral exams, as practiced in many European universities, give the instructor a direct opportunity to probe each student’s depth of knowledge, understanding, and ability. But instructors may consider this approach overly time-consuming and worry that it shifts focus away from legal writing, a crucial skill for legal practice and for preparation for the bar exam.

Conclusions

Generative AI has already shown impressive legal capabilities. The near consensus among the students and faculty surveyed for this Article is that law schools should begin adapting now, ensuring that students learn to use and critically evaluate the AI applications that are rapidly becoming a standard feature of legal practice. Educational scholars have identified a wide range of innovative pedagogical uses for this technology. Drawing on these perspectives, this Article has advocated for the integration of generative AI in legal education, citing factors that include preparing practice-ready graduates, improving legal pedagogy, enhancing equity, and inviting students into broader discussions about the changing legal profession that they will inhabit.

However, there is considerable room for debate about the specific courses and assignments in which generative AI is most appropriate. This Article is an early step in a conversation that is just beginning. Legal educators should continually update their views as the technology develops and as new insights emerge from the experience of teaching with generative AI. It may be prudent for administrators to permit individual faculty members to set their own course-specific AI policies. This would allow law schools to hedge somewhat on the question of AI’s proper role in the legal curriculum, as some faculty maintain traditional pedagogies while others draw more heavily on emerging technology. The resulting diversity of teaching methods could serve as a laboratory in the collective effort to identify best practices for teaching law in the age of generative AI.