IV. Educational Literature on Teaching With ChatGPT
Since ChatGPT’s November 2022 release, there has been an explosion of educational literature on the use of generative AI in teaching. Thousands of English-language articles on this topic have been published from around the globe, and systematic reviews of this literature have already appeared. Much of this work is theoretical, but numerous empirical studies of early adopters examine how students and faculty perceive AI-integrated teaching. In general, this literature paints a positive picture of ChatGPT’s potential to aid student learning, suggesting that the personalized nature of chatbot interactions can foster critical thinking, active learning, and a new era of pedagogical innovation. However, these benefits are tempered by concerns about potential overreliance on AI, misinformation, and academic integrity. This literature almost unanimously recommends that instructors work with ChatGPT, rather than envisioning a world in which emerging AI replaces human teachers.
The primary educational advantage highlighted in this literature is that ChatGPT enables a more personalized, self-paced, and self-directed educational experience with instant feedback tailored to individual learning styles.
Chatbots present a sharp contrast to traditional classroom environments, especially large classrooms, where instructors have limited capacity to provide individualized attention. For example, students using ChatGPT can ask their own follow-up questions and take a deeper dive into topics that spark their interest or clear up aspects they find confusing. When students use ChatGPT to assist in writing assignments, they can receive immediate responses to substantive questions as well as feedback on their writing.
Much of this literature explores how these self-directed learning experiences are supported by educational theory, including the “student-centered learning” model, which aims to empower students to control their own learning process. Interacting with a chatbot engages students in “active learning,” in contrast to the passivity that students experience in many classroom settings. Instructors can guide students in the effective use of ChatGPT by emphasizing the quality of their inquiries and the importance of validating AI outputs. When ChatGPT is brought into the classroom for full-class exercises, such as simulations, role-playing dialogues, and group analysis of exemplar writing, researchers note that this can entertain students in ways that spark engagement and curiosity. Some of this literature looks to a potential future of student-centered learning where generative AI tools provide tailored assignments and feedback to each student based on the student’s prior writing, learning style, and individualized learning goals. This technology could be especially beneficial for neurodiverse students and those who lack fluency in the language of instruction.
When ChatGPT is used effectively, the emerging educational studies suggest that it can enhance knowledge retention and the understanding of complex concepts. Thus, teaching with generative AI might help address the long-running concern that students too often acquire a superficial understanding of material, even as they give the impression that their understanding is more comprehensive and durable—that is, “the illusion of explanatory depth.” To attain a deeper level of comprehension, students must engage in “interleaving,” which involves revisiting previously covered material in different ways over time, and “transfer,” the application of knowledge to new contexts. ChatGPT can facilitate these educational strategies by generating a variety of examples that review past lessons and ask students to explore concepts in analogous contexts. Provided that the AI’s responses are sufficiently accurate, these exercises can help clarify students’ conceptual understandings.
Based on these optimistic findings and theoretical arguments about the use of ChatGPT in education, this literature suggests that instructors who teach with generative AI can pivot toward higher-order learning objectives. For example, instructors might spend less time teaching the mechanics of writing, such as grammar, paragraph structure, and citation, and instead concentrate on fostering critical thinking, persuasiveness, and the development of engaged, informed citizens. The ubiquitous historical analogy is to the introduction of calculators (along with analogies to the advent of word processors, internet search, Wikipedia, and other technological developments). For instance, calculators were initially prohibited by many educators but later embraced under the notion that students should focus less on arithmetic to accelerate their learning in higher-level mathematics and real-world applications in sciences and engineering. By analogy, encouraging students to use generative AI in the writing process may help them focus on more advanced materials and applications.
Many articles acknowledge the challenges in evaluating student work when students use generative AI heavily for writing and other assignments. Instructors may find it difficult to determine the extent to which a student’s work reflects the student’s understanding and ability. To address this issue in the context of writing assignments, instructors can place a greater emphasis on assessing the writing process rather than focusing exclusively on the final product. Thus, students can be asked to detail the steps they took in interaction with generative AI throughout the process of outlining and drafting, along with the final writing product. Additionally, some articles recommend a shift toward in-person assessment through oral debate, presentations, experiential tasks, and collaborative group work. These methods could enable instructors to more effectively assess students’ comprehension and their ability to transfer their knowledge to new contexts, offering a more accurate perspective on student performance than a paper produced with AI assistance.
This body of literature, while generally optimistic about the incorporation of ChatGPT in educational practices, also underscores the importance of cautious implementation. Some recent articles express concerns about inhibiting critical thinking, especially where students use ChatGPT in a way that “simplifies the acquisition of answers or information.” To address this, researchers recommend instruction in “critical AI literacy,” so students are taught to scrutinize the validity and quality of AI outputs. Some recommend limiting students’ use of generative AI to preliminary stages in assignments, such as brainstorming and exploring different viewpoints, while discouraging its use in a manner that seems to produce “generative answers.” This literature also places a great emphasis on upholding academic honesty, in light of the dramatic rise in students submitting AI-generated work when they are not permitted to do so.
Although many educators may have reservations about permitting the use of ChatGPT, banning it could prove impractical. Students are increasingly incorporating generative AI into various academic tasks, such as automatic notetaking, creating tables and charts, receiving writing feedback, brainstorming, preparing for class, and simplifying explanations. Large language models are already integrated in essential tools for student work, such as word processing software and internet search. The prevailing scholarly view suggests that this pervasive technology cannot be ignored. Rather than shunning generative AI, much of the literature advocates for a new era of pedagogical innovation with a focus on optimizing the learning environment.
These insights from educational literature might have limited application in the context of law teaching. Legal education in the U.S. is highly distinctive in its focus and methodology, aiming to instill professional training, identity formation, and a particular form of reasoning, that is, learning to think like a lawyer. In doctrinal courses, students generally read cases, attend large classes, participate in Socratic cold calling, and are assessed largely by scenario-based essay exams. In some respects, these characteristics distinguish legal education from undergraduate learning environments, which are the main focus of the educational literature reviewed above. Generative AI may be especially useful at lower levels of education, such as high school and undergraduate courses, but less effective at higher levels, such as graduate programs, which deal with more complex information and require more sophisticated reasoning skills. Recall that the research on the use of GPT-4 in law exams shows strong performance on multiple-choice questions but weaker performance on essays and analysis of complex fact patterns.
Yet, it is possible that this notion of legal exceptionalism is overstated. Legal education places a strong emphasis on developing writing abilities, which is a central focus in the literature on teaching with ChatGPT. Moreover, generative AI’s legal capabilities are advancing, which may help reduce concerns about errors and low-quality legal outputs.
V. Recommendations: The Case for AI-Integrated Legal Education
The perspectives reviewed in Parts I through IV collectively point toward the same conclusion: the time has come to begin incorporating generative AI in law teaching. Part I suggests that generative AI is already being adopted widely in the legal profession and transformative change is likely on the horizon. Most of the surveyed law students and faculty highlighted the need to develop AI-integrated curriculum, although with important caveats and limitations. The educational literature cited in Part IV paints a highly favorable portrait of teaching with ChatGPT and describes AI-integrated exercises that enhance traditional learning goals and develop technological competencies.
This Part includes nine key considerations and counterarguments that are decision-relevant for legal educators across teaching contexts. Legal educators may hold diverse views about these considerations, and some may feel that additional relevant considerations have been omitted. Moreover, analysis of these considerations should evolve as the technology advances in unpredictable ways. Thus, this Part may be best understood as a starting place for discussions of whether to incorporate generative AI for specific educational uses. These considerations may also serve as a helpful reference for instructors explaining AI policies to students.
A. Emerging Technological Competencies of Practice-Ready Graduates
As reviewed in Parts I through IV, one of the leading arguments for incorporating generative AI in legal education is that lawyers are adopting this technology in practice. LexisNexis has already released a system that uses large language models (including GPT-4) to assist in legal research and automate the drafting of legal documents. Lawyers are eagerly subscribing to new generative AI applications, with some start-ups claiming to have thousands of law firms on their waitlists. As these legal AI applications improve, they may soon lead to a world where using generative AI becomes a standard part of legal research and writing tasks. The surveyed faculty estimated an average seventy-one percent likelihood that the next decade will see generative AI assisting in most of what lawyers do. Some surveyed faculty emphasized that teaching students to use generative AI is now crucial to the goal of preparing practice-ready law graduates.
This notion that legal educators should teach AI literacy as a core professional competency may align with the trend toward experiential learning and calls for a more practical legal education. Some legal employers are already beginning to expect that students know how to use generative AI tools. Senior attorneys may look to their young associates in hopes that they, owing to their youth, have special insight into emerging technology. From the student perspective, learning to use AI tools may enhance job security and competitiveness in a changing profession. As one surveyed student noted, “[I]t’s not that AI will be replacing lawyers, but it’s that lawyers who use AI will replace lawyers who don’t.” Faculty respondents offered nearly the exact same line, which is becoming a trope throughout the legal profession. An AI-integrated curriculum might help produce lawyers who are less likely to be replaced. This notion that teaching with generative AI can help to future-proof students is being raised across educational contexts.
Preparing practice-ready graduates requires not only instructing students in the proficient use of AI tools but also instilling in them a commitment to use these tools responsibly. Already, there are several instances of lawyers submitting AI-generated legal documents that contained hallucinated case citations. Instructors can help students identify ethical pitfalls in using generative AI before students encounter the pressures of legal practice. Students can be taught to evaluate AI outputs for the accuracy and quality of information, a crucial skill as generative AI is incorporated in the legal research platforms that lawyers use on a regular basis. Students can also learn how AI tools enhance efficiency and reduce the tedious aspects of legal practice. This could lead to more effective lawyering and possibly enhanced well-being in a profession afflicted with mental health struggles.
Counterarguments. Some of the surveyed faculty were skeptical that generative AI will soon have a transformative impact on the legal profession, and thus disagreed that teaching with AI is an important way to prepare students for their careers. Although most lawyers say they plan to use generative AI in their practice over the next year, a recent survey found that many have not yet done so. Perhaps we should wait and see how the technology develops, particularly as we are still in our initial encounter with this new wave of text-generating AI systems. We may be caught up in a moment of hype. As one student put it in an emotional response to the release of ChatGPT, it is “half exciting, half terrifying!” It may be prudent to prepare students for the legal profession of today, rather than trying (and perhaps failing) to anticipate where the profession may be headed tomorrow.
Other surveyed faculty who believe that transformative AI is coming to the legal profession nevertheless disagreed with the notion that AI-integrated teaching is the best way to prepare practice-ready graduates. Some suggested that legal education should instead shift to an emphasis on the more “human-centric” aspects of legal practice—those aspects that AI is less likely to replace. These respondents framed this shift as an emphasis on “soft skills,” including negotiation, strategy, ethics, judgment, client collaboration, leadership, interaction, cultural competence, problem solving, and dispute resolution. Thus, legal educators should help students “move up the skills ladder,” focusing on higher-level strategic and interpersonal aspects of legal practice, while generative AI increasingly assists in the technical aspects of legal research and writing. As one faculty respondent noted, limiting the focus of legal education to the efficient production of AI-assisted legal documents may convey the notion that lawyers are “memo-producers and issue spotters,” rather than protectors of clients, the legal system, and the public interest. As Professor Milan Markovic has observed: “We can’t act like robots and not expect to be replaced by robots.”
B. Inviting Students into Discussions About the Future of the Profession
Students are the future of the legal profession and should be invited into discussions about the emerging integration of machine and human intelligence in the practice of law. As the technologists in Part I suggest, we can expect continued progress in AI’s legal capabilities. At a minimum, the likely near future includes powerful legal AI assistants, which are already being deployed. The upper bound of possible futures includes robot lawyers who perform most legal tasks at or above the proficiency of human lawyers. As discussed in Part II, students are already asking what it means to be a lawyer when core legal skills may be on the verge of AI replacement. Many students feel disoriented by the past year, when AI went from scoring below the first percentile on the bar exam to passing it comfortably.
Students should be prepared for their role in shaping how generative AI is used in the legal profession. This includes questions of ethics, the possibility of automated legal services, threats to the rule of law, and the changing nature of the lawyer role. Some students may embrace the notion of “liberating professional expertise” (to quote Susskind) through automated legal agents, although this could lead to a sense of self-contradiction. Students’ support for the development of powerful legal AI tools might be driven by public-interest ideals, but they may also perceive these tools as a threat to their livelihoods and their self-conceptions as lawyers. Professionals in other fields have already experienced significant disruption from generative AI (e.g., artists, journalists, and programmers) and have reported feeling dejected and somewhat mystified when AI accomplishes much of what they understand to be their craft. Students are hungry for guidance and informed conversations about these issues.
Counterarguments. It may be prudent to wait until there is more certainty that generative AI will play a major role in legal practice before engaging students in conversations about future worlds of transformative AI. As discussed in Part I, some legal scholars are skeptical that AI will be capable of taking over most of what lawyers currently do. It is also possible that the use of powerful legal AI will be constrained by bar regulators. Moreover, many students are focused on more immediate objectives, such as seeking employment and passing the bar exam. These students might prefer to limit discussions about the future in the legal curriculum.
C. Legal Learning
A key finding from the pedagogical experiment described in Part II is that AI-integrated teaching can support traditional legal learning goals. For example, students tasked with using generative AI in their writing can be encouraged to converse with the AI rather than relying on the AI’s first response to a prompt. This iterative process requires students to carefully reformulate their prompts, ask follow-up questions, and identify the often subtle errors and omissions in seemingly polished AI responses. Some of the surveyed students noted that this process required greater attention to detail, knowledge, and discernment when compared to their approach to traditional law school quizzes and assignments.
The new educational literature on teaching with ChatGPT has emphasized similar learning benefits, as reviewed in Part IV. By iterating with generative AI, students can receive personalized feedback through an active learning process. This contrasts with law school classrooms, where each student generally has only occasional opportunities to engage in dialogue with the instructor. Students can ask AI applications to provide hypotheticals that help “transfer” their knowledge to new contexts, a crucial step in the process of understanding difficult concepts. By permitting students to use these tools, educators may be able to elevate their learning goals with a greater focus on critical thinking and more advanced legal concepts. Like a calculator in a mathematics class, generative AI can help with some technical aspects of a student’s legal work so that the class can move on to deeper understandings and real-world applications.
Counterarguments. A few of the surveyed students and many of the surveyed faculty expressed concerns about how AI-integrated teaching might inhibit traditional learning goals, such as lessons in legal reasoning, language, writing, and advocacy. Current generative AI applications might not (yet) be good enough to provide law students with high-quality tutoring and sufficiently accurate legal information and analysis. Or the AI might be too good. Students might use generative AI as a shortcut to complete assignments rather than learning the material themselves. For example, a student could use AI to distill key points from a difficult judicial opinion in the casebook, rather than reading the case. This could undermine the learning benefits that come from struggling through the complexity of legal texts—which reflects the complexity of legal disputes. Another example would be students relying on AI-generated writing uncritically without putting in their own work to challenge, validate, and revise (or reject) the AI output. If AI serves as a “crutch,” to cite the term used by surveyed students and faculty, students might be left unprepared for the bar exam and for competent legal practice.
D. Equality Among Students
To quote a surveyed law teacher, incorporating generative AI in the legal curriculum may help to “level the playing field” among students. If law schools do not offer instruction in the use of emerging legal AI tools, some students might be left behind. A subset of students who are tech oriented and have available time and resources will gain a head start in using legal AI—before they meet with potential employers and start their legal careers. This inequality might be mitigated if law schools ensure that all students have a basic familiarity with the leading AI tools.
The use of generative AI in legal pedagogy could also help address other issues of equality in the learning process. The educational literature on teaching with ChatGPT suggests that the personalized learning experience of interacting with conversational AI can help students who learn in different styles and at a different pace than the majority of their peers—and in a way that differs from the instructor’s approach. Moreover, recall that the research on student use of GPT-4 in law school exams found that students who are generally the lowest academic performers experience the greatest benefit from using AI. These students rose, on average, forty-five percentile points when they used GPT-4. The authors of the study suggest that generative AI may have an “equalizing effect on the legal profession,” with a more uniform distribution of legal proficiency. It is not clear what implications this finding has for learning. It is possible that collaborating with generative AI may have revealed latent potential in otherwise low-performing students, who might have a deep understanding of legal concepts yet struggle to adapt to the format of legal instruction and assessment.
Counterarguments. While an AI-integrated curriculum would ensure that all students have some exposure to emerging AI, it could conceivably widen inequalities by prioritizing technological abilities that favor some students over others. If students are permitted to use generative AI in graded assessments, the students who are most proficient with AI applications may gain a grading advantage. Considering that the use of generative AI is emerging as a valued professional skill, perhaps it is fair for this professional competency to factor into grading criteria. However, it is also plausible that technological abilities would play an outsize role in academic performance, leading to assessments that fail to accurately measure students’ legal knowledge and capabilities.
These equity concerns may be exacerbated by the rapid pace of change and innovation in the field of legal AI. Instructors might struggle to keep up with the AI applications that are available to students, making it difficult to ensure that students have access to the same technology. An instructor could conduct a thorough review of current legal AI offerings, although for the author of this Article, working with a dedicated research assistant, this task required several months of attending trainings and demonstrations and experimenting with various applications. Such a scan of current technology may need to be updated over the course of an academic term as legal AI applications continue to be released and improved. It is therefore difficult for an instructor to establish a consistent policy about the use of specific applications at the beginning of the term. For example, at the time this Article was written, just before the Spring 2024 semester, it was not clear when Lexis+ AI and Microsoft 365 Copilot would see a wide release of the powerful new generative AI tools under development. Should these tools become available midterm, disparities may arise if some students gain access earlier than others due to prohibitive cost or differing levels of engagement with emerging technology. In light of these uncertainties about student access to the technology, it may be wise to prohibit the use of generative AI in certain aspects of the legal curriculum and assessments.
E. Academic Integrity
Another argument for permitting students to use generative AI is that, in some contexts, perhaps no feasible alternative exists. Students have physical access to generative AI tools in many traditional law school assessments, such as take-home writing assignments and open-internet exams. Even if educators attempt to ban the use of emerging AI tools, students may continue to submit AI-generated work in violation of course policies.
Students have always had means to cheat in law school, including talking or messaging with classmates, paying someone to do the work, copying and pasting materials without citation, sharing exam questions with students who have not yet taken the exam, or consulting with other acquaintances who have legal expertise (e.g., lawyers in the family). However, such methods of cheating necessitate the involvement of co-conspirators, a requirement that not only introduces significant difficulties and risks but may also provoke particularly intense ethical qualms for the student who cheats.
In contrast, cheating with AI is as easy as opening an application that many students use on a daily basis. This does not require conspiring with another person, who might decline the request, shame the would-be cheater, or even report them. The temptation to use generative AI, even when prohibited, may grow as students become increasingly accustomed to using this technology in internet searches (e.g., Bing powered by GPT-4), legal research (e.g., Lexis+ AI powered by GPT-4 and other large language models), and word processing (e.g., Microsoft 365 Copilot powered by GPT-4). The ubiquity and normalcy of using these tools may diminish the student’s sense of an ethical imperative to comply with a course policy prohibiting the use of generative AI. Moreover, cheating with AI carries minimal risk of being caught, as AI-detection software is far from perfect, producing significant rates of false positives and false negatives. Even if AI-generated text is “watermarked,” students can evade detection by using “rewriting” software, which replaces some text with synonyms. Or students can use AI outputs as a reference without verbatim copying and pasting.
If many students use generative AI in spite of prohibitions, a blanket ban could prove ineffective—essentially rewarding those students who ignore the ban. To uphold academic integrity, it may be more practical to allow all students to use generative AI. Under this permissive approach, instructors may wish to provide students with instructions on the use of generative AI so that they are on a more equal footing with the technology. As discussed further in Part VI, a permissive policy can be accompanied by explicit guidelines about how students should cite their use of AI applications.
Counterarguments. If one considers AI-facilitated cheating a grave threat to academic integrity, one might favor assessments where students cannot physically access generative AI applications. This can be accomplished through in-person, closed-network exams. This approach reinforces the notion that integrity matters and that dishonesty will not be tolerated in legal education, just as it is not tolerated in the profession.
Interactive assessments, such as debates and presentations, also limit the direct application of AI-generated content. Although students may use AI to assist in their preparation, they must still demonstrate their own knowledge and abilities in responding to questions and opposing arguments.
F. Student Demand for AI-Integrated Instruction
The survey presented in Part II indicates strong student interest in learning about emerging legal AI applications. A large majority (eighty percent) of surveyed students agreed that “all law students should be aware of and learn to use” generative AI applications. As this technology becomes more deeply embedded in the legal research and writing workflow, student demand for AI-integrated instruction is likely to grow. For example, if students become accustomed to prompting Lexis+ AI and Microsoft Word to generate drafts—and if these students know that lawyers are using this technology—instructors who ban these applications may seem out of touch with current practices. Already, educational literature suggests that high school and undergraduate students are using generative AI in their class preparation, notetaking, and essay writing. This trend of “self-initiated adoption” among students, increasingly supported by instructors, may lead to a new generation of law students who expect to use these tools in their academic and professional lives.
If students feel that their instructors are failing to meet demand for an AI-integrated curriculum, this could add a new layer to the long-running criticism of U.S. legal education for its outdated and untested curriculum, grading system, and pedagogy, as well as its globally unprecedented tuition rates. Moreover, free and low-cost Socratic chatbot tutors are being developed in partnership with leading AI laboratories. As already noted, these tools have received significant support in the educational literature for providing personalized instruction and adapting to individual learning needs. If generative AI is already, or will soon be, effective at teaching legal doctrine and skills, it may be reasonable for students to expect these technologies to be integrated in their legal education.
Counterarguments. The enthusiasm for AI-integrated legal education was not universal among the surveyed students, with some reporting that the use of generative AI was distracting and unhelpful. Moreover, the scope of the student survey was confined to a select group of students from a particular law course, which limits the generalizability of the findings. Further empirical research is needed to accurately gauge the extent of student demand.
Even if it can be shown that most students desire AI-integrated learning, this demand does not necessarily resolve the question of whether legal educators should bring generative AI significantly into their teaching. Although educational theory has emphasized collaboration with students, who are cast as “partners” or “clients” of instructors, this literature does not recommend that instructors acquiesce to every student demand, “succumbing to all of [students’] wishes and wants.” Surrendering too many pedagogical decisions to students can risk reducing education to a “consumer model,” undermining the notion that educators impart new information and broaden perspectives. Teachers may have greater insights into educational goals and methods relative to students, who are encountering the course content for the first time and generally lack pedagogical expertise. An educator who, upon consideration of all relevant factors, decides not to incorporate generative AI may be justified in overriding student demand.
G. Engaging Students in the Law and Regulation of AI
Many of the technologists, students, scholars, and teachers cited in this Article suggest that we are entering a transformative age of AI innovation. Some argue that generative AI will have greater implications for society than the advent of the internet or perhaps any other technological development in human history. Even the notion that AI might surpass human intelligence or pose a threat of human extinction has moved from science fiction to mainstream political discourse. These concerns for the future are inherently speculative, but generative AI also raises immediate regulatory challenges, including concerns about privacy, intellectual property, bias and discrimination, misinformation (including legal misinformation), the integrity of elections and other political processes, and safety considerations relating to use in weaponry and other hazardous applications. If generative AI is as important and potentially perilous as many believe it to be, perhaps law schools have an obligation—under the longstanding commitment to “train cross-disciplinary societal problem solvers”—to engage students with cutting-edge AI so that these future lawyers can play important roles in shaping the regulation of this powerful and ever-evolving technology.
Counterarguments. Some of the surveyed faculty expressed doubts regarding the assertion that we are living in a time of transformative AI development. Consequently, they argued that AI should have little influence on the nature of legal education. In contrast, others acknowledged the advent of transformative AI but suggested that only a small, self-selecting subset of law students will build careers focused on AI law and policy. This subset might not require AI integration across law courses in order to pursue impactful careers dedicated to AI regulation. Moreover, reorienting the curriculum to support this small subset might fail to serve the majority of students in the pursuit of their diverse interests.
H. The Feasibility of Teaching with Generative AI
Most of the surveyed faculty concluded that legal education should start incorporating generative AI, although they acknowledged the need for training to familiarize themselves with the technology and its educational applications. The task of teaching with generative AI might be less daunting than initially perceived. A number of academic fields have already demonstrated the feasibility of teaching with AI. One of the striking features of current generative AI applications is their accessibility. Rather than command lines, programming languages, and complex graphical user interfaces, generative AI operates through natural language conversations with the user. The students in the AI-integrated course described in Part II seemed to require minimal instruction to begin using these tools effectively. Similarly, faculty can gain proficiency in these tools, and brainstorm educational uses, through first-hand experimentation.
Counterarguments. Despite the relative ease of using current AI applications, some legal educators may be hesitant to adopt this technology in their teaching, owing to tech aversion or specific reservations about generative AI. Many of the surveyed faculty emphasized that they had only just begun thinking about this technology and were not familiar with the tools available. Some suggested that it would be better for students to explore AI on their own, since students are likely to be more adept with emerging technology relative to their instructors. It may be unrealistic to expect faculty to keep up with their students on the latest AI advances. If instructors tend to lag behind students, it may be difficult for them to teach about the technology and to assess student use of AI in graded assignments.
Even if legal educators are willing to learn about generative AI, some may be unwilling to change their course plans, which have solidified over years of careful consideration and refinement. Legal education is notoriously slow to change and steeped in pedagogical tradition—although recent decades have shown movement toward more inclusive, formative, experiential, and tech-integrated teaching methods.
I. Secondary Benefits for Effective Pedagogy
Incorporating generative AI in their teaching might lead instructors to find other beneficial uses of the technology across different aspects of their professional duties. As the surveyed faculty emphasized, generative AI can reduce an instructor’s “rote activities,” such as creating slides, writing teaching notes, and formulating exam questions. It has the potential to assist with administrative duties, from composing recommendation letters to crafting remarks for events and organizing notes for committee meetings. The adoption of AI tools for some teaching purposes may inspire other ideas for the use of educational technology, such as AI-based feedback mechanisms, interactive educational games, debate exercises, and simulated dialogues with virtual characters.
Counterarguments. Some of the surveyed faculty suggested that AI applications are not yet capable of aiding law teachers effectively. Current generative AI applications may provide flawed teaching materials and substandard feedback on student work. Moreover, perhaps the question of whether to bring AI into coursework should be kept separate from the question of how AI might be useful to teachers in other respects. Law schools can, for instance, provide regular training to update instructors on new educational tools, as well as the use of generative AI for some administrative work, without necessarily integrating AI throughout the legal curriculum.
VI. Implementation: Designing an AI-Integrated Legal Curriculum
This Part discusses the integration of generative AI in specific law school courses, exercises, and assignments. If one concludes that the factors presented in Part V make an especially strong case for AI-integrated legal education, one might favor an “immersion” approach, in which students use AI tools throughout the curriculum. Some legal scholars have long called for a “coupling” of legal learning and emerging technology. But Part V also raised significant downsides associated with bringing AI into legal education, including concerns about learning, equity, and feasibility, as well as a general sense that we should wait and see how the technology develops. These considerations might support AI integration in some but not all law courses. Section A takes up this question of where in the legal curriculum generative AI might be most effectively integrated. Section B then discusses specific assignments and course policies.
A. Where Should Generative AI Be Incorporated in the Legal Curriculum?
1. First-Year Doctrinal Courses
The surveyed faculty were split on whether the use of generative AI is appropriate in the first-year doctrinal curriculum. Some argued for a traditional approach, forbidding AI assistance as students learn their foundational lessons in “thinking like a lawyer.” As one faculty respondent put it, first-year students must acquire the “critical thinking skills necessary for crafting well-structured arguments and analyses,” which would be undermined if students were to rely on AI systems to summarize readings, draft papers, and write exams.
Moreover, some faculty suggested that first-year doctrinal courses should be taught without AI because these courses cover material tested on the bar exam, which is unlikely to allow AI access for the foreseeable future. Others emphasized academic integrity concerns, which might be heightened in the first year when grades are especially determinative of students’ job prospects, scholarships, and other opportunities. As already noted, research suggests that some students, especially those who would otherwise score near the bottom of the class, gain an enormous grading advantage when using generative AI on exams. On these grounds, one suggestion that emerged from the faculty survey was to save AI-integrated teaching for upper-division courses, where students would supplement the legal skillsets acquired during a more traditional 1L year.
Yet, other faculty respondents favored bringing generative AI into the first year because this would signal to students the importance of this technology for legal practice. Moreover, teaching with generative AI can support first-year learning goals, as reflected in the Property course described in Part II and the broader educational literature on teaching with ChatGPT described in Part IV.
2. Legal Research and Writing
Some surveyed faculty suggested that generative AI should play a significant role in first-year legal research and writing (LRW) courses. Large language models are already being integrated in legal research platforms, such as LexisNexis, and word-processing software, such as Microsoft Word. As lawyers gain access to these tools, the standard process of legal research may increasingly shift to a question-and-answer format—where the user asks the AI application (e.g., Lexis+ AI) to answer a question or produce a legal document. Lawyers using this technology may tend to start their writing process with an AI-produced draft rather than a blank page. For example, Microsoft Word with Copilot can produce a draft memo based on prompts and background research materials. These applications can also be helpful in the revision process, suggesting edits and providing alternate phrasings.
In an LRW course, generative AI could be utilized throughout the writing process, from research to outlining, drafting, and revising, as well as in preparation for oral arguments. Students could receive some training in the effective use of legal AI tools, including the framing of prompts and the importance of thoroughly validating and improving upon AI outputs. To reduce overreliance on AI, students could begin each step of the process on their own before turning to the AI for additional insights. Alternatively, the instructor could assess not only the final product, such as a legal memo, but also the students’ process of interacting with and responding to generative AI. Students can be required to “show their work,” providing the instructor with a record of prompts and a reflection on how they incorporated AI-generated materials in their final product.
LRW instructors could also advise students to seek writing feedback from a generative AI application. While the expert comments and guidance from instructors may be irreplaceable, instructors have limited time. Generative AI can serve as a useful supplement owing to its indefatigable ability to provide immediate responses to student questions. In the terms of educational scholarship, AI can play a crucial role as an educational “interlocutor,” which in the writing process could help students to overcome writer’s block, develop ideas through conversation, and consider different ways of framing and structuring arguments.
Even if an instructor is not convinced of these learning benefits, banning generative AI might prove infeasible in the context of LRW. Unless these courses shift to in-person writing assignments (i.e., exams), it may be difficult to prevent students from using AI applications. As discussed in Part V.E above, take-home assignments present students with a great temptation to use this technology, which is nearly ubiquitous and difficult to detect. If instructors cannot effectively prevent the use of AI in critical LRW assignments, perhaps they should permit it as a matter of policy—to avoid rewarding AI-facilitated cheaters. And if instructors permit this technology, perhaps they should train students in the use of emerging AI to ensure that all students have at least a basic competency.
3. Courses on Law and AI
It may be unrealistic to expect a great number of law teachers to incorporate an ever-evolving field of legal AI tools within their often firmly established teaching plans. The educational literature recommends providing teacher trainings on generative AI, which could help foster a culture of pedagogical innovation among faculty. These efforts are important but may yield limited returns when faculty lack technological proficiency, are reluctant to alter their teaching methods, or have reservations about the role of AI in legal education.
A more practical approach may be for a small, specialized group of instructors to teach courses focused on emerging AI. The surveyed faculty overwhelmingly supported the notion that law schools should offer such courses. A growing number of law teachers across the country (including the author of this Article) are beginning to teach seminars on AI in the legal profession, as well as curriculum for practicing lawyers, and are building a cohort of faculty to share advice and information for teaching in this area. These efforts are finding support among law schools developing new initiatives, centers, and programs on artificial intelligence. If the ability to use generative AI continues to grow into an important legal competency, law schools and even the ABA might consider mandating that all students complete a certain number of credit hours with a focus on legal AI applications.
AI-focused courses offer an opportunity for students to gain familiarity and proficiency with leading legal AI applications. For instance, students who learn effective prompt engineering techniques may be able to enhance the efficiency and quality of their legal work. Researchers have shown that, when prompted well, GPT-4 could achieve grades as high as an A- or an A on law exams. To further explore the ceiling of AI’s legal capabilities, students in an advanced AI seminar can even develop their own AI applications for specific legal use cases through highly accessible APIs and customization platforms. To take an example at the time of this writing, OpenAI’s custom GPTs can be created without any coding.
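As a concrete illustration of the API route, the following is a minimal sketch of how a seminar student might build a simple legal-drafting assistant on OpenAI’s API. The system instructions, helper function, and example task are hypothetical, and any real tool would require safeguards for client confidentiality and verification of legal citations.

```python
# A minimal sketch of a custom legal-drafting assistant built on the
# OpenAI chat completions API. The instructions and names here are
# illustrative only; a real seminar or clinic tool would need review
# for confidentiality, accuracy, and citation checking.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_INSTRUCTIONS = (
    "You are a drafting assistant for a law school seminar. "
    "Write in plain, precise prose, flag any legal authority you cite "
    "as unverified, and never invent case citations."
)

def draft_document(task: str) -> str:
    """Ask the model to produce a first draft for the given legal task."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_document(
        "Draft a short demand letter for a residential nuisance dispute "
        "involving a neighbor's bright security light."
    ))
```

Even this small example can anchor classroom discussion: the system instructions embody policy choices (here, a rule against invented citations) that students can critique and refine.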
These courses can also teach students how legal AI tools operate, which may enhance their understanding and appreciation of the technology’s strengths and weaknesses. The students in Part II, who took a (lightly) AI-integrated course, were able to disabuse themselves of early misconceptions, including the initial impression that ChatGPT was a near-omniscient legal mind. Teaching students about the inner workings of emerging AI has important implications for legal ethics, including concerns about competency (e.g., awareness of hallucinated materials) and confidentiality (e.g., inputting client information).
Equipped with a deeper understanding of the technology, students will be better prepared for discussions about how generative AI can and should be used in legal practice, and how this technology may affect the future of the legal profession. These courses can provide a venue for informed discussions on these topics. The instructor may wish to contextualize this material with coverage of related issues in AI law and policy, including intellectual property, emerging regulation, discrimination, and use in the judiciary, criminal law, and other aspects of the legal system.
4. Clinics and Experiential Programs
Generative AI could play a significant role in law school clinics and experiential programs, where students can gain firsthand experience with the efficacy and ethics of these tools as applied in practice. Students could use generative AI for a wide array of clinical tasks, including legal research, drafting pleadings, motions, and other documents, communicating with clients, and planning for meetings, depositions, and court proceedings. Instructors can create AI-based simulation exercises to prepare students for client interactions. For example, the AI can play the role of a client in a typical clinic practice scenario, and the student can rehearse asking questions and providing advice and information. Students could also use generative AI during client meetings, where the AI’s efficiency can help students respond in a timely fashion while taking the time to carefully validate AI outputs. Clients could also be encouraged to create their own AI-generated documents, which could be refined and improved by clinic students.
Clinics may also be well positioned to help develop legal AI in a way that enhances access to justice. To date, most legal AI is aimed at elite law firms. If the market is not producing applications that explore AI’s potential to make legal services more accessible, law school clinics can help to address this gap. For example, students in Suffolk’s Legal Innovation & Technology Lab have already been instrumental in developing technology for nonprofit organizations and other Suffolk clinics, including tools that assist in pro se representation and eviction defense. Clinics can help ensure that these tools align with the standards of legal ethics. The most prominent automated legal aid platform currently available, DoNotPay, has been the subject of ethical controversy, highlighting the need for cautious implementation. Clinics can provide a public-interest voice in the development of legal AI and in weighing its benefits and risks. They can also advocate with bar regulators for the appropriate regulatory innovations that may be needed to facilitate (and perhaps limit in some respects) the rise of automated legal aid applications.
B. How Should Generative AI Be Incorporated in Legal Education?
1. Assignments and Exercises
Some legal educators have taken an early step toward AI-integrated teaching by having students evaluate AI-generated legal writing. Typically, the instructor would bring to class a ChatGPT-generated response to a legal question relevant to the course content. Students would then be tasked with identifying errors, omissions, and other weaknesses in the AI’s writing. As a variation on this exercise, the AI output can be produced as a live demonstration. For example, an instructor at the University of Wisconsin Law School led a class where ChatGPT was first asked to list the elements of a claim for age discrimination under Wisconsin law, then cite leading cases and suggest defenses, and then draft a complaint to the Wisconsin Equal Rights Division. At each step of the process, students assessed and discussed the AI outputs. For a variation of this exercise, students can be asked to compare their own writing to an AI-generated response to the same question. Or students can be asked to rank different AI responses to a question, for example, from most incorrect to most correct.
Educational scholars emphasize that having students assess AI-generated writing can offer significant learning benefits. This literature recommends that teachers introduce these exercises with transparent communication about learning objectives, followed by student feedback and reflection on the use of generative AI and its limitations. Students can also be tasked with taking the AI-generated writing as a first draft and then making it their own through revision. This workflow reflects what may soon become a typical use of generative AI in legal practice, where lawyers may begin by asking AI to produce a draft and then working to validate and refine it.
An instructor can also bring generative AI into the classroom through lawyer-client simulations, as discussed in Part II, where the AI played a client and the class collectively played the lawyer inquiring about the factual basis of the client’s nuisance claim. The class can then ask the AI to produce variations on this fact pattern or suggest a similar hypothetical in a very different factual context, such as moving from rural neighbors with a bright security light to apartment neighbors with loud music. By producing immediate and detailed changes to the fact pattern, the AI can facilitate the key educational goal of training “transfer,” that is, it can help students apply knowledge to analogous contexts to better understand the underlying concepts.
An alternative to this simulation exercise would be to have the students prompt the AI to write a story, perhaps in the style of an essay exam question. The students can volunteer different facts to be used in the story, while the instructor inputs the prompts in full view of the class. The facts should reflect key points of legal doctrine. For example, under nuisance law, this would include the extent and character of the harm, the social value of the uses, and the cost and fairness to either party of lessening the harm. The class can then analyze the story and ask the AI to regenerate it using modified or additional facts.
For a more creative version of this exercise, the AI can be tasked with producing a poem, song, image, or video based on the facts suggested by students. These artistic outputs may open up different questions or different ways of thinking about, synthesizing, and remembering the doctrine under study. The interactive nature of these exercises can be highly engaging, bringing the class together to collaborate in the creation of prompts and then reacting collectively to the AI’s often humorous and unexpected artistic renditions. To maximize learning benefits, instructors should actively guide discussion around core doctrinal lessons and other learning goals, avoiding the potential for entertaining but off-topic digressions.
The exercises proposed thus far, where the instructor controls the AI interaction, offer considerable advantages. In their preparation for class, instructors can experiment with different prompts to understand how the exercise might best fulfill learning objectives and avoid tangential or low-quality AI outputs. They can design initial prompts to be inputted before the class exercise begins, which direct the AI to keep a focus on learning objectives and avoid inappropriate language. This instructor-led approach also minimizes any burden on students, eliminating the need for them to learn new software. For instructors who favor a minimal integration of generative AI, these exercises can offer an occasional and brief departure from usual teaching methods.
But the greatest value from an AI-integrated curriculum may come from putting emerging legal AI tools directly in the hands of students. One of the central findings of the pedagogical experiment discussed in Part II was that first-hand experience is crucial for helping students understand the practical value of this technology and correcting their misunderstandings about AI’s legal capabilities. As students employed generative AI in their class assignments, they described moving from initial awe to disappointment, followed by a more pragmatic appreciation for how these tools can assist in their work.
Instructors who mandate the use of generative AI may wish to provide a brief training on legal AI tools, particularly if students have not previously received such instruction. This training could be conducted by the instructor or an outside speaker, supplemented by resources for self-study. Some educators have provided students with guidelines to enhance the effective use of AI tools. An example is a “ChatGPT learning code,” which encourages students to hypothesize about how AI might respond to a prompt, reframe prompts, and then validate outputs.
For writing assignments, instructors can direct students to use generative AI at distinct stages in the writing process, including outlining, drafting, and revision. At any of these stages, students should be encouraged to iterate with the AI application rather than accepting its first response. Instructors can ensure that students engage in meaningful dialogue with the AI by requiring students to summarize this dialogue or even submit their full record of prompts and AI responses.
One approach to writing assignments would be to have students create their own outlines, to ensure that they think through legal issues on their own before being assisted by AI. Students could then turn to the AI application for suggestions for improving the outline, including help with brainstorming arguments and counterarguments. The students could then ask the generative AI to create a draft based on the outline.
Generative AI can also be a powerful aid in the revision process. Students can request feedback on their writing, identifying issues with grammar, structure, and argumentation. Perhaps the greatest benefit to student writing, based on the capabilities of current generative AI, comes from the AI’s capacity for instant brainstorming and proposing alternate phrasings. Students can prompt the AI to refine rough drafts of sentences, paragraphs, or even full papers. The students can ask for several options, selecting the most suitable AI-generated text or creating their own variant based on the AI’s suggestions.
Or this feedback process could be reversed. Some educators have described an exercise in which the student plays the teacher, providing feedback on writing produced by the AI, which plays the student. This approach aligns with research indicating that teaching others is an effective way to solidify one’s own understanding. For example, an instructor might give students a standard initial prompt to submit to the AI application, such as: “Write an essay for me about [class concept].” This prompt could be accompanied by guidelines about the style and structure of the essay. Students, in their role as the AI’s teacher, would then offer the AI detailed feedback and suggested revisions. For assessment of the exercise, students would submit to the instructor their prompts, drafts, feedback, and the final AI-generated essay.
These exercises that put AI tools in the hands of students may help foster more personalized learning. Educational research on ChatGPT recommends that students engage with generative AI as an “interlocutor” in their studies, for example as a debate partner or tutor who provides feedback and illustrations of concepts. If students are able to use generative AI effectively in their preparation for class, this might aid in fostering a “flipped classroom” dynamic, where students come to class with a firm understanding of the basic course materials so that class time can be spent on more advanced topics. Generative AI can help overcome a key obstacle to flipped classrooms, which is that pre-class learning is traditionally difficult for students to accomplish on their own.
In the near future, a promising avenue for AI-integrated legal education may involve “adaptive learning systems,” which adjust to an individual student’s learning pace and style. For example, Khan Academy has taken the lead in developing these systems, including a chatbot tutor (“Khanmigo”) covering a wide range of subjects, though not (yet) typical law school subjects. OpenAI also provides instructions for users to create their own tailored AI tutors powered by GPT-4. These instructions advise the AI tutor to ask students what they want to learn; to provide “explanations, examples, analogies” and other responses tailored to the student’s “learning level and prior knowledge”; and to take a Socratic questioning approach to “help students generate their own answers.” Law students can further adjust how their AI tutor functions, for example by using OpenAI’s customizable GPTs.
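To make the mechanics concrete, the sketch below shows how such a tutor can be configured programmatically through OpenAI’s Python client. It is offered as a minimal illustration: the system prompt paraphrases the tutoring behaviors described above, and the prompt wording, model identifier, and helper function are assumptions for illustration rather than OpenAI’s published materials.

    # A minimal sketch of a Socratic law tutor, assuming access to OpenAI's
    # Python client (pip install openai) and an OPENAI_API_KEY environment
    # variable. The system prompt paraphrases the tutoring behaviors
    # described in the text; its wording is illustrative, not OpenAI's.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    SYSTEM_PROMPT = (
        "You are a tutor for a law student. First ask what the student "
        "wants to learn and what they already know. Tailor explanations, "
        "examples, and analogies to that learning level and prior "
        "knowledge. Prefer Socratic questions that help the student "
        "generate their own answers rather than stating conclusions."
    )

    def ask_tutor(student_message: str) -> str:
        """Send one student message to the tutor and return its reply."""
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model identifier
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": student_message},
            ],
        )
        return response.choices[0].message.content

    print(ask_tutor("Help me work through promissory estoppel."))

Students using OpenAI’s customizable GPTs would supply comparable instructions through a configuration interface rather than code, but the underlying mechanism, a standing instruction that shapes every subsequent response, is the same.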
AI tutors may hold great educational potential, though empirical research is needed to examine their ability to provide sufficiently accurate legal information for instructional purposes. In the near future, we may see the development of commercial AI tutoring applications designed specifically for legal education. Students could be encouraged or even required to use such tools in their legal studies. This could prove a powerful supplement to traditional pedagogy. It is also possible that the potential affordability and personalized instruction offered by a legal AI tutor could disrupt and threaten the conventional model of human-led J.D. education, which is often costly and less personalized. Consequently, it may be prudent for legal educators to teach in collaboration with AI tutors, and to demonstrate that this hybrid (or human-machine-cyborg) teaching model provides the highest quality legal education.
Integrating generative AI tools in legal education, through the methods outlined above or others, can serve as a springboard for class discussions about AI’s role in the legal profession. Students can engage in critical reflections on the technology’s capabilities and limitations, as well as ethical pitfalls. Instructors can assign readings or bring in guest speakers to discuss how lawyers are using emerging AI tools. These conversations can also provoke deeper contemplation about the future of AI, addressing the widespread student uncertainty and anxiety about how AI might alter or displace the work of lawyers. As discussed in Part I, legal educators should “teach to the uncertainty,” encouraging students to adopt the forward-looking but uncertain mindset of a forecaster, while avoiding the tendency to dismiss or overhype future AI advances.
2. Assessments
One of the most pressing issues for legal educators is whether to permit generative AI in assessments, such as exams, papers, and other assignments. Instructors can take a wide range of approaches to this issue, including efforts to find a golden mean between a fully permissive policy and a fully prohibitive policy. For example, some instructors may permit the use of generative AI under specific circumstances and for specific assignments but not others; the syllabus would then inform students when and how AI tools may be used. Instructors might take a segmented approach, authorizing the use of generative AI for only some segments of an assignment. For instance, AI might be permitted during brainstorming and initial outlining but expressly prohibited during drafting and revision. Or the policy may permit consulting with AI but not copying text directly from an AI output into the student’s own work.
Some educators have permitted students to use generative AI on the condition that it is properly acknowledged and cited. This approach roughly mirrors the policies of some judges, who require lawyers appearing in their courts to disclose the use of text-generating AI. A parallel trend can be seen among academic journals that have required explicit acknowledgement of applications like ChatGPT. Class policies under this framework could ask students to share their interactions with the AI application, including their prompts, the AI’s outputs, and how the students incorporated those outputs in their writing. For some applications (including ChatGPT), this can be accomplished conveniently by exporting the full user-AI dialogue. Instructors can also offer guidance on citation format, such as a requirement that citations include the name of the AI tool, the date of the AI interaction, and the wording of prompts.
Requiring students to document their use of generative AI could enhance academic integrity by providing instructors with a clear delineation of the AI’s contribution and the student’s contribution to a graded assignment. This requirement may help foster a more process-based form of assessment, as the instructor would gain a window into the student’s thinking along the way to the final product. It may also help ensure that students iterate with the AI, rather than relying on an initial AI output, because students would face the accountability of submitting their AI dialogues to the instructor. Absent a requirement that students detail their AI interactions, an instructor might unwittingly assess, grade, and provide feedback on work that was entirely AI-generated with little student input.
Although requiring students to acknowledge the use of AI may hold great potential benefits, practical challenges will arise as generative AI becomes increasingly intertwined with everyday technological tools, including legal research, internet search, and word-processing applications. As this technology becomes more ubiquitous, it may become more analogous to the air that students breathe than to a single application (e.g., ChatGPT) that students consult from time to time. Asking students to cite every step in their process that involved generative AI, as this technology permeates routine functions like typing in Microsoft Word or searching for case law in LexisNexis, might prove impractical. For example, if AI tools provide advice or automate revisions without any prompting from the student (e.g., Grammarly, which currently operates as an add-in in Microsoft Word), it may grow difficult or tedious for the student to document these interactions.
Since the release of ChatGPT, some instructors have proposed the idea of “AI proofing” their exams. One such strategy involves devising exam questions that are tailored to the specific course content presented by the instructor. For example, an exam question could ask about a particular class discussion or a distinctive theoretical framework used by the instructor, which is not in the large language model’s training data. However, this approach may inadvertently play into the strengths of generative AI. As already noted, researchers have shown that GPT-4’s highest performance on law exams (in the A- to A range) comes from “grounded prompting,” where the AI is given teaching notes as a reference when answering questions. Given the increasing context lengths available in consumer generative AI applications, students can input an extensive portion of their class notes, including verbatim transcripts of what was said in class, or a full course outline. Grounded in these background materials, a generative AI application may give especially strong exam answers.
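To illustrate why this strategy offers little protection, the sketch below shows the basic pattern of grounded prompting: course materials are simply prepended to the exam question so the model answers with reference to them. It again assumes OpenAI’s Python client; the file name, model identifier, and prompt wording are illustrative assumptions and do not reproduce the prompts used in the cited studies.

    # A minimal sketch of "grounded prompting," assuming OpenAI's Python
    # client. File name, model identifier, and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()

    # Course materials supplied as grounding context (e.g., a course
    # outline or class transcript saved as plain text).
    with open("course_outline.txt", encoding="utf-8") as f:
        course_notes = f.read()

    exam_question = "Apply the doctrines covered in class to the facts below..."

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model identifier
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer the exam question using the doctrines and "
                    "frameworks in the course notes provided."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Course notes:\n{course_notes}\n\n"
                    f"Exam question:\n{exam_question}"
                ),
            },
        ],
    )
    print(response.choices[0].message.content)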
Another AI-proofing strategy would be to exploit current limitations of generative AI, such as its potential lack of access to legal research and its struggles with complex legal reasoning. But these efforts aim at a moving target due to the rapidly evolving nature of legal AI technology. Already, some suggestions for this form of AI-proofing appear to be obsolete, such as framing exam questions around recent or relatively obscure cases that are not in the AI training data. This approach may be undermined by the development of applications that provide large language models access to world-leading legal research databases (e.g., Casetext’s CoCounsel and Lexis+ AI).
Perhaps a more promising approach is to craft exam questions that surpass the legal reasoning capabilities of generative AI, although instructors relying on this approach should verify that the leading legal AI applications—in an ever-changing field of legal technology—cannot provide a high-quality exam answer with minimal student contribution. This verification process would be greatly aided by continued empirical research on generative AI’s performance in law exams, in the exemplary manner of studies led by Professors Jon Choi and Daniel Schwarcz in the first year of ChatGPT.
For instructors who deeply integrate generative AI in their teaching, designing more difficult exams may serve another function in addition to AI-proofing. The educational literature reviewed in Part IV suggests that teaching with generative AI can lead to more sophisticated class discussions covering more advanced material. Recalling the analogy to the initial reception of calculators in mathematics education, law teachers might consider allowing students to use generative AI as an aid in basic reading, research, and writing tasks, thereby elevating the in-class focus to the equivalent of more advanced calculus. It remains to be seen whether these findings from educational research apply in the context of legal education. If so, law teachers can accordingly shift their grading criteria upward, with a greater focus on the factors that traditionally distinguish the top exams in a law school class, such as quality of argumentation and nuanced understandings of complex doctrines.
Instructors who permit generative AI in exams and other assessments should establish a clear AI policy. Ideally, this policy should be promulgated in the course syllabus from the beginning of the term, defining the contexts in which AI tools may be used, the conditions attached to their use, and the manner in which AI use will be assessed for grading purposes. Students should be reminded that they are responsible for the accuracy of their work, with guidelines about citing AI applications. Instructors may also wish to caution students against sharing any identifiable information or sensitive data when inputting prompts. Students can be encouraged to review and understand any AI application’s terms of use, especially regarding data privacy, to prevent unintended storage or misuse of their information.
For instructors who opt to ban student use of generative AI for some or all assessments, this may require a traditional in-person, closed-network exam format. As discussed in Section V.E, any format that stops short of blocking access to the internet may present students with a great temptation to use generative AI in violation of restrictions. Further measures, such as confiscating students’ phones, could help prevent AI access, where a student might otherwise surreptitiously run an AI application on a phone or use the phone as a “hotspot” to provide internet access to a computer. An even more comprehensive safeguard would involve video proctoring.
These protective measures may prove inadequate as the technology advances and students gain new ways to access generative AI, which may include, for example, future developments in augmented reality embedded in smart glasses or smart contact lenses, new means of connecting student computers to the internet in spite of restrictions, or AI that does not rely on cloud computing. Moreover, extensive policing of student behavior during exams may promote an atmosphere of mistrust among students. Perhaps the only unassailable method of preventing AI use, without resorting to extreme surveillance, would be to give either handwritten or oral exams. Most faculty are unlikely to desire a return to handwritten exams due to the illegibility of student writing. Oral exams, as practiced in many European universities, can provide the instructor a direct opportunity to inquire about each student’s depth of knowledge, understanding, and ability. But instructors may consider this approach overly time-consuming and worry that it shifts focus away from legal writing, a crucial skill for legal practice and preparation for the bar exam.
Conclusions
Generative AI has already shown impressive legal capabilities. The near consensus among students and faculty surveyed for this Article is that law schools should begin adapting now, ensuring that students learn to use and critically evaluate the AI applications that are rapidly becoming a standard feature of legal practice. Educational scholars have identified a wide range of innovative pedagogical uses for this technology. Drawing from these perspectives, this Article has advocated for the integration of generative AI in legal education, citing a number of factors including preparing practice-ready graduates, improving legal pedagogy, enhancing equity, and inviting students into broader discussions about the changing legal profession that they will inhabit.
However, there is considerable room for debate about the specific courses and assignments where generative AI is most appropriate. This Article is an early step in a larger conversation that is just beginning. Legal educators should continually update their views as the technology develops and new insights emerge from the experience of teaching with generative AI. It may be prudent for administrators to permit individual faculty members to set their own course-specific AI policies. This would allow law schools to hedge somewhat on the question of AI’s proper role in the legal curriculum, as some faculty maintain traditional pedagogies while others draw more heavily on emerging technology. The resulting diversity of teaching methods could serve as a laboratory in the collective effort to find best practices for teaching law in the age of generative AI.