Litigation Journal

Winter 2025: Anniversary

The Future of AI in Law: Embracing the Hallucinations

Christopher John Schwegmann

Summary

  • It is clear that AI is a powerful tool, and many in the legal profession still underestimate AI’s long-term potential.
  • AI’s true impact may lie in the future, by freeing us to focus on strategic thinking, client counseling, and creative problem-solving.
  • Our task is to harness this potential thoughtfully and ethically.

Mark Twain once said of the typewriter: “The early machine was full of caprices, full of defects—devilish ones. It had as many immoralities as the machine of today has virtues.” Twain’s frustration with the early typewriter mirrors the unease many lawyers feel toward artificial intelligence (AI) today. Technology’s first steps are often shaky, and resistance is natural—but history has shown that tools once dismissed as capricious can ultimately reshape entire professions. Just as Twain wrestled with the erratic typewriter, lawyers today grapple with AI’s quirks—especially its so-called “hallucinations”—those instances where the system invents facts, quotations, and even case law.

The legal profession, steeped in tradition and precedent, has always had a cautious relationship with new technology. Take the introduction of Westlaw and LexisNexis as an example. Initially, the legal community resisted the adoption of computer-assisted research. How would law professors teach law students to perform legal research if not by thumbing through digests? Would the use of these electronic research tools diminish the role (and the number of billable hours) of junior lawyers? Would more seasoned litigators miss binding precedent by searching for authority with inexact or wrong Boolean search terms? Yet today, few would deny that computer-assisted research revolutionized the practice of law, offering efficiency, accessibility, and accuracy that were previously unimaginable. The trajectory for AI in law seems poised for a similar story—overblown fears today, undeniable transformation tomorrow.

Roy Amara, a researcher and former president of the Institute for the Future, famously observed that we tend to overestimate a new technology’s effect in the short run and underestimate it in the long run—its true, transformative potential becomes apparent only over time, as it is integrated into society. Many of us, in the initial wave of AI’s introduction into the practice of law, overestimated AI’s immediate (and mostly negative) effects on law practice. One lawyer in New York faced sanctions after an AI-generated brief cited several nonexistent cases. The legal community gasped—not at the lawyer’s obvious error, but at the audacity of the machine. These events, of course, made headlines, causing anxiety among lawyers and courts alike. In response, courts across the nation quickly adopted sometimes conflicting local rules and standing orders governing the use of AI, and some have even banned its use entirely.

Yet, as the dust settles, it is clear that AI is a powerful tool, and many in the legal profession still underestimate AI’s long-term potential. When legal research transitioned from the law library to digital platforms like Westlaw and Lexis, many initially believed this change would upend the profession. In hindsight, it augmented our capabilities, making us more efficient and better prepared. To be sure, AI cannot replace the nuanced judgment and ethical considerations that remain at the heart of our legal practice. But Roy Amara’s insight into how technologies develop over time reminds us that AI’s true impact may lie in the future, by freeing us to focus on strategic thinking, client counseling, and creative problem-solving—areas that require the very human touch that AI lacks. We must avoid the trap of expecting too much too soon, but we also should not lose sight of the profound ways AI might transform the legal profession in the years to come. Our task is to harness this potential thoughtfully and ethically.

The Foundations of AI

Before addressing specific uses of AI by trial counsel, it is important to understand some concepts that underlie AI. At the heart of the profession’s hesitation toward AI is a fundamental misunderstanding of how these systems work. Traditional legal research tools like Westlaw are deterministic and extractive—they retrieve specific, factual answers based on a set of defined search terms. AI systems, on the other hand, are probabilistic. Generative AI, such as ChatGPT, generates output by predicting the next word in a sentence based on patterns derived from vast amounts of training data. It is not retrieving facts in the same way a search engine does; it is creating language, untethered from any direct correspondence with the real world.
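
To make “predicting the next word” concrete, consider a deliberately simplified sketch in Python. The probability table is invented for illustration; a real model derives its probabilities from billions of training examples, but the sampling step works the same way.

```python
import random

# Invented toy probabilities, standing in for what a real model learns
# from its training data: given the context so far, how likely is each
# candidate next word?
TOY_MODEL = {
    "The court held": [("that", 0.85), ("the", 0.10), ("unanimously", 0.05)],
}

def next_word(context: str) -> str:
    """Sample the next word in proportion to its probability."""
    candidates = TOY_MODEL.get(context, [("[end]", 1.0)])
    words, weights = zip(*candidates)
    # No fact is looked up here. A word is drawn at random, weighted by
    # probability: the likeliest word usually wins, but not always,
    # which is why output can vary from run to run.
    return random.choices(words, weights=weights, k=1)[0]

print("The court held", next_word("The court held"))
```

Nothing in this process checks whether the resulting sentence is true; the system tracks only which words tend to follow which.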

This fundamental difference—AI’s reliance on the relationship between words instead of the words’ relationship with the world—explains why it generates these so-called “hallucinations.” When properly understood, all AI-generated output is a kind of hallucination. Literary theory may offer a more intuitive explanation of how this works than a scientific or technical one. Semiotics, for example, is the study of how we use signs and symbols to create meaning. A central idea is that words are not the things they represent but are instead signifiers that point to the signified, the concept or object those words denote.

Jacques Derrida, a French post-structuralist philosopher, extended semiotic theory with his concept of “différance,” an idea that posits that true meaning is always deferred and that the gap between words and the things those words represent can never be fully closed. For Derrida, defining a word is like trying to catch a shadow—no matter how close you get, it moves just out of reach. Derrida argued that language operates through a system of differences and that words have meaning only in their relationship to other words. He emphasized the instability of language, suggesting that meanings are never fixed or final. For Derrida, language is like a river, constantly flowing and changing. The meaning of words, like water, is never fixed but always in motion, shaped by the context and surrounding landscape of other words.

Generative AI, like language, operates only within a system of symbols (data, words, concepts). When AI generates language, it has no direct “understanding” of the world but is instead just manipulating symbols based on patterns learned from other words and symbols in the training data. In this way, AI is engaged in a semiotic process—it produces signifiers but without any direct connection to the signified.

Derrida’s insight offers a useful lens for understanding AI’s so-called hallucinations. Just as language only approximates meaning, AI generates text by predicting patterns—not by grasping reality directly. The “hallucinations” AI generates are a striking example of how it operates within this gap. Without a grounded understanding of the real world, AI generates words and phrases that seem logical—and even probable—but lack connection to the world those words describe. Its “understanding” is just a form of pattern recognition, always one step removed from the real-world referent. The system continuously refines its predictions through probabilistic association, without any direct comprehension of the world.

But Derrida’s exploration of this endless play of meaning also opens a positive view of these so-called hallucinations. AI’s generation of language (even when “incorrect”) reveals new or unexpected connections, much like the creative potential found in Derrida’s différance. A word that meant one thing a decade ago—like “hipster”—might mean something entirely different today and carry a different connotation. The inability of AI to understand the world can lead to new kinds of meaning—insights generated through novel combinations of words and ideas, much like literary or poetic interpretation that thrives on ambiguity. In this way, AI is inherently creative, especially when understood and deployed properly.

How AI Generates Text

The criticism that AI outputs may be biased, while true, also stems from a misunderstanding of how AI generates text. AI, as a machine-learning system, does not possess agency or intentionality—it does not have beliefs, values, or motivations, and therefore cannot be biased in the human sense. Its “bias” arises from the dataset it is trained on, which reflects the biases embedded in human language, culture, and historical data. Language itself is not neutral, and the gap between words and the things they signify carries with it the weight of cultural, social, and historical contexts. These contexts shape the language patterns that AI learns from, meaning any bias in the data is not generated by an AI itself but is inherited from the society that produced the data.

When AI generates a response, it is not doing so with intentionality but rather predicting the most statistically probable string of words based on its training data. The output mirrors patterns in the data—patterns that may carry implicit or explicit human biases related to race, gender, class, and more. In this way, AI’s outputs reflect the biases present in the structures of human language and thought, much like Derrida’s insight that language never escapes the cultural and social influences that shape it. Just as language cannot be separated from the biases embedded in its use, neither can AI escape the biases present in its training data. AI’s role is simply to manipulate symbols and signs; it is the human’s role to recognize, interpret, and correct these inherent biases.

Lawyers and professionals using AI must act as interpreters, always aware that the signifier (the AI-generated text) is shaped by the broader systems of meaning it emerges from. This process involves scrutinizing AI outputs for biased assumptions or skewed perspectives and correcting them by adjusting how AI tools are used, revising input data, or supplementing the results with human judgment. Lawyers, as interpreters of legal truth, have a crucial role in guiding AI’s use, correcting its errors, and harnessing its capabilities. In this way, the most common criticisms of AI become an argument for human oversight, not an indictment of the tool itself.

AI Hallucinations as Creativity

Consider an unexpected idea: AI hallucinations are not a bug in the system, but a feature. AI systems are not bound by the same logical constraints that limit human thinking. These hallucinations are moments when AI makes connections that we, as humans, might not otherwise conceive. AI does not know what is true, but that is precisely why it can generate novel ideas, associations, and arguments. Hallucinations happen because AI is not tethered to reality in the same way our computer-assisted research tools are (or should be). And that untethering is precisely what makes AI so valuable.

In legal practice, creativity often involves connecting dots that are not immediately apparent. The map is not the territory, as the saying goes, and AI offers us a map that, while not always accurate, can guide us toward unexpected and innovative legal strategies. Nor should we underestimate the value of hallucination.

In When We Cease to Understand the World, the Chilean author Benjamín Labatut blends fact and fiction to explore how scientists like Erwin Schrödinger and Werner Heisenberg wrestled with the strange, counterintuitive concepts of quantum mechanics. One of the fascinating elements in Labatut’s book is how these scientists arrived at their revolutionary ideas not through pure rational deduction but through creative misinterpretation—engaging with philosophy, art, and even fiction, and allowing these influences to shape their scientific thinking in unexpected ways.

Schrödinger, best known for his famous equation and his ill-fated cat, was deeply influenced by Eastern philosophy. He was fascinated by the idea of a unity of consciousness and reality. Schrödinger’s contemplation of these non-Western ideas about the nature of reality played a role in shaping his approach to quantum mechanics, where the concept of superposition (the counterintuitive principle that a quantum system can exist in multiple states simultaneously) parallels the mystical notion that all things are connected and indistinct.

Similarly, Heisenberg, known for his now famous Uncertainty Principle, which asserts that certain pairs of properties (like position and momentum) cannot be simultaneously known with perfect accuracy, was also influenced by literature and broader cultural concepts. Heisenberg struggled with the breakdown of classical physics at the subatomic level, where the certainty of Newtonian mechanics and its equations no longer applied. As he worked through these scientific puzzles, he drew on more abstract, almost fictional ways of thinking to conceptualize a world in which uncertainty was fundamental. Like Schrödinger, Heisenberg expanded his thinking beyond the immediate context of his scientific inquiry. His immersion in broader intellectual currents allowed him to creatively misinterpret the laws of physics, leading him to understand that the uncertainties in measurement were not just technical limitations but intrinsic to the nature of reality itself.

Both grappled with ideas that, at the time, did not make sense in the context of classical physics. Yet, these “hallucinations” of abstract mathematics and paradoxical principles eventually became foundational to quantum theory. Labatut’s literary representation underscores how, throughout history, what initially seemed like wild, irrational ideas—akin to hallucinations—have sometimes paved the way for significant scientific advances. These examples highlight that many scientific discoveries emerge not from strict adherence to method, but from creative misinterpretation—the act of taking ideas, metaphors, or concepts from one realm (like philosophy or fiction) and applying them to another (in this case, quantum mechanics). Schrödinger’s mystical interpretations and Heisenberg’s abstract thinking about uncertainty allowed them to break free from conventional thought and see the problems in quantum mechanics from entirely new angles.

Schrödinger’s and Heisenberg’s willingness to explore irrational ideas offers a lesson for lawyers today: Creative misinterpretation—whether in physics or law—can unlock new ways of thinking. Just as Schrödinger’s and Heisenberg’s engagement with seemingly unrelated ideas led to breakthroughs, AI’s unexpected linkages between legal concepts—though initially misaligned—could inspire lawyers to rethink a case or approach a problem from a new angle. By embracing creative misinterpretation, lawyers might discover alternative ways to frame arguments, interpret precedents, or even imagine new solutions to long-standing legal problems. These moments of “hallucination” or error, when properly interpreted, can be seen not as failures but as opportunities for insight.

AI’s Role in Law

To make sense of AI’s role in law, we might turn to Isaiah Berlin’s famous analogy, which he borrowed from the ancient Greek poet Archilochus: the fox and the hedgehog. The fox knows many things; the hedgehog knows one big thing. AI, like the fox, draws on vast amounts of data, generating diverse and wide-ranging ideas. Lawyers, more like hedgehogs, rely on deeply understood legal principles. Foxes are more adaptable and versatile, drawing from various disciplines, perspectives, and experiences. Foxes are comfortable with ambiguity and change, preferring to tackle problems using a range of methods. The future of AI in law might well involve marrying the two—using AI’s broad creativity while anchoring it in legal truth. Lawyers can rely on AI to generate wide-ranging ideas (the fox) while using their expertise to home in on the most relevant and sound arguments (the hedgehog).

Some lawyers adopt the mindset of the hedgehog, relying on one core legal principle or approach that guides their entire strategy. Hedgehog thinkers are likely more skeptical of AI-generated hallucinations because they see them as distractions from the established methods. However, even a hedgehog can benefit from AI by using it to confirm or strengthen the “one big thing”—ensuring that the hedgehog’s focused approach stands up to creative challenges.

Fox-like lawyers, by contrast, might be more open to AI’s hallucinations, seeing them as an opportunity to explore multiple legal strategies, anticipate different arguments, and test various hypotheses. Foxes can harness these hallucinations to foster innovation, generating novel arguments and anticipating counterarguments in ways that a hedgehog might miss. A key insight from the fox and hedgehog analogy is that the most successful use of AI—and its hallucinations—requires balancing both approaches. While AI is foxlike in generating a wide range of ideas, human lawyers (both foxes and hedgehogs) must act as the arbiters of those ideas, filtering through them to identify which are actionable, practical, and aligned with the legal strategy. The trick is to harness the fox.

Hallucinations can be viewed as an AI presenting a “menu” of creative options. However, just as reading a menu does not constitute eating the meal, these hallucinations are not actual solutions—they are potential options to explore. AI hallucinations, when understood, embraced, and even sought after, offer concrete, creative use cases for lawyers.

For example, challenging core assumptions can be an effective way to uncover blind spots in legal reasoning. A lawyer might prompt an AI to challenge the foundation of an argument by asking, “What hidden biases might be influencing this line of reasoning?” or “Challenge the core assumptions in this legal argument.” Such exercises force a reevaluation of positions, encouraging stronger, more comprehensive reasoning. Similarly, asking an AI to play devil’s advocate helps anticipate counterarguments. Prompts like “Argue against this point as if you were opposing counsel” or “What is the strongest argument an opponent might use to discredit this claim?” allow lawyers to see through the eyes of their adversaries and prepare for potential challenges. It is not unlike brainstorming with an eccentric colleague—sometimes the wildest ideas spark the best insights.

AI can also assist in role-reversal exercises, helping lawyers imagine the consequences of failure. Posing questions like “Assume my argument fails in court—what are the ripple effects?” enables the exploration of worst-case scenarios and provides insight into crafting fail-proof strategies. In addition, AI can reframe complex legal arguments through analogies: “Explain this legal issue as if it were a chess match” or “Break this down as if it were a football game.” Such metaphors reveal new strategic parallels and simplify complicated ideas.

Lawyers can take this further by adopting a “what if” approach, exploring hypothetical scenarios. Prompts such as “What if the facts were different in this specific way?” or “How would this case change in a different legal jurisdiction?” provide alternative perspectives and open new avenues for argumentation. An AI can also lend insight by adopting an interdisciplinary perspective, offering input as though from another profession—for example, “Analyze this argument as a behavioral economist” or “What would a psychologist say about this reasoning?” This broadens the scope of analysis beyond purely legal thought.

A rapid-fire brainstorming session with an AI can generate spontaneous insights: “List 10 things that could go wrong with this argument” or “Give me 5 wild-card insights about this case.” These quick responses inspire creative problem-solving. Similarly, an AI can assist in identifying logical fallacies, with prompts like “Does this argument suffer from confirmation bias?” or “What logical fallacies might weaken this reasoning?” Engaging an AI in time-travel thought experiments introduces historical and futuristic perspectives. Asking questions like “How would this case have been argued in the 1800s?” or “What might this case look like 50 years from now?” provides fresh angles that can challenge contemporary assumptions. An AI can also help lawyers focus on emotional or narrative elements, asking, “What emotional triggers might sway a jury?” or “How can I incorporate storytelling to engage non-experts?” These responses tap into the power of persuasion through narrative.

An exploration of unintended consequences is another valuable use. Lawyers can ask, “What are the unintended societal or ethical consequences of winning this case?” to foresee complications and refine strategies. AI also excels at collaborative brainstorming, offering creative defense ideas with prompts like “Let’s brainstorm three unique strategies for presenting this argument.” Asking for counterintuitive insights—“What advice would most people ignore but could be effective?”—pushes lawyers to think beyond conventional solutions. Imagining dialogues between opponents sharpens argumentation. Lawyers might ask, “Simulate a real-time debate between me and opposing counsel,” helping them identify weak points and refine responses. Finally, AI can offer insights by using different writing styles, simplifying arguments for different audiences. Prompts such as “Explain this argument as a children’s story” or “Present this as if I had only five minutes before the judge” ensure clarity and expose overly complicated reasoning.
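
For lawyers comfortable with a bit of scripting, these exercises can also be run programmatically. What follows is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; any chat-capable model and provider would work similarly, and the prompts are drawn from the strategies above.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

draft_argument = "..."  # paste the draft argument under review here

# A sampling of the brainstorming prompts discussed above.
prompts = [
    "Challenge the core assumptions in this legal argument:\n",
    "Argue against this point as if you were opposing counsel:\n",
    "List 10 things that could go wrong with this argument:\n",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute any capable chat model
        messages=[{"role": "user", "content": prompt + draft_argument}],
    )
    # Treat each answer as a menu of options to evaluate, not as authority.
    print(response.choices[0].message.content)
    print("---")
```

The point of the loop is the spread it produces: running the same draft through several adversarial prompts in one pass turns a single hallucination-prone answer into a range of perspectives a lawyer can then judge.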

Another useful strategy involves asking an AI to approach legal arguments from the perspective of a persona—an identity assigned to an AI to frame its output. These personas can be generic, like “the chief legal officer of a Fortune 100 company,” who might focus on risk management and business strategy, or “a law professor,” who would prioritize theoretical clarity and doctrinal consistency. You might also select client-facing personas, such as “a client facing a legal crisis,” which could yield insights into the emotional or practical concerns that legal professionals sometimes overlook.

In addition, AI can be prompted to adopt historically specific personas—such as Clarence Darrow, whose responses might evoke an emphasis on social justice and rhetorical flair, or Justice Elena Kagan, whose tone might reflect deep engagement with constitutional principles and precedent. AI might even channel the humor and pragmatism of Abraham Lincoln, offering responses grounded in moral clarity, persuasive storytelling, and wit. These tailored personas allow an AI to surface different perspectives and argumentative strategies, simulating the kind of thinking a lawyer might encounter from real-world stakeholders or adversaries.

Interestingly, you can push this concept even further by asking an AI to embody fictional personas it recognizes from its vast training data. Imagine framing an argument from the perspective of Atticus Finch—his measured, moral reasoning might help surface ethical dilemmas or narrative strategies in a case. Alternatively, invoking Descartes’s “Evil Demon” persona might force an AI to challenge foundational assumptions, revealing logical gaps or vulnerabilities. AI as Sherlock Holmes might unearth hidden patterns or inconsistencies in evidence, while his nemesis Moriarty would excel at finding weaknesses or exploiting loopholes in your case, forcing you to strengthen any vulnerable areas or assumptions. And if you are feeling particularly creative, summoning the wisdom of Athena (goddess of strategic warfare) or the artistry of Orpheus (the mythic poet) might inspire responses that are unorthodox but unexpectedly insightful, such as legal metaphors or narrative arcs that human practitioners might never consider.

These personas are not just thought experiments—they have practical applications in case development, trial preparation, and appellate work, transforming an AI into a versatile tool that lawyers can leverage strategically. In preparing jury arguments, for example, prompting an AI to adopt the persona of a skeptical juror can help lawyers anticipate and address potential objections that might arise during trial. A persona like “a cautious suburban juror” might highlight concerns about fairness, while “a juror with a distrust of corporations” could surface biases that shape public perception. These insights allow lawyers to refine opening statements and closing arguments to resonate with diverse jury pools, ensuring more persuasive advocacy.
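
Mechanically, a persona is typically assigned through the model’s “system” message, which frames everything the model says afterward. Here is a minimal sketch, again assuming the OpenAI Python SDK; the skeptical-juror persona and the model name are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# The system message assigns the persona; every subsequent response is
# framed by it. Any of the personas discussed above could be swapped in.
persona = (
    "You are a skeptical juror who distrusts large corporations. "
    "React candidly to the argument below, flagging anything that "
    "strikes you as unfair, evasive, or unpersuasive."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Here is my draft closing argument: ..."},
    ],
)
print(response.choices[0].message.content)
```

Changing only the persona string is enough to re-run the same draft past a cautious suburban juror, a federal judge, or Moriarty.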

In voir dire preparation, lawyers can use these personas to simulate conversations with hypothetical jurors, generating insights into biases, preconceptions, or attitudes that might otherwise be missed and suggesting follow-up questions. Adopting a persona like Athena—embodying wisdom and fairness—might inspire new frameworks for selecting or challenging jurors based on principles beyond conventional demographic profiling. These personas also offer invaluable support in appellate argument preparation. By assigning an AI the persona of a federal judge or even specific judges on the appellate panel, lawyers can simulate the types of questions likely to arise during oral argument. Justice Elena Kagan’s persona might focus on sharp doctrinal questions, while a “devil’s advocate” persona could raise unexpected counterarguments designed to probe weaknesses in the brief. This approach enables lawyers to anticipate the most challenging questions and prepare thoughtful, well-structured responses in advance.

With respect to brief writing, the use of these personas enhances the editing cycle. Assigning an AI the persona of a “law professor,” “a seasoned litigator,” or a “federal judge” might yield technical critiques focused on clarity, structure, or consistency with precedent. Meanwhile, a persona like Abraham Lincoln could offer practical, narrative-driven feedback, encouraging lawyers to incorporate storytelling elements that make arguments more compelling.

Such critiques are not limited to technical legalities—they provoke creative rethinking of strategy, tone, or framing to align better with the needs of a judge or client. Engaging with these contrasting perspectives sharpens the strategy by helping lawyers anticipate opposing counsel’s approach and prepare for contingencies. When properly used, these AI strategies provide a means for critical engagement, allowing lawyers to pressure-test arguments, uncover blind spots, and refine strategies. Much like the brainstorming sessions of seasoned trial teams, personas unlock creative and tactical thinking, leading to better advocacy, sharper briefs, and more effective appellate arguments.

Conclusion

No matter how creative AI may be, it is ultimately a tool. Its value depends on human oversight—lawyers who can interpret its insights, correct its mistakes, and decide which ideas are worth pursuing. Ethan Mollick—an associate professor at the Wharton School, co-director of the Generative AI Labs at the University of Pennsylvania, and author of Co-Intelligence—advocates for an even more robust adoption of AI. He refers to a “cyborg” future, one in which AI and humans enhance each other’s unique capabilities. Just as human creativity involves leaps, intuition, and imagination, AI’s hallucinations are a kind of creative misinterpretation, an almost alien intelligence. These moments of divergence allow human users to fill in the gaps, critique the hallucinations, and use them as starting points for deeper insight. In this way, hallucinations are not just errors—they are raw material for human creativity. When embraced with the right mindset, they help create something new rather than simply verifying existing knowledge. In law, this means using AI not just as a tool for research or document generation but also as a partner in creativity, strategy, and decision-making.

Over the next 50 years, the role of AI in law will undoubtedly grow. As lawyers, we will need to stop overestimating the risks, embrace the creative possibilities, and recognize that AI is a powerful tool—not a replacement for lawyers, but an augmentation of our skills. And if we can do that, the future of AI in law might be far more exciting than we ever imagined. With the right balance, AI can enhance legal practice without replacing the thoughtful, nuanced judgment that defines great lawyering. In 50 years, the lawyers who embraced AI today will be the ones telling the stories—of how they wrestled with the quirks of their tools and came out the better for it.

Mark Twain also said, “The secret of getting ahead is getting started.” As we stand on the brink of a new era in legal practice, it is time to get started with AI, embracing its potential to revolutionize our work. The lawyers who embrace it early will be at the forefront of that transformation.
