Generative artificial intelligence (GenAI) has great potential to help self-represented litigants trying to enforce their rights. But when consumers use these tools for assistance, will the providers, such as OpenAI, Google or Anthropic, be guilty of the unauthorized practice of law (UPL)?
Statutes in all 50 states make it illegal to provide legal services without a license, and the penalties can be severe, ranging from fines to criminal charges. In a few states, UPL is a felony.
Regulations governing UPL are designed to protect consumers from unqualified or fraudulent legal advice, yet these same rules unintentionally leave an estimated 80 percent of individuals without access to legal assistance. The consequence is a legal system in which many must fend for themselves, at a significant disadvantage compared with represented parties. Enter artificial intelligence (AI), a powerful technology that promises at least to help fill the gap but also challenges the very foundations of UPL regulation. How should policymakers adapt UPL statutes to balance consumer protection with the need for scalable, accessible legal solutions?
Understanding UPL: Origins, Intent and Impact
It is easy to imagine that UPL regulations date from antiquity or English common law, but in fact they are a relatively recent phenomenon. For more than 200 years in the United States, from colonial times until the late 1920s, unlicensed practitioners could lawfully assist people in exercising their legal rights, performing activities that today would clearly be considered the practice of law. The only activity reserved to licensed lawyers was in-court representation of clients; anyone could prepare filings for the court, fill out forms, draft legal documents or do other tasks considered “administrative” without being admitted to practice law.
That openness ended during the Great Depression, when virtually every state set up a committee to investigate UPL and many passed regulations governing unauthorized practice. These regulations were designed to protect clients from legal assistance that was incomplete, incompetent, negligent or fraudulent, whether from lawyers practicing outside the jurisdiction in which they were licensed or from people not licensed to practice law at all. While the intent was laudable, the implementation has often been overbroad, restricting even competent assistance from allied legal professionals.
Today, UPL regulations are typically enforced in two ways: by state bars regulating licensed lawyers and through statutory controls on unlicensed individuals, including software companies. This article focuses on the latter category, exploring how AI tools fit—or don’t fit—within the framework of these rules.
The governance of unlicensed individuals has long been fraught with ambiguity. While the American Bar Association (ABA) has provided Model Rules for lawyer conduct, there is no uniformity in how states define or enforce UPL statutes. Even within a single state, there is often no consensus about what actions constitute “the practice of law.” Crucially, UPL violations do not require evidence of consumer harm, meaning these laws can stifle innovative legal solutions that might otherwise help millions.
The Justice Gap and the Role of AI
The American legal system faces a profound access-to-justice crisis. Millions of Americans encounter legal problems every year without adequate representation. Research highlights that those who proceed without legal support are overwhelmingly disadvantaged in court. This is particularly true in civil matters, where the stakes may include housing, debt or custody. Despite this need, the cost of legal services remains prohibitively high for most, and legal aid organizations are stretched thin.
GenAI tools, such as OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude, offer the prospect of relief. Unlike specialized legal AI tools marketed to lawyers, foundation models are general-purpose systems accessible directly to consumers. These tools can draft documents, summarize case law and provide procedural guidance at a fraction of the cost of traditional legal services. They hold immense potential to democratize access to justice.
The use of off-the-shelf foundation models raises pressing regulatory questions. Can software “practice law”? Should individuals using AI tools, or the makers of AI tools, be considered in violation of UPL statutes? And how can regulators ensure that these tools provide accurate, reliable information without unintentionally excluding them from the market?
Foundation Models Versus Legal-Specific AI Tools
To address these questions, it is essential to distinguish between foundation models and fit-for-purpose legal AI tools. Legal-specific AI systems, often built using retrieval-augmented generation (RAG) techniques, are designed to support lawyers. Tools like Vincent AI from vLex or CoCounsel from Thomson Reuters are marketed exclusively to licensed professionals, who remain responsible for the advice provided. These systems enhance productivity but do not fundamentally alter the lawyer-client dynamic.
Foundation models, by contrast, are accessible to anyone. These systems are not trained specifically for legal applications but can be prompted to perform legal tasks. This democratized access has significant implications for self-represented litigants, who may use these tools to navigate complex legal procedures. Without the oversight of a licensed attorney, however, there is a risk of incomplete or inaccurate guidance. And unlike fit-for-purpose legal AI tools, which draw from specialized legal research databases, off-the-shelf foundation models generate only statistical approximations of answers and often “hallucinate” credible-sounding (but nonexistent) citations.
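To make the architectural difference concrete, the Python sketch below contrasts a bare foundation-model call with a simple RAG-style call. It is a minimal illustration under stated assumptions, not any vendor’s actual implementation: the tiny corpus, the keyword-overlap retrieval and the model name are hypothetical placeholders, and a production legal tool would instead retrieve from a vetted legal research database, typically using vector search.

```python
# Minimal sketch: bare foundation-model call vs. a retrieval-augmented (RAG) call.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
# The corpus, retrieval method, and model name are illustrative placeholders.

from openai import OpenAI

client = OpenAI()

# Toy stand-in for a curated legal research database.
CORPUS = [
    "Eviction notice (hypothetical): landlords must give 30 days' written notice.",
    "Small claims (hypothetical): claims up to $10,000 may be filed without a lawyer.",
    "Filing fees (hypothetical): fee waivers are available for low-income litigants.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Crude keyword-overlap retrieval; real systems use vector search."""
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:k]

def ask_bare(question: str) -> str:
    """Bare foundation-model call: answers from training data alone,
    so citations may be plausible-sounding but nonexistent."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_rag(question: str) -> str:
    """RAG-style call: the prompt is grounded in retrieved passages, and the
    model is instructed to answer only from those sources."""
    sources = retrieve(question)
    prompt = (
        "Answer using ONLY the sources below, cite the source you relied on, "
        "and say 'not found' if the sources do not answer the question.\n\n"
        + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        + f"\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    q = "How much notice must a landlord give before eviction?"
    print("Bare model:", ask_bare(q))
    print("RAG-grounded:", ask_rag(q))
```

Even in this toy setup, the grounded prompt constrains the model to cite retrieved sources or admit ignorance, which is the basic mechanism fit-for-purpose legal tools rely on to curb hallucinated citations.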