
Re-Regulating UPL in the Age of AI

Ed Walters

Summary 

  • AI tools offer self-represented litigants affordable legal guidance, but their use raises concerns about unauthorized practice of law (UPL).
  • Policymakers must adapt UPL statutes to ensure AI tools provide reliable legal support while safeguarding consumers from inaccurate or misleading guidance.


Generative artificial intelligence (GenAI) has great potential to help self-represented litigants trying to enforce their rights. But when consumers use these tools for assistance, will the providers, such as OpenAI, Google, or Anthropic, be guilty of the unauthorized practice of law (UPL)?

Statutes in all 50 states make it illegal to provide legal services without a license, and the penalties can be severe, ranging from fines to criminal charges. In a few states, UPL is a felony.

Regulations governing UPL are designed to protect consumers from unqualified or fraudulent legal advice, yet these same rules help leave an estimated 80 percent of people with civil legal needs without access to legal assistance. The consequence is a legal system in which many must fend for themselves, at a significant disadvantage to represented parties. Enter artificial intelligence (AI), a powerful technology that promises at least to help fill the gap but also challenges the very foundations of UPL regulation. How should policymakers adapt UPL statutes to balance consumer protection with the need for scalable, accessible legal solutions?

Understanding UPL: Origins, Intent and Impact

It is easy to imagine that UPL regulations date from antiquity or English common law, but they are in fact a relatively new phenomenon. For more than 200 years in the United States, from colonial times until the late 1920s, it was generally legal for unlicensed practitioners to help people exercise their legal rights, activities that today would clearly be considered the practice of law.

Throughout that period, the only activity prohibited for unlicensed individuals was in-court client representation. Anyone could prepare filings for a court, fill out forms, draft legal documents or perform other tasks considered “administrative” without being admitted to practice law.

That openness changed dramatically during the Great Depression, when virtually every state set up a committee to investigate UPL and many passed statutes regulating unauthorized practice. These rules were designed to protect clients from legal assistance that was incomplete, incompetent, negligent or fraudulent, whether from lawyers practicing outside the jurisdiction in which they were licensed or from people not licensed to practice law at all. The intent was laudable, but the implementation has often been overbroad, restricting even competent assistance from allied legal professionals.

Today, UPL regulations are typically enforced in two ways: by state bars regulating licensed lawyers and through statutory controls on unlicensed individuals, including software companies. This article focuses on the latter category, exploring how AI tools fit—or don’t fit—within the framework of these rules.

The governance of unlicensed individuals has long been fraught with ambiguity. While the American Bar Association (ABA) has provided Model Rules for lawyer conduct, there is no uniformity in how states define or enforce UPL statutes. Even within states, there is no consensus about what actions constitute “the practice of law.” Crucially, UPL violations do not require evidence of consumer harm, meaning these laws can stifle innovative legal solutions that might otherwise help millions.

The Justice Gap and the Role of AI

The American legal system faces a profound access-to-justice crisis. Millions of Americans encounter legal problems every year without adequate representation. Research highlights that those who proceed without legal support are overwhelmingly disadvantaged in court. This is particularly true in civil matters, where the stakes may include housing, debt or custody. Despite this need, the cost of legal services remains prohibitively high for most, and legal aid organizations are stretched thin.

GenAI tools, such as OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude, offer the prospect of relief. Unlike specialized legal AI tools marketed to lawyers, foundation models are general-purpose systems accessible directly to consumers. These tools can draft documents, summarize case law and provide procedural guidance at a fraction of the cost of traditional legal services. They hold immense potential to democratize access to justice.

The use of off-the-shelf foundation models raises pressing regulatory questions. Can software “practice law”? Should individuals using AI tools, or the makers of those tools, be considered in violation of UPL statutes? And how can regulators ensure that these tools provide accurate, reliable information without unintentionally excluding them from the market?

Foundation Models Versus Legal-Specific AI Tools

To address these questions, it is essential to distinguish between foundation models and fit-for-purpose legal AI tools. Legal-specific AI systems, often built using retrieval-augmented generation (RAG) techniques, are designed to support lawyers. Tools like Vincent AI from vLex or CoCounsel from Thomson Reuters are marketed exclusively to licensed professionals, who remain responsible for the advice provided. These systems enhance productivity but do not fundamentally alter the lawyer-client dynamic.

Foundation models, by contrast, are accessible to anyone. These systems are not trained specifically for legal applications but can be prompted to perform legal tasks. This democratized access has significant implications for self-represented litigants, who may use these tools to navigate complex legal procedures. However, without the oversight of a licensed attorney, there is a risk of incomplete or inaccurate guidance. And unlike fit-for-purpose legal AI tools, which draw from specialized legal research databases, off-the-shelf foundation models produce only statistical approximations of answers and often “hallucinate” credible-sounding (but nonexistent) citations.
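
To make the distinction concrete, here is a minimal sketch, in Python, of the difference between bare prompting and the RAG pattern. Everything in it is illustrative: the toy corpus, the word-overlap retriever and the generate() stub stand in for a vetted legal research database, vector search and a real foundation model API, respectively; no vendor’s actual implementation is shown.

    # Contrast: bare foundation-model prompting vs. retrieval-augmented
    # generation (RAG). All names and data here are hypothetical.

    # Toy stand-in for a vetted legal research database.
    CORPUS = {
        "CPLR 308": "New York CPLR 308 governs personal service of a summons.",
        "FRCP 12(b)(6)": "Rule 12(b)(6) allows dismissal for failure to state a claim.",
    }

    def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
        """Rank sources by naive word overlap; real tools use vector search."""
        q_words = set(question.lower().split())
        scored = sorted(
            CORPUS.items(),
            key=lambda item: len(q_words & set(item[1].lower().split())),
            reverse=True,
        )
        return scored[:k]

    def generate(prompt: str) -> str:
        """Stand-in for a call to a foundation model."""
        return f"[model completion for: {prompt[:60]}...]"

    def answer_bare(question: str) -> str:
        # Foundation-model path: the model answers from statistical
        # patterns alone and may hallucinate plausible citations.
        return generate(question)

    def answer_grounded(question: str) -> str:
        # RAG path: retrieved, citable sources are placed in the prompt
        # so every claim can be traced to a real document.
        context = "\n".join(f"[{c}] {t}" for c, t in retrieve(question))
        prompt = (
            "Answer using ONLY the sources below and cite them by label.\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        return generate(prompt)

    q = "How is a summons served under New York law?"
    print(answer_bare(q))      # ungrounded: citations not guaranteed to exist
    print(answer_grounded(q))  # grounded: citations come from the corpus

The difference is not the model but the scaffolding around it: grounding answers in a curated corpus is what lets fit-for-purpose tools cite real authority where a bare model can only approximate it.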

Challenges of Defining and Enforcing UPL in the AI Era

One of the most significant hurdles in regulating AI under UPL statutes is the lack of a clear definition of what constitutes the “practice of law.” Courts and bar associations have historically struggled with this question, often addressing it on a case-by-case basis. Activities like drafting legal documents or providing procedural guidance frequently fall into a gray area, making it difficult to apply UPL statutes consistently.

The advent of GenAI compounds this ambiguity. Tools like GPT-4 can perform tasks that were once the exclusive domain of lawyers, such as drafting motions or analyzing case law. While these capabilities can empower consumers, they also blur the line between permissible assistance and unauthorized practice.

Enforcement presents another challenge. UPL statutes are rarely invoked against software providers, partly because these tools have historically been limited to administrative tasks like form-filling. As AI becomes more sophisticated, however, regulators may feel compelled to act. Yet, enforcing UPL laws against widely used AI tools would require significant resources and could discourage innovation in legal technology.

Balancing Consumer Protection and Innovation

If the goal of UPL statutes is to protect consumers, we must ask whether a blanket prohibition on AI assistance serves that goal in the face of a yawning gap in access to justice. On the other hand, if general-purpose AI tools hallucinate, we must also protect consumers from incomplete, negligent or even fraudulent advice provided by chatbots not built for the task. And because anyone may bring a UPL complaint, even in the absence of consumer harm, UPL statutes remain a one-size-fits-all answer to a more nuanced question.

UPL statutes often fail to address the broader consumer protection ecosystem. Consumers harmed by faulty legal AI tools already have recourse through laws governing negligence, fraud and false advertising. These remedies provide a more targeted approach than blanket prohibitions, allowing for innovation while still holding providers accountable.

One straightforward step is to require AI tools to include clear disclaimers. For example, tools should explicitly state that they are not lawyers, are not providing legal advice and do not establish an attorney-client relationship. Tools advertised as “robot lawyers” should rightly be regulated as false advertising. This transparency helps consumers make informed decisions about the technology’s limitations. AI providers should also be held accountable under existing legal frameworks for negligence, fraud or false advertising: if an AI tool misrepresents its capabilities or provides incorrect advice that leads to harm, affected consumers should have legal recourse.
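
As a concrete illustration of the disclaimer requirement, a consumer-facing tool could attach the notice to every answer it returns. The wording and the wrap_response() helper below are hypothetical, a minimal sketch rather than language any regulator has prescribed.

    # Illustrative only: surface the disclaimer with every response.
    DISCLAIMER = (
        "This tool is not a lawyer and does not provide legal advice. "
        "Using it does not create an attorney-client relationship. "
        "Consider consulting a licensed attorney in your jurisdiction."
    )

    def wrap_response(model_output: str) -> str:
        """Prepend the disclaimer so the consumer sees it every time."""
        return f"{DISCLAIMER}\n\n{model_output}"

    print(wrap_response("General information about small claims court..."))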

Requiring general-purpose AI providers to carry a minimum amount of liability insurance or errors and omissions liability insurance could further protect consumers. This approach mirrors the regulation of autonomous vehicles, where companies must maintain insurance to cover potential harm. By ensuring that providers can compensate consumers for errors, insurance requirements can promote both accountability and innovation.

Regulators should also incentivize the development of ethical, reliable AI tools. This could include establishing minimum quality standards or providing certifications for tools that meet specific benchmarks for accuracy and safety. By distinguishing between responsible and irresponsible providers, regulators can foster a more trustworthy marketplace.

Overly strict enforcement of UPL statutes risks stifling innovation and exacerbating the access-to-justice crisis. The recent Upsolve case illustrates this danger: a federal court held that enforcing New York’s UPL laws against the organization’s program of trained nonlawyer advocates would likely violate First Amendment protections. This decision underscores the need for a balanced approach that considers both consumer protection and constitutional rights.

Toward a More Inclusive Legal System

GenAI represents a significant opportunity to provide legal services at scale, if it can offer those services responsibly and ethically. It promises unprecedented progress against the justice gap, but it also challenges traditional regulatory frameworks. UPL statutes, designed for a pre-digital era, must evolve to reflect this new reality.

By focusing on consumer outcomes, regulators can create a system that protects individuals without stifling innovation. This includes supporting responsible AI providers, encouraging transparency and leveraging existing legal remedies to address misconduct. Such an approach would allow AI to supplement traditional legal services, expanding access to justice for millions.

The rise of GenAI is both an opportunity and a challenge for the legal profession. While these tools are not a panacea, they offer a scalable way to provide legal assistance to those who would otherwise go without. Striking the right balance in UPL regulation is essential to ensure that technology serves the public good.

In this new era, regulators must reimagine how they approach UPL, prioritizing consumer protection without hindering innovation. By embracing AI as a partner in closing the justice gap, we can move toward a more equitable and accessible legal system.
