Administrative & Regulatory Law News

Fall 2023 — Ready or Not, Here Comes AI

AI & MQD

David S. Rubenstein

Summary

  • President Biden’s Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is just one of many AI initiatives undertaken by his administration.
  • In the abstract, the MQD shifts power away from the executive branch toward Congress.
  • AI is arguably different from other major executive actions because it hits national security, foreign affairs, much of the workforce, civil and human rights, and more as one chord.

Artificial intelligence (AI) and the major questions doctrine (MQD) are on a collision course. The MQD holds that executive agencies cannot undertake major political or economic policy initiatives without clear congressional authorization. A great deal of executive AI regulation will be politically and economically significant. Moreover, for many AI regulations, judges will not find clear congressional authorization in statutes written long ago, or even not-so-long ago. Of course, these are generalizations; not all executive AI regulations will be major, and some might be clearly authorized. Through an MQD lens, however, executive AI policy actions of any significance will invariably be scrutinized, and many—I suspect—will be judicially enjoined or invalidated. The reason is simple: AI is very major. Rather than treat that as an endpoint, I take it as the starting point for a different question.

Is AI too major for the MQD?

One need not be an expert in AI to join the conversation. Whether a fan or foe of the MQD, don’t sit this one out. There is far too much at stake, and something may need to give. The consequences will be felt before the Supreme Court weighs in. Decisions about AI that are made—or not made—in the near term may set the globe on trajectories and path dependencies that could shape society for lifetimes. That is why the United States is urgently trying to maintain a leadership role: the alternatives (we are told) might be dire for Western values, national security, democracy, and human autonomy—all in one scoop. If this all seems too much, that is the point. See, e.g., Nat’l Sec. Comm’n on A.I., Final Report (2021) (“Americans have not yet grappled with just how profoundly the [AI] revolution will impact our economy, national security, and welfare.”).

This article is part preview, part wake-up call. These moments in history do not come often.

The “Rise and Rise” of AI

AI represents not just another technological innovation, but a paradigm shift. Over the past decade, AI’s integration into the economy and society was mostly masked by the technology’s complexity and opacity. That changed with the release of ChatGPT in late 2022, which thrust AI into the mainstream and ushered in a new era. These are still early days. But we are experiencing the “rise and rise” of AI at breakneck speed—far faster than anyone had anticipated a year ago. Not industry insiders, not the NSA, not the creators of ChatGPT. Nobody.

While AI offers immense potential for social good, it also harbors wide-ranging and potentially catastrophic risks. AI is already radically changing how nations, institutions, and individuals interact with, experience, and perceive the world. There will be winners and losers, new social contracts, and power recalibrations. Tradeoffs will be required between individual rights and collective welfare, between innovation and regulation, between economic prosperity and social equity.

This context is crucial. It begins to explain why hundreds of federal and state legislative bills have been introduced in 2023, with many more expected in 2024. Moreover, the same context connects the dozens of congressional hearings held in 2023 on subjects ranging from national security and election integrity to privacy and intellectual property, licensing and liability, civil rights and government benefits, innovation and competition, finance and healthcare, cybersecurity, and the future of work.

Meanwhile, the Executive branch is revving all engines. President Biden’s Executive Order 14110, on “Safe, Secure, and Trustworthy Development and Use of [AI],” is just one of many AI initiatives undertaken by his administration. Yet its comprehensive scope is emblematic of AI’s far-reaching implications and the immense amount of work ahead. See Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023). The Executive Order begins by recognizing AI’s “extraordinary potential for both promise and peril,” and then directs over 50 federal entities to engage in more than 100 specific actions that straddle public and private spheres, implicate every major market, and stretch across the globe. Id. at 75191.

The array of bills, legislative hearings, and White House initiatives may or may not yield meaningful regulation. But they speak volumes about AI's unique technical attributes, its vast social implications, and a growing appreciation that our existing laws, institutions, and practices may not be well suited for the age of AI. Government intervention is needed, but key questions remain: How much regulation, what type, when, and who decides?

The question of who decides is arguably the most important. Quite simply, who decides AI regulation will strongly influence what is regulated, how and when, for whose benefit, and under what constraints. This is standard fare for administrative law; questions of who decides are mainstays of institutional design and judicial doctrine. The rise of the MQD, however, has reopened and recast prior settlements.

AI in an MQD World

The MQD compels agencies to point to clear congressional authorization when they claim the power to make decisions of vast economic and political significance. See Biden v. Nebraska, 143 S. Ct. 2355 (2023); West Va. v. EPA, 142 S. Ct. 2587, 2609 (2022).

Like AI, the MQD is a disruptive innovation. It is on the rise. And it may have something to prove. The Court has yet to settle on the doctrine’s underlying rationale, and it has not provided much guidance on how it should be applied. See generally Nebraska, 143 S. Ct. 2355 (sparring judicial opinions); Daniel Deacon & Leah Litman, The New Major Questions Doctrine, 109 Va. L. Rev. 1009 (2023).

Doctrinal indeterminacy brings uncertainty, if not also subjectivity, to the MQD analysis. Uncertainty also stems from AI's novelty. Because AI is a rapidly evolving technology, its full societal impacts remain unknown. This makes judging the "majorness" of AI oversight difficult. But generalizations are possible based on what we already know. Taken on its own terms, the MQD already stacks the deck against federal AI regulation.

First, the sheer breadth of AI issues on the legislative agenda is MQD-relevant. Because the political landscape is blanketed with AI bills, challengers will have an easy time pointing to legislative proposals that overlap with virtually any AI regulation pursued by a federal agency. This provides a ready-made basis to argue that the agency is attempting to aggrandize legislative power, which is precisely what the MQD is concerned with. See, e.g., Nebraska, 143 S. Ct. at 2373 (holding that the Biden administration was not authorized to cancel student debt under the HEROES Act, and relying in part on related bills that had been presented to Congress); West Va., 142 S. Ct. at 2609 (stating that the MQD addresses a “recurring problem” of “agencies asserting highly consequential power beyond what Congress could reasonably be understood to have granted”).

It is concerning that political thresholds can be manufactured for MQD purposes through good lobbying, good lawyering, and judges willing to play along. But the MQD is not driving the political fervor around AI. Both quantitatively and qualitatively, the legislative agenda is blanketed with AI-related proposals because huge swaths of existing law need reevaluation.

It is undoubtedly true that some existing federal laws can fairly be read to apply to AI technologies and use cases. But that says nothing about whether they apply in ways that address AI's unique opportunities or challenges, much less in ways that are politically, economically, socially, or morally desirable. Mismatches between existing law and AI leave some risks unaddressed and some opportunities forgone. Driverless cars, for example, will not be permitted in jurisdictions that require a licensed driver. Potentially life-saving AI systems might not be possible under existing privacy laws that silo government data. In short, AI's disruptions throughout society also disrupt our existing laws. The legislative record merely reflects that.

Second, AI regulation will surely have a major economic impact. In sectoral silos, the impacts may or may not be economically major enough to trigger MQD scrutiny. However, AI technologies are not beholden to sectors or silos that were conceived and constructed in prior eras. Quite the contrary. AI's regulation in one sector could have direct and indirect economic effects on many sectors. AI bridges markets and creates new ones. It operates in digital and physical space and is increasingly integrated into other products and services—sometimes at the foundation, sometimes as a feature, and sometimes both. Moreover, there is much hope and expectation for AI's impact on the economy. Especially since the rise of generative AI and large language foundation models, more than $50 billion in venture capital has filled the economy's sails.

Perhaps the economic ripples of AI regulation can be contained below the MQD's economic threshold. At present, however, the MQD has no built-in economic model to draw upon. Indeed, because AI is an emergent technology, economic modeling of any sort is likely to be a challenge. Thus, it must suffice to say that AI regulation might or might not be economically significant for MQD purposes. It's a problem not to know, and it's one of the Court's own making.

Finally, the MQD’s requirement of “clear congressional authorization” will rarely be met. Extant statutes do not contemplate AI’s social disruptions, much less in the context of today’s (and tomorrow’s) structural balance of power and negotiated principles. While current statutes might be interpreted fairly or reasonably to cover certain AI regulatory initiatives, the MQD demands statutory clarity and specificity for the agency action at issue. To find it, judges will need to squint with one eye and look away with the other.

Agencies will be hard-pressed to point to clear statutory authority in texts written for different technologies, different times, and for different challenges. In prior technological eras, Congress was able to legislate because it could broadly delegate. Indeed, Congress’s choice to delegate was generally most justified in areas of scientific uncertainty, safety and security, and market instability. Those days may be over. In the AI era, Congress may need to regulate more explicitly and specifically. While it’s not impossible, it is hard to imagine why Congress would even bother. AI’s rapid evolution, risks, and rewards are completely unpredictable. If Congress is required—or thinks it is required—to regulate only known knowns, then any such regulation would quickly go stale. Beyond stale, such regulation would undoubtedly be suboptimal. Rather than regulate in ways that Congress thinks best, it will be legislating for the MQD test. Rather than future-proof laws, we may get MQD-proof laws. If those are different things, then the MQD is more than a drafting hurdle. It also creates conditions that, when met, make legislative outcomes fall short.

Perhaps the MQD will not require the sort of clarity and specificity contemplated here. The problem, again, is Congress does not know. Congress needs that information now—while it is deliberating and drafting laws—not after the ink dries.

MQD in an AI World

In the abstract, the MQD shifts power away from the executive branch toward Congress. But if Congress does not decide AI policy for the nation, then who will? One answer is that courts will be deciding. See, e.g., Jody Freeman & Matthew Stephenson, The Anti-Democratic Major Questions Doctrine, 2022 Sup. Ct. Rev. 1 (2023). Whether courts are making policy when they strike down executive action is an important question, but it is not my focus here. Suffice it to say, the collective or cumulative effect will be the absence of federal AI regulation.

In the AI context, the absence of federal policy is a policy. Under that policy, other actors decide how to govern AI. Those other actors include industry, which generally favors self-governance. In the short run, industry stakeholders will be more than pleased with MQD-inflected outcomes; they will insist on them. Yet they will soon be disappointed, as other governments rush to regulate AI development and use across the entire value chain.

To start, American states will set domestic policy. The result is likely to be a patchwork of conflicting state and local laws. In addition, foreign states will regulate AI. All the while, the U.S. government will not have a coherent AI policy, much less a stable one, which might seriously compromise its standing as an AI leader on the global stage. The United States may (or may not) retain its innovation leadership, depending on how that is measured. China is surely a near-peer. But, in any event, innovation leadership is only part of the geopolitical package. See Jordan Schneider & Matthew Mittelsteadt, The Key to Winning the Global AI Race, Noema (Sept. 19, 2023).

Policy leadership also matters—a lot. In a global AI ecosystem, laws can have spillover effects by setting global benchmarks. That is why the EU is trying to be the first major jurisdiction to enact comprehensive AI laws. It came one giant step closer to claiming that mantle in December 2023. See Luca Bertuzzi, European Union Squares the Circle on the World’s First AI Rulebook, Euractiv (Dec. 9, 2023).

This geopolitical context makes the MQD blame game important, but not for the truth of the matter. Rather, the schisms in federal AI policy are consequential because they exist. Brussels and Beijing will look at the United States and see a highly unstable (and potentially volatile) regulatory landscape. If the MQD is a cog or contributing cause, that could be a problem for the Court's legitimacy, and for democracy. It could also be a problem for the future of AI—technologically, socially, and geopolitically.

***

The rise of AI has thrust it to the forefront of the political agenda. It thus joins ranks with other major questions. AI's political boost owes to widespread public and bipartisan appreciation that AI needs government regulation. Moth-like, however, the MQD is drawn to the political spotlight in ways that directly and indirectly distort the AI regulatory landscape—not just in the United States, but everywhere, and in ways that could be lasting.

The MQD has stood up to strike down major executive actions relating to climate change, public health, and the public fisc. The consequences in those cases are major too. But AI is arguably different because it hits all those notes and many more: national security, foreign affairs, much of the workforce, and civil and human rights, to name a few. What's more—and this is key—AI hits all those notes as one chord.

Courts may try to unwind AI’s complexities—creating arbitrary categories to separate foreign affairs from domestic policy, or commerce from national security. These complexities are endemic to AI policy but illegible to the MQD. Few doctrines are absolute, and I suspect that the MQD will need to create exceptions or carve-outs. But if courts try to unwind AI, it may be the MQD that gets unwound. Don’t sit this one out. Whatever the outcomes, they will be major.