Artificial intelligence (AI) and the major question doctrine (MQD) are on a collision course. The MQD holds that executive agencies cannot undertake major political or economic policy initiatives without clear congressional authorization. A great deal of executive AI regulation will be politically and economically significant. Moreover, for many AI regulations, judges will not find clear congressional authorizations in statutes written long ago, or even not-so-long ago. Of course, these are generalizations; not all executive AI regulations will be major, and some might be clearly authorized. Through an MQD lens, however, executive AI policy actions of any significance will invariably be scrutinized, and many—I suspect—will be judicially enjoined or invalidated. The reason is simple: AI is very major. Rather than treat that as an endpoint, it is my starting point for a different question.
Is AI too major for the MQD?
One need not be an expert in AI to join the conversation. Whether a fan or foe of the MQD, don’t sit this one out. There is far too much at stake, and something may need to give. The consequences will be felt before the Supreme Court weighs in. Decisions about AI that are made—or not made—in the near term may set the globe on trajectories and path dependencies that could shape society for lifetimes. That is why the United States is urgently trying to maintain a leadership role: the alternatives (we are told) might be dire for Western values, national security, democracy, and human autonomy—all in one scoop. If all this seems like too much, that is the point. See, e.g., Nat’l Sec. Comm’n on A.I., Final Report (2021) (“Americans have not yet grappled with just how profoundly the [AI] revolution will impact our economy, national security, and welfare.”).
This article is part preview, part wake-up call. These moments in history do not come often.
The “Rise and Rise” of AI
AI represents not just another technological innovation, but a paradigm shift. Over the past decade, AI’s integration into the economy and society was mostly masked by the technology’s complexity and opacity. That changed with the release of ChatGPT in late 2022, which thrust AI into the mainstream and ushered in a new era. These are still early days. But we are experiencing the “rise and rise” of AI at breakneck speed—far faster than anyone had anticipated a year ago. Not industry insiders, not the NSA, not the creators of ChatGPT. Nobody.
While AI offers immense potential for social good, it also harbors wide-ranging and potentially catastrophic risks. AI is already radically changing how nations, institutions, and individuals interact with, experience, and perceive the world. There will be winners and losers, new social contracts, and power recalibrations. Tradeoffs will be required between individual rights and collective welfare, between innovation and regulation, and between economic prosperity and social equity.
This context is crucial. It begins to explain why hundreds of federal and state legislative bills have been introduced in 2023, with many more expected in 2024. Moreover, the context connects dozens of congressional hearings held in 2023 on subjects ranging from national security and election integrity to privacy, intellectual property, licensing and liability, civil rights and government benefits, innovation and competition, finance and healthcare, cybersecurity, and the future of work.
Meanwhile, the Executive branch is revving all engines. President Biden’s Executive Order 14110, on “Safe, Secure, and Trustworthy Development and Use of [AI],” is just one of many AI initiatives undertaken by his administration. Yet its comprehensive scope is emblematic of AI’s far-reaching implications and the immense amount of work ahead. See Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023). The Executive Order begins by recognizing AI’s “extraordinary potential for both promise and peril,” and then directs over 50 federal entities to engage in more than 100 specific actions that straddle public and private spheres, implicate every major market, and stretch across the globe. Id. at 75191.
The array of bills, legislative hearings, and White House initiatives may or may not yield meaningful regulation. But they speak volumes about AI’s unique technical attributes, its vast social implications, and an appreciation that our existing laws, institutions, and practices may not be well suited for the age of AI. Government intervention is needed, but key questions remain: How much regulation, what type, when, and who decides?
The question of who decides is arguably the most important. Quite simply, who decides AI regulation will strongly influence what is regulated, how and when, for whose benefit, and under what constraints. This is standard fare for administrative law; questions of who decides are mainstays of institutional design and judicial doctrine. The rise of the MQD, however, has reopened and recast prior settlements.
AI in an MQD World
The MQD compels agencies to point to clear congressional authorization when they claim the power to make decisions of vast economic and political significance. See Biden v. Nebraska, 143 S. Ct. 2355 (2023); West Va. v. EPA, 142 S. Ct. 2587, 2609 (2022).
Like AI, the MQD is a disruptive innovation. It is on the rise. And it may have something to prove. The Court has yet to settle on the doctrine’s underlying rationale, and it has not provided much guidance on how it should be applied. See generally Nebraska, 143 S. Ct. 2355 (sparring judicial opinions); Daniel Deacon & Leah Litman, The New Major Questions Doctrine, 109 Va. L. Rev. 1009 (2023).
Doctrinal indeterminacy brings uncertainty, if not also subjectivity, to the MQD analysis. Uncertainty also stems from AI’s novelty. Because AI is a rapidly evolving technology, its full societal impacts remain unknown. This makes judging the “majorness” of AI oversight difficult. But generalizations are possible based on what we already know. Taken on its own terms, the MQD already stacks the deck against federal AI regulation.
First, the sheer breadth of AI issues on the legislative agenda is MQD-relevant. Because the political landscape is blanketed with AI bills, challengers will have an easy time pointing to legislative proposals that overlap with virtually any AI regulation pursued by a federal agency. This provides a ready-made basis to argue that the agency is attempting to arrogate legislative power, which is precisely what the MQD is concerned with. See, e.g., Nebraska, 143 S. Ct. at 2373 (holding that the Biden administration was not authorized to cancel student debt under the HEROES Act, and relying in part on related bills that had been presented to Congress); West Va., 142 S. Ct. at 2609 (stating that the MQD addresses a “recurring problem” of “agencies asserting highly consequential power beyond what Congress could reasonably be understood to have granted”).
It is concerning that political thresholds can be manufactured for MQD purposes through good lobbying, good lawyering, and judges willing to play along. But the MQD is not driving the political fervor around AI. Both quantitatively and qualitatively, the legislative agenda is blanketed with AI-related proposals because huge swaths of existing law need reevaluation.
It is undoubtedly true that some existing federal laws can fairly be read to apply to AI technologies and use cases. But that says nothing about whether they apply in ways that address AI’s unique opportunities or challenges, much less in ways that are politically, economically, socially, or morally desirable. Mismatches between existing law and AI create not only risks that could go unaddressed but also opportunities that may be lost. Driverless cars, for example, will not be permitted in jurisdictions that require a licensed driver. Potentially life-saving AI systems might not be possible under existing privacy laws that silo government data. In short, AI’s disruptions throughout society also disrupt our existing laws. The legislative record merely reflects that.