
The Business Lawyer

Winter 2024-2025 | Volume 80, Issue 1

For Two Years, We Have Been Telling You That AI Would Soon Be Regulated, and Now, It Will

Michael Scott Simon and Andrew Pery

Summary

  • The business benefits of AI are compelling, but equal weight must be given to safeguarding the privacy, security, and socio-economic interests of consumers.
  • The momentum for AI regulation is accelerating.
  • The EU AIA is a comprehensive legislative framework designed to regulate AI technologies across the EU that was adopted in 2024.
  • The Court of Justice of the EU gave new life to Article 22 of the GDPR.

I. Introduction

In 2023, the world witnessed an unprecedented surge in the development and application of Generative Artificial Intelligence (“GenAI”), marking it as the defining technology of the year. For example, ChatGPT reached 100 million monthly active users within three months of its launch, setting a record as the fastest-growing consumer application in history.

Advanced algorithms and “Foundational Models,” such as GPT-4, became integral tools for generating human-like text, images, and even music, pushing the boundaries of what AI could achieve. As GenAI continues to evolve, it sparks excitement and debate—and concern, because of its transformative impact on society. We even used it to help us write this article.

While the business benefits of AI are compelling, equal weight must be given to safeguarding the privacy, security, and socio-economic interests of consumers. Legitimate concerns have been raised about the trustworthiness of large language model AI systems (“LLMs”) like ChatGPT. OpenAI’s CEO Sam Altman admitted that the company’s ChatGPT has “shortcomings around bias.” But bias is not the only potential problem with LLMs, as copyright infringement issues in sourcing the training data and incorrect but overconfident “hallucinations” also show.

Thus, it should come as no surprise that the momentum for AI regulation is accelerating. We will begin our coverage of 2023–2024 developments with the news that is at the forefront of these efforts: the long-anticipated approval by the European Union (“EU”) of the AI Act (“AIA”), which aims to set rigorous standards for high-risk AI systems, demanding transparency, accountability, and extensive documentation from companies. The EU AIA, like the GDPR before it, could leverage the well-known “Brussels Effect” to become the de facto world standard.

But the EU was not the only actor that finally took concrete action to put an end to the Wild, Wild West of AI. We will examine developments in the United States, starting on the federal side, where the Biden White House set the groundwork for concerted U.S. action through Executive Order 14110 (“EO 14110”). While neither a legislative statute nor an agency-promulgated regulation, EO 14110 creates a long and detailed set of requirements for federal agencies and contractors. EO 14110 put the National Institute of Standards and Technology (“NIST”) at the forefront of AI guidance in the United States. NIST has been hard at work, updating version 1.0 of its groundbreaking AI Risk Management Framework (“AI RMF”) and founding two new initiatives, the U.S. AI Safety Institute (“USAISI”) and the Assessing Risks and Impacts of AI (“ARIA”) Program.

While the federal government may be setting up the foundation for regulation, some U.S. states are going further and regulating AI. In the biggest surprise, Colorado passed an AI Act that, while not as comprehensive as the EU AIA, represents a critical first step toward creating new general regulations to govern AI. Utah passed an AI law as well, though it was far more limited and aimed largely at preventing chatbots from impersonating people. Tennessee passed the ELVIS Act (perhaps the single greatest “backronym” ever coined) to combat a very specific form of impersonation: unauthorized AI-generated replicas of an individual’s voice and likeness. In a move that may launch a thousand puns, Georgia was even more focused, with a law regulating the use of AI in optometry. (We promise to stop the puns if you promise to keep reading.)

II. Drumroll Please! . . . At Last, the EU AIA Is Finally Ratified!

The EU AIA, officially adopted on May 21, 2024, is a comprehensive legislative framework designed to regulate AI technologies across the EU. The AIA, the first of its kind globally, aims to ensure that AI systems used within the EU are safe, transparent, and respect fundamental rights. The European Commission High-Level Expert Group on AI articulated its ambition for AI along three dimensions—AI systems should be lawful, robust, and ethical.

Initially proposed in 2021, the AIA took an extended period to ratify due to several complex factors. The AIA required negotiations among various stakeholders, including the European Commission, the European Parliament, and the Council of the European Union. Each of these bodies had differing views on key issues, such as the use of AI for biometric surveillance, the definition and scope of high-risk AI, and governance structures. Furthermore, the fast-paced development of AI technologies posed a challenge to creating a framework that remains relevant and effective. Lawmakers had to consider not only current AI capabilities but also anticipate future developments, which added layers of complexity to the legislative process.

The AIA establishes a prescriptive regulatory framework. Specific categories of AI systems are banned, while high-risk systems face rigorous compliance measures, including data governance, quality management, accuracy and cybersecurity, and human oversight. The AIA also addresses GenAI systems, requiring robust governance that documents each model’s training and testing processes and the evaluation of results.

The AIA establishes stringent ex ante conformity requirements to ensure the safety and accountability of high-risk AI systems before they enter the market. These requirements involve comprehensive conformity assessments, primarily conducted by the providers of the AI systems. A conformity assessment includes verifying compliance with quality management systems, examining technical documentation, and ensuring the design, development, and post-market monitoring of the performance of high-risk AI systems are in accordance with intended design objectives. Upon successful assessment, the provider must issue an EU declaration of conformity and affix a CE mark to the AI system, signifying compliance with the AIA standards.

Phased implementation of the AIA will allow stakeholders to adapt to the regulation while ensuring gradual enforcement. Although adopted in May 2024, the AIA was not published in the Official Journal of the European Union until July 2024. Twenty days after official publication, the AIA “enter[s] into force.” Within six months of that date, provisions relating to prohibited uses of AI systems become effective. Within twelve months, provisions relating to governance of GenAI models become enforceable. Finally, within thirty-six months, the remainder of the AIA becomes effective.

In last year’s survey, we focused on the key provisions of the AIA and efforts to amend its provisions, including refining the definition of AI to be consistent with the OECD definition, broadening the scope of prohibited systems, and addressing the increased adoption of GenAI systems. Below we set forth key amendments incorporated in the finalized AIA.

First, the compromise text incorporates an expanded list of prohibited AI systems that pose a potential threat to fundamental rights and democracy:

  • “[B]iometric categorisation systems that . . . deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation,”
  • “[F]acial recognition [systems that use] untargeted scraping of facial images from the internet or CCTV footage,”
  • Emotion recognition systems in the workplace and educational institutions,
  • Social scoring systems based on social behavior or personal characteristics,
  • Systems that manipulate human behavior in malicious ways, and
  • Systems that exploit the vulnerabilities of people, materially distorting their behavior in potentially harmful ways.

Although the use of biometric identification systems is generally prohibited, their use is permitted in specific, narrowly defined situations, provided that safeguards are instituted and prior judicial authorization is secured.

Second, the final text augments obligations relating to high-risk AI systems that pose harm to health, safety, fundamental rights, the environment, and democracy. The specific categories of high-risk AI systems now include “AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda.” Moreover, any deployer of a high-risk AI system “shall perform an assessment of the impact on fundamental rights that the use of such system may produce.” A “fundamental rights impact assessment” requires a deployer of a high-risk AI system to undertake a rigorous process consisting of identifying, measuring, and monitoring potential foreseeable risks of harms prior to deploying that system. Once such assessment is completed, deployers must notify market surveillance authorities of the results of the impact analysis.

Third, the AIA was amended to address the potential risks of GenAI systems to the health, safety, and fundamental rights of natural persons. The AIA furthermore includes obligations on the part of GenAI providers to “comply with Union law on copyright and related rights,” as well as to report on their energy efficiency.

Chapter V of the AIA provides for enhanced obligations for providers, deployers, and users of GenAI systems, including transparency and disclosure requirements related to the data used for training, the algorithms involved, and the intended purpose of the models. This information must be made accessible to users, regulatory bodies, and other stakeholders. Regulated parties must also implement measures to identify, mitigate, and prevent biases in their models, including regular audits and updates to ensure the models do not perpetuate or amplify societal biases. Providers must report to regulatory authorities their findings and the steps taken to address biases, and they must establish and maintain robust risk management systems, conducting risk assessments to identify potential harms that the models could cause and implementing measures to mitigate those risks. Regular monitoring and evaluation of the models’ performance and impact are mandatory.

III. The Return of GDPR as a Potential Regulator of AI

Even though Article 22 of the GDPR contains a clear blueprint for potential regulation of AI, the article was forgotten for years. That is, forgotten until December 7, 2023, when the Court of Justice of the EU (“CJEU”), in SCHUFA Holding AG, gave new life to Article 22 by holding that:

Article 22(1) of the GDPR must be interpreted as meaning that the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes “automated individual decision-making” within the meaning of that provision, where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person.

The CJEU thus ended years of controversy over whether Article 22 presents an inherent right for all EU citizens or an invocable right that becomes applicable on demand. Even though the Article 29 Data Protection Working Party stated seven years ago that “Article 22(1) establishes a general prohibition for decision-making based solely on automated processing[, which] applies whether or not the data subject takes . . . action,” Article 22 was largely ignored. By holding that Article 22 is an inherent right, the CJEU brought Article 22 to the forefront at exactly the time that it is most needed, to provide a potential curb to abuses of AI and algorithmic decision-making prior to the effective dates of the AIA.

IV. United States

A. Federal

The dominant news from the U.S. government is that, on October 30, 2023, President Biden issued EO 14110, entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order outlines a government-wide approach to addressing the challenges and opportunities presented by AI. EO 14110 establishes eight guiding principles and priorities:

  1. Ensuring that AI is safe and secure,
  2. Promoting responsible innovation, competition, and collaboration,
  3. Supporting American workers,
  4. Advancing equity and civil rights,
  5. Protecting Americans’ privacy,
  6. Protecting civil liberties,
  7. Managing risks from the federal government’s use of AI, and
  8. Strengthening American leadership abroad.

Focusing on the principles that would likely be the most relevant for our readers, the first guiding principle of EO 14110 emphasizes that AI must be safe and secure. EO 14110 directs federal agencies to develop new standards, guidelines, and best practices for AI systems across various sectors. The order mandates robust, reliable, and standardized evaluations of AI systems, as well as policies and mechanisms to test, understand, and mitigate risks before deployment.

Promoting responsible innovation, competition, and collaboration is the second priority of EO 14110. The order calls for increased investment in AI research and development, as well as efforts to attract and retain AI talent within the United States. NIST is tasked with a significant role in this area: it is directed to establish guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems, including creating companion resources to its AI RMF. NIST is also charged with launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities.

In guiding principles four and six, EO 14110 places strong emphasis on advancing equity and civil rights and protecting civil liberties in the context of AI development and use. The order directs federal agencies to ensure that AI systems comply with all applicable laws addressing unfair discrimination. It calls for the development of guidelines and best practices to prevent AI from disadvantaging protected categories, particularly in areas such as hiring, housing, and healthcare. The order also emphasizes the need for careful oversight, engagement with affected communities, and rigorous regulation to ensure that AI systems do not infringe upon civil liberties or exacerbate existing inequities.

Regarding guiding principle five, protecting Americans’ privacy, the order acknowledges that AI can make it easier to extract, re-identify, link, and infer sensitive personal information about individuals. To combat this risk, EO 14110 directs federal agencies to ensure that the collection, use, and retention of data is lawful, secure, and mitigates privacy and confidentiality risks. The order also promotes the use of privacy-enhancing technologies where appropriate to protect privacy and combat broader legal and societal risks resulting from the improper collection and use of personal data.

While EO 14110 only directly impacts federal government agencies, its effects are expected to extend far beyond the public sector. The order sets a precedent that will likely shape future regulations governing AI use in private businesses. Although, as with previous years of our reporting on AI law and governance, no comprehensive federal legislation regulating AI in the private sector is imminent, the standards and practices established by EO 14110 are poised to become de facto benchmarks for responsible AI development and deployment across industries. The NIST guidelines and best practices, as we discuss below, have already begun to influence how private entities approach AI governance.

NIST updated the AI RMF to version 2.0, which is currently in draft and out for public comment. AI RMF 2.0 will include more detailed categories and subcategories for each of the four core AI RMF functions—Govern, Map, Measure, and Manage—to provide more specific guidance for organizations to implement AI risk management practices. Version 2.0 also places greater emphasis on stakeholder engagement and diversity in decision-making throughout the AI lifecycle.

NIST also drafted a companion resource to the AI RMF to address GenAI: the Generative AI Profile (“GAI Profile”), published for public comment, which introduces 33 new subcategories with 317 specific actions to address the unique risks associated with GenAI systems. The GAI Profile emphasizes enhanced human oversight, broader stakeholder engagement, and more comprehensive risk management practices throughout the AI lifecycle. It introduces new considerations for third-party GenAI tools, highlights the limitations of current pre-deployment testing methods, and underscores the importance of structured public feedback and incident disclosure. The GAI Profile also addresses the challenges of content provenance in the era of GenAI content, providing a more nuanced approach to managing the complex risks posed by GenAI technologies.

What might be most impressive about these NIST initiatives is how NIST directly incorporated private industry and public interest feedback into the process. For these AI RMF updates, NIST actively solicited comments from organizations dedicated to AI trustworthiness and created dedicated Slack channels moderated by experts from NIST, outside AI experts, and law professors.

NIST requested similar private- and public-interest participation in its subsequent efforts, the first of which is the USAISI. Established under EO 14110, the USAISI aims to address the challenges posed by AI’s increasing capabilities and contexts of use. Its primary focus is to advance the science, practice, and adoption of AI safety across various risk spectrums, including national security, public safety, and individual rights. The institute’s work includes conducting safety evaluations of AI models and systems, developing guidelines for evaluations and risk mitigations, and advancing research and measurement science for AI safety.

USAISI’s goals extend beyond research to practical applications, as it will facilitate the development of safety, security, and testing standards for AI models, as well as standards for authenticating GenAI content. USAISI is collaborating with partners in academia, industry, and government, both domestically and internationally, using the same channels as NIST has successfully leveraged for the AI RMF updates.

Finally, NIST launched the ARIA Program in May 2024 to advance the science, practice, and adoption of AI safety across various risk spectrums. The ARIA Program focuses on assessing AI models and systems submitted by technology developers worldwide, using a three-level evaluation: (1) model testing, (2) red-teaming, and (3) field testing. “The initial evaluation (ARIA 0.1) will . . . focus on risks and impacts associated with . . . LLMs,” with the goal of developing “guidelines, tools, methodologies, and metrics that organizations can use for evaluating their systems and informing decision making regarding positive or negative impacts of AI deployment.”

B. State Actions

While there was a great deal of foundation-setting by the federal government, the actual activity of governing AI took place at the state level. Surprisingly, the state that can claim to be first in regulating AI is not the high-tech center of California, but Utah.

1. Utah

Utah’s Artificial Intelligence Amendments, effective May 1, 2024, established the first regulatory framework for generative AI use in business operations. The law requires clear disclosures when generative AI is used in regulated occupations or when consumers specifically ask about AI use, emphasizing transparency in AI interactions. The legislation created an Office of Artificial Intelligence Policy and an AI Learning Laboratory Program to promote innovation while managing risks, offering potential regulatory mitigation for participating companies. Finally, as has been the trend for privacy laws, the Utah legislation also holds businesses accountable for AI-generated content under consumer protection laws, but does not provide for a private right of action.

2. Colorado

Going beyond the Utah legislation, the Colorado Artificial Intelligence Act (“Colorado AI Act”), signed into law on May 17, 2024, and set to take effect on February 1, 2026, marks a significant milestone as the first U.S. law to regulate artificial intelligence in a general sense. This groundbreaking legislation establishes a comprehensive framework for AI governance, focusing primarily on preventing algorithmic discrimination while also addressing broader AI-related concerns. The Colorado AI Act defines algorithmic discrimination as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors individuals based on protected characteristics. However, the scope of the Act extends beyond bias prevention to encompass a wide range of AI governance principles. Central to the Act is the concept of “high-risk AI systems,” defined as AI systems that make or are a “substantial factor” in making any “consequential decision.” A “consequential decision” is one that has a material legal or similarly significant effect on the provision or denial to consumers of eight specific opportunities or services:

  1. Education enrollment or opportunity,
  2. Employment or employment opportunity,
  3. Financial or lending service,
  4. Essential government service,
  5. Healthcare services,
  6. Housing,
  7. Insurance, and
  8. Legal services.

The Act also introduces the concept of “substantial factor,” defined as a factor that assists in making a consequential decision, is capable of altering the outcome of a consequential decision, and is generated by an AI system.

The Colorado AI Act imposes distinct obligations on both developers and deployers of AI systems. “Developer” is defined as any person doing business in Colorado that develops or intentionally and substantially modifies an AI system. The obligations imposed on any developer of a high-risk AI system include:

  1. A duty of care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination,
  2. Provision of documentation to deployers about the AI system,
  3. Public disclosure of AI system types and risk management approaches, and
  4. Notification to the Colorado Attorney General and known deployers within ninety days of any newly discovered risks.

A “deployer,” defined as any person doing business in Colorado that deploys a high-risk AI system, faces more extensive obligations:

  1. A duty of care similar to that of developers,
  2. Implementation of a risk management policy and program,
  3. Completion of impact assessments for high-risk AI systems,
  4. Annual review of high-risk AI system deployments,
  5. Notification to consumers about the use of high-risk AI systems for consequential decisions,
  6. Provision of information to consumers about adverse consequential decisions,
  7. Public disclosure of deployed high-risk AI systems by type and of the risk management approach for each type, and
  8. Notification to the Attorney General within ninety days of the discovery of algorithmic discrimination.

The Colorado AI Act contains industry-level exemptions for some aspects of healthcare, insurance, and banking; recognizes a number of federal exemptions; and provides general exemptions for compliance with law, cooperation with law enforcement, and certain research activities. The Act also excludes AI systems that only “perform a narrow procedural task,” or that “detect decision-making patterns or deviations [therefrom] and that is not intended to replace or influence a previously completed human assessment without sufficient human review.” Thus, the Act somewhat incorporates the human-in-the-loop concepts found in both the EU’s AIA and, surprisingly, the Colorado Privacy Act.

The Colorado AI Act also includes a small-business exemption, which relieves certain deployers of some obligations if they meet specific criteria, such as having fewer than fifty full-time equivalent employees and not using their own data to train the AI system.

The Colorado AI Act does not provide for a private right of action. Enforcement is exclusively vested in the Colorado Attorney General, who can impose civil penalties of up to $20,000 per violation. The Act also provides an affirmative defense if a deployer discovers and cures a violation and is otherwise in compliance with recognized AI risk management frameworks.

Finally, the Act empowers, but does not require, the Colorado Attorney General to promulgate regulations. Given the extensive regulations promulgated under the Colorado Privacy Act, the existing regulations governing the use of AI under the Colorado Insurance Code, and the complexity of the Colorado AI Act, it is highly likely that the Colorado Attorney General will exercise this rulemaking authority to provide more detailed guidance and requirements.

Beyond the general, aspirational laws of Utah and Colorado, other states, including Tennessee and Georgia, recently enacted far more specific statutes.

3. Tennessee

Tennessee’s Ensuring Likeness, Voice and Image Security (“ELVIS”) Act, which was signed into law on March 21, 2024, is the first legislation of its kind to directly address the commercial use of AI-generated deepfakes. The ELVIS Act expands existing protections of personal rights to include an individual’s voice and creates a private right of action against those who knowingly use or distribute unauthorized AI-generated content, as well as those who provide the technology for creating such content without permission. The ELVIS Act’s significance is underscored by its coverage in Rolling Stone magazine; country star Luke Bryan stated at the signing, “It’s hard to wrap your head around what is going on with AI, but I know the ELVIS Act will help protect our voices.”

4. Georgia

Georgia passed a law to provide a clear vision for the use of AI in optometry by defining an “assessment mechanism” as including “artificial intelligence devices and any equipment, electronic or nonelectronic, that are used to conduct an eye assessment.” Georgia’s law also specifies that such mechanisms must “collect the patient’s medical history, previous prescription information for corrective eyewear, and length of time since the patient’s most recent in-person eye health examination,” ensuring that AI does not lose sight of crucial patient information. (We promised “no more puns,” but we simply could not resist.)

V. Conclusion

Our prior surveys on AI law were admittedly largely focused on the looming potential for the regulation of AI. This year, the potential became reality. Even where the first steps are small, such as in Utah and even to some degree Colorado, we can now see that AI law and governance are here, and here to stay.

AI pundit and Wharton Professor Ethan Mollick likes to say that “Today’s AI is the worst AI you will ever use.” We’d like to add to that wisdom and say that today’s AI is also the least regulated AI you will ever use.
