IV. United States
A. Federal
The dominant news from the U.S. government is that, on October 30, 2023, President Biden issued EO 14110, entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order outlines a government-wide approach to addressing the challenges and opportunities presented by AI. EO 14110 establishes eight guiding principles and priorities:
- Ensuring that AI is safe and secure,
- Promoting responsible innovation, competition, and collaboration,
- Supporting American workers,
- Advancing equity and civil rights,
- Protecting Americans’ privacy,
- Protecting civil liberties,
- Managing risks from the federal government’s use of AI, and
- Strengthening American leadership abroad.
Focusing on the principles likely to be most relevant to our readers, we begin with the first: ensuring that AI is safe and secure. EO 14110 directs federal agencies to develop new standards, guidelines, and best practices for AI systems across various sectors. The order mandates robust, reliable, and standardized evaluations of AI systems, as well as policies and mechanisms to test, understand, and mitigate risks before deployment.
Promoting responsible innovation, competition, and collaboration is the second priority of EO 14110. The order calls for increased investment in AI research and development, as well as efforts to attract and retain AI talent within the United States. NIST is tasked with a significant role in this area: it is directed to establish guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems, including creating companion resources to its AI RMF, and it is charged with launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities.
In guiding principles four and six, EO 14110 places strong emphasis on advancing equity and civil rights and protecting civil liberties in the context of AI development and use. The order directs federal agencies to ensure that AI systems comply with all applicable laws addressing unfair discrimination. It calls for the development of guidelines and best practices to prevent AI from disadvantaging members of protected classes, particularly in areas such as hiring, housing, and healthcare. The order also emphasizes the need for careful oversight, engagement with affected communities, and rigorous regulation to ensure that AI systems do not infringe upon civil liberties or exacerbate existing inequities.
Regarding guiding principle five, protecting Americans’ privacy, the order acknowledges that AI can make it easier to extract, re-identify, link, and infer sensitive personal information about individuals. To combat this risk, EO 14110 directs federal agencies to ensure that the collection, use, and retention of data are lawful and secure and that privacy and confidentiality risks are mitigated. The order also promotes the use of privacy-enhancing technologies where appropriate to protect privacy and to combat broader legal and societal risks resulting from the improper collection and use of personal data.
While EO 14110 directly applies only to federal agencies, its effects are expected to extend far beyond the public sector. The order sets a precedent that will likely shape future regulations governing AI use in private businesses. Although, as in previous years of our reporting on AI law and governance, no comprehensive federal legislation regulating AI in the private sector is imminent, the standards and practices established by EO 14110 are poised to become de facto benchmarks for responsible AI development and deployment across industries. The NIST guidelines and best practices, as we discuss below, have already begun to influence how private entities approach AI governance.
NIST updated the AI RMF to version 2.0, which is currently in draft and out for public comment. AI RMF 2.0 will include more detailed categories and subcategories for each of the four core AI RMF functions—Govern, Map, Measure, and Manage—to provide more specific guidance for organizations to implement AI risk management practices. Version 2.0 also places greater emphasis on stakeholder engagement and diversity in decision-making throughout the AI lifecycle.
NIST also drafted a companion resource to the AI RMF to address GenAI: the Generative AI Profile (“GAI Profile”), published for public comment, introduces 33 new subcategories with 317 specific actions to address the unique risks associated with GenAI systems. The GAI Profile emphasizes enhanced human oversight, broader stakeholder engagement, and more comprehensive risk management practices throughout the AI lifecycle. It introduces new considerations for third-party GenAI tools, highlights the limitations of current pre-deployment testing methods, and underscores the importance of structured public feedback and incident disclosure. The GAI Profile also addresses the challenges of content provenance in the GenAI era, providing a more nuanced approach to managing the complex risks posed by GenAI technologies.
What might be most impressive about these NIST initiatives is how directly NIST incorporated private-industry and public-interest feedback into the process. For the AI RMF updates, NIST actively solicited comments from organizations dedicated to AI trustworthiness and created Slack channels moderated by experts from NIST, outside AI experts, and law professors.
NIST requested similar private- and public-interest participation in its subsequent efforts, the first of which is the USAISI. Established under EO 14110, the USAISI aims to address the challenges posed by AI’s increasing capabilities and contexts of use. Its primary focus is to advance the science, practice, and adoption of AI safety across various risk spectrums, including national security, public safety, and individual rights. The institute’s work includes conducting safety evaluations of AI models and systems, developing guidelines for evaluations and risk mitigations, and advancing research and measurement science for AI safety.
USAISI’s goals extend beyond research to practical applications, as it will facilitate the development of safety, security, and testing standards for AI models, as well as standards for authenticating GenAI content. USAISI is collaborating with partners in academia, industry, and government, both domestically and internationally, using the same channels that NIST successfully leveraged for the AI RMF updates.
Finally, NIST launched the ARIA Program in May 2024 to advance the science, practice, and adoption of AI safety across various risk spectrums. The ARIA Program focuses on assessing AI models and systems submitted by technology developers worldwide, using a three-level evaluation: (1) model testing, (2) red-teaming, and (3) field testing. “The initial evaluation (ARIA 0.1) will . . . focus on risks and impacts associated with . . . LLMs,” with the goal of developing “guidelines, tools, methodologies, and metrics that organizations can use for evaluating their systems and informing decision making regarding positive or negative impacts of AI deployment.”
B. State Actions
While there was a great deal of foundation-setting by the federal government, the actual activity of governing AI took place at the state level. Surprisingly, the state that can claim to be first in regulating AI is not the high-tech center of California, but Utah.
1. Utah
Utah’s Artificial Intelligence Amendments, effective May 1, 2024, established the first regulatory framework for generative AI use in business operations. The law requires clear disclosures when generative AI is used in regulated occupations or when consumers specifically ask about AI use, emphasizing transparency in AI interactions. The legislation created an Office of Artificial Intelligence Policy and an AI Learning Laboratory Program to promote innovation while managing risks, offering potential regulatory mitigation for participating companies. Finally, as has been the trend for privacy laws, the Utah legislation also holds businesses accountable for AI-generated content under consumer protection laws, but does not provide for a private right of action.
2. Colorado
Going beyond the Utah legislation, the Colorado Artificial Intelligence Act (“Colorado AI Act”), signed into law on May 17, 2024, and set to take effect on February 1, 2026, marks a significant milestone as the first U.S. law to regulate artificial intelligence in a general sense. This groundbreaking legislation establishes a comprehensive framework for AI governance, focusing primarily on preventing algorithmic discrimination while also addressing broader AI-related concerns. The Colorado AI Act defines algorithmic discrimination as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors individuals based on protected characteristics. However, the scope of the Act extends beyond bias prevention to encompass a wide range of AI governance principles. Central to the Act is the concept of “high-risk AI systems,” defined as AI systems that make or are a “substantial factor” in making any “consequential decision.” A “consequential decision” is one that has a material legal or similarly significant effect on the provision or denial to consumers of eight specific opportunities or services:
- Education enrollment or opportunity,
- Employment or employment opportunity,
- Financial or lending service,
- Essential government service,
- Healthcare services,
- Housing,
- Insurance, and
- Legal services.
The Act also introduces the concept of “substantial factor,” defined as a factor that assists in making a consequential decision, is capable of altering the outcome of a consequential decision, and is generated by an AI system.
The Colorado AI Act imposes distinct obligations on both developers and deployers of AI systems. “Developer” is defined as any person doing business in Colorado that develops or intentionally and substantially modifies an AI system. The obligations imposed on any developer of a high-risk AI system include:
- A duty of care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination,
- Provision of documentation to deployers about the AI system,
- Public disclosure of AI system types and risk management approaches, and
- Notification to the Colorado Attorney General and known deployers within ninety days of any newly discovered risks.
A “deployer,” defined as any person doing business in Colorado that deploys a high-risk AI system, faces more extensive obligations:
- A duty of care similar to that of developers,
- Implementation of a risk management policy and program,
- Completion of impact assessments for high-risk AI systems,
- Annual review of high-risk AI system deployments,
- Notification to consumers about the use of high-risk AI systems for consequential decisions,
- Provision of information to consumers about adverse consequential decisions,
- Public disclosure of deployed high-risk AI systems by type and the deployer’s risk management approach for each type, and
- Notification to the Attorney General within ninety days of the discovery of algorithmic discrimination.
The Colorado AI Act contains industry-level exemptions for some aspects of healthcare, insurance, and banking; recognizes a number of federal exemptions; and provides general exemptions for compliance with law, cooperation with law enforcement, and certain research activities. The Act also excludes AI systems that only “perform a narrow procedural task” or that “detect decision-making patterns or deviations [therefrom]” and are “not intended to replace or influence a previously completed human assessment without sufficient human review.” Thus, the Act somewhat incorporates the human-in-the-loop concepts found in both the EU’s AIA and, surprisingly, the Colorado Privacy Act.
The Colorado AI Act also contains a small-business exemption, which relieves certain deployers from some obligations if they meet specific criteria, such as having fewer than fifty full-time-equivalent employees and not using their own data to train the AI system.
The Colorado AI Act does not provide for a private right of action. Enforcement is exclusively vested in the Colorado Attorney General, who can impose civil penalties of up to $20,000 per violation. The Act also provides an affirmative defense if a deployer discovers and cures a violation and is otherwise in compliance with recognized AI risk management frameworks.
Finally, the Act empowers but does not require the Colorado Attorney General to promulgate regulations. Given extensive regulations promulgated under the Colorado Privacy Act, given the regulations that govern the usage of AI under the Colorado Insurance Code, and given the complexity of the Colorado AI Act, it is highly likely that the Colorado Attorney General will exercise such rulemaking authority to provide more detailed guidance and requirements.
Beyond the general aspirational laws of Utah and Colorado, several states, including Tennessee and Georgia, recently enacted far more specific statutes.
3. Tennessee
Tennessee’s Ensuring Likeness, Voice and Image Security (“ELVIS”) Act, which was signed into law on March 21, 2024, is the first legislation of its kind to directly address the commercial use of AI-generated deepfakes. The ELVIS Act expands existing protections of personal rights to include an individual’s voice and creates a private right of action against those who knowingly use or distribute unauthorized AI-generated content, as well as those who provide the technology for creating such content without permission. The ELVIS Act’s significance is underscored by its coverage in Rolling Stone magazine; country star Luke Bryan stated at the signing, “It’s hard to wrap your head around what is going on with AI, but I know the ELVIS Act will help protect our voices.”
4. Georgia
Georgia passed a law to provide a clear vision for the use of AI in optometry by defining an “assessment mechanism” as including “artificial intelligence devices and any equipment, electronic or nonelectronic, that are used to conduct an eye assessment.” Georgia’s law also specifies that such mechanisms must “collect the patient’s medical history, previous prescription information for corrective eyewear, and length of time since the patient’s most recent in-person eye health examination,” ensuring that AI does not lose sight of crucial patient information. (We promised “no more puns,” but we simply could not resist.)
V. Conclusion
Our prior surveys on AI law were admittedly largely focused on the looming potential for the regulation of AI. This year, the potential became reality. Even where the first steps are small, such as in Utah and even to some degree Colorado, we can now see that AI law and governance are here, and here to stay.
AI pundit and Wharton Professor Ethan Mollick likes to say that “Today’s AI is the worst AI you will ever use.” We’d like to add to that wisdom: today’s AI is also the least regulated AI you will ever use.