
The Brief

Winter 2025 | Cyber and Data Privacy Insurance

From Over the Pond: The European Union’s Comprehensive AI Legislation Comes to America

Sharon R. Klein and Lisa M. Campisi

Summary

  • The EU AI Act establishes a risk-based regulatory framework with four tiers, imposing greater oversight on AI systems deemed to be more impactful.
  • The Colorado AI Act adopted the EU AI Act structure and requires developers and deployers to protect consumers from algorithmic discrimination and report risks to the attorney general.
  • Organizations not subject to the EU or Colorado AI Act often adopt the NIST AI Risk Management Framework as a recognized compliance standard aligning with emerging AI regulations.
  • AI-related risks can trigger litigation under existing antidiscrimination laws and coverage under employment practices liability insurance policies.

This past August, the European Union Artificial Intelligence Act (EU AI Act) became effective in the European Union (EU). First proposed in 2021, the EU AI Act is the first comprehensive law of its kind. It takes a prescriptive, risk-based approach to regulation and provides a pragmatic legal structure for artificial intelligence (AI) systems. The legislation is an important milestone in the legal framework surrounding AI technology and is likely to significantly influence legislation in the U.S. and worldwide.

In the U.S., there currently is no federal AI legislation. In October 2023, the Biden administration issued Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (EO), which established a framework instructing federal agencies on the development and use of AI. Many of its requirements trickled down to the private sector through regulation by federal agencies such as the Federal Trade Commission (FTC), Food and Drug Administration (FDA), Department of Health and Human Services (HHS), and Department of Defense (DOD). President Trump rescinded the Biden EO on January 20, 2025, the first day of his administration.

In May 2024, Colorado became the first U.S. state to enact its own comprehensive AI legislation when Governor Jared Polis signed SB 24-205 (Colorado AI Act) into law. The Colorado AI Act adopts many regulatory concepts similar to those in the EU AI Act, and it is only the first legal step toward AI regulation in the U.S. In the absence of federal legislation, and in addition to the laws already enacted in Colorado and Utah, at least 447 proposals to regulate AI have been considered in 45 state legislatures over the past year, notably in Connecticut and California. As more states adopt similar legislative models, questions about the use, categorization, and risk level of AI systems will continue to spur heavy debate among government officials and the growing number of businesses using AI technologies across the country and around the world.

At the same time, many U.S. businesses seeking to develop AI compliance programs have developed policies, procedures, and governance in accordance with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF). Like the EU AI Act and the Colorado AI Act, the RMF is focused on identifying and managing unique risks posed by the development or deployment of AI.

AI legislation and regulatory activity are likely to continue apace in 2025. This article examines the requirements of current comprehensive legislation and significant U.S. state proposals, reviews the RMF as a compliance tool, and outlines litigation and insurance considerations and certain best practices for companies to stay ahead in a rapidly changing legal environment.

The EU AI Act: Risk-Based Regulation with Teeth

The first draft of the EU AI Act was presented by the European Commission nearly three years ago. Since that time, the use of AI systems (along with numerous AI models) by various entities has increased dramatically. Anticipating that the pace of adoption and technical advancement of AI systems would outstrip the legislative process, the EU AI Act provides a risk-based regulatory system for the development, testing, and deployment of AI, organized into four distinct tiers that align with the inherent risks posed by the AI systems at hand. An entity’s compliance responsibilities are determined by the risk category associated with the given AI system. This risk-based approach ensures that AI systems used in more impactful contexts are subject to enhanced oversight and regulatory authorization, while reducing the barrier to entry for entities seeking to implement low-impact AI systems. Regardless of the AI system being implemented, the first step for any entity should be to conduct and document an assessment to identify the potential risks and categorize the system in question.

The EU AI Act relies on manufacturers, providers, and deployers to self-classify their AI systems into one of the following risk categories:

  • Unacceptable-risk AI: These are AI systems that pose an actual threat to individuals as well as their freedoms, such as AI systems used for cognitive behavioral manipulation, social scoring, and large-scale real-time tracking. The use of AI systems that fall into this category is strictly prohibited.
  • High-risk AI: This risk level of AI has the capacity to negatively affect the safety or fundamental rights of consumers, such as systems used in the context of mass transportation, health care, medical devices, children’s toys, management of critical infrastructure, employment, or law enforcement. The use of high-risk AI systems is subject to judicial or other independent body authorization along with transparency, security, and risk assessment obligations.
  • Limited-risk AI: These are AI systems where the primary risk to individuals comes from a lack of transparency regarding the use of the AI system, such as the use of chatbots. The use of limited-risk AI systems is generally permitted when fully disclosed to consumers.
  • Minimal or no-risk AI: This risk level includes AI systems that pose minimal risks (or no risks) to the rights and freedoms of individuals, such as AI-enabled video games or spam filters. It is expected that the vast majority of AI systems currently used in the EU fall into this category. The use of minimal or no-risk AI systems is generally permitted without enhanced restrictions.

In addition to these categories, there are independent requirements for “general purpose” AI (GPAI) such as ChatGPT; the use of GPAI may fall under any of the above risk categories. These additional requirements fall primarily on GPAI providers and include mandatory technical disclosures as well as compliance with applicable copyright protections.

The EU AI Act is similar in structure and reach to the General Data Protection Regulation (GDPR) that preceded it. Like the GDPR, which took effect in May 2018, the EU AI Act has extraterritorial effect, governing the use of AI technologies in the EU even when they are operated by companies located outside the EU. Potential fines for violations are significant, reaching up to €35 million or 7% of worldwide turnover. Just as the GDPR became a model for data protection laws worldwide, the EU AI Act is very likely to influence the global development of AI law.
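
For organizations maintaining an internal inventory of AI systems, the tiered structure and the penalty ceiling can be expressed in a few lines of code. The following Python sketch is illustrative only, not legal guidance: the tier names track the categories described above, the listed obligations are simplified paraphrases, and the fine calculation assumes the maximum penalty for the most serious violations is the higher of the two figures cited above (€35 million or 7% of worldwide annual turnover).

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified labels for the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # use is prohibited outright
    HIGH = "high"                  # authorization, transparency, security, risk assessment
    LIMITED = "limited"            # transparency/disclosure obligations
    MINIMAL = "minimal"            # generally permitted without enhanced restrictions

# Illustrative, non-exhaustive paraphrases of the obligations described above.
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy: prohibited"],
    RiskTier.HIGH: ["independent authorization", "transparency", "security", "documented risk assessment"],
    RiskTier.LIMITED: ["disclose the use of AI to consumers"],
    RiskTier.MINIMAL: ["no enhanced restrictions"],
}

def max_fine_exposure_eur(worldwide_annual_turnover_eur: float) -> float:
    """Assumed ceiling for the most serious violations: the higher of EUR 35M or 7% of turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: a hypothetical deployer with EUR 2 billion in worldwide annual turnover.
print(TIER_OBLIGATIONS[RiskTier.HIGH])
print(f"Maximum fine exposure: EUR {max_fine_exposure_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

In practice, the classification exercise itself is the legally significant step; a mapping like this merely records its result for governance purposes.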

Executive Order 14110

Although the EO was rescinded by President Trump, it has been used as a model for federal agency guidelines and state initiatives. The EO set forth a framework for the development, deployment, and regulation of AI, focusing on: (1) establishing new standards for AI safety and security; (2) protecting Americans’ privacy; (3) advancing equity and civil rights; (4) protecting consumers, patients, passengers, and students; (5) supporting workers’ ability to bargain collectively and mitigating risks relating to workplace surveillance, bias, and job displacement; (6) promoting innovation and competition; (7) advancing American leadership abroad; and (8) ensuring responsible and effective government use of AI.

The effects of the EO were far-reaching and touched multiple industries and sectors. Most significant for companies and government contractors were the following actions stemming from the EO:

  • Required developers of AI systems that pose a serious risk to national security, national economic security, or national public health and safety to share their safety test results and other critical information with the U.S. government.
  • Required NIST to develop standards to help ensure that AI systems are safe, secure, and trustworthy.
  • Required the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content so that federal agencies can use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and globally.
  • Established an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden administration’s ongoing AI Cyber Challenge.
  • Called on Congress to implement federal privacy legislation.
  • Required addressing algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice (DOJ) and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Directed the responsible use of AI in health care and the development of affordable and lifesaving drugs, and required creating resources to support educators deploying AI-enabled educational tools.
  • Required the development of principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement, labor standards, workplace equity, health and safety, and data collection.
  • Created programs and provided resources to enhance U.S. leadership in AI innovation and to promote competition.
  • Promoted U.S. leadership in coordinating global regulatory efforts.

Colorado AI Act: U.S.’s First Comprehensive AI Law

The Colorado AI Act adopted the EU AI Act structure of defining different risk standards. It applies to all “developers” and “deployers” of “high-risk artificial intelligence systems” that do business in Colorado and aims to protect all Colorado residents, including employees.

“Developers” are defined as companies doing business in Colorado that develop or intentionally and substantially modify an AI system. “Deployers” are companies doing business in Colorado that deploy a high-risk AI system. Compliance obligations under the law depend on a company’s status as a developer or deployer with respect to any high-risk AI system. Both developers and deployers are subject to compliance audits by the Colorado attorney general.

The Colorado AI Act requires developers and deployers to use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination arising from both the intended and contracted uses of high-risk AI systems. The act also contains an AI incident reporting obligation, under which developers must report to the attorney general any known or reasonably foreseeable risks of algorithmic discrimination in connection with a high-risk AI system.

A deployer is required to report to the attorney general if it discovers that a high-risk AI system it deployed has caused algorithmic discrimination. In each case, reports must be made to the attorney general within 90 days of discovering such issues. Developers have the additional obligation of notifying all known deployers and other developers of the high-risk AI system in question. Businesses using high-risk AI systems are also subject to a number of transparency, disclosure, reporting, and other obligations depending on their role as either developers or deployers. Both developers and deployers must meet specific obligations required by the Colorado AI Act.

Developers are specifically obligated to:

  • Make available to the deployers or other developers of the high-risk AI system (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system; (2) documentation disclosing, among other things, a high-level summary of the type of data used to train the high-risk AI system, the purpose of the high-risk AI system, and any other information necessary to allow the deployer to comply with its obligations; (3) documentation describing the data governance measures used to cover the training datasets and measures used to examine the suitability of data sources, possible biases, and appropriate mitigation, and the intended outputs of the high-risk AI system; and (4) documentation reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk AI system for risks of algorithmic discrimination.
  • Provide all information and documents to the deployers to assist them in completing impact assessments.
  • Provide a notice on their website or in a public use case inventory summarizing the types of high-risk AI systems developed or intentionally and substantially modified by the developer and how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the high-risk AI system.

Deployers are obligated to (with limited exceptions):

  • Implement, maintain, and regularly review and update a risk management policy and program that is used to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The Colorado AI Act references the RMF and other standards as tools that can be used to assess reasonableness. The size and complexity of the organization; the nature, scope, and intended use of the high-risk AI system; and the sensitivity of the data processed in connection with the high-risk AI system are also factors to consider when assessing whether a deployer’s risk management policy and program is reasonable.
  • Complete an impact assessment for each high-risk AI system deployed at least annually and within 90 days after any intentional and substantial modification to the high-risk AI system is made available. The impact assessment must be retained for three years following the final deployment of the high-risk AI system. The Colorado AI Act requires that specific information be provided in each impact assessment, including, among other things, (1) a statement disclosing the purpose, intended use cases, deployment context, and benefits of the high-risk AI system; (2) an analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination, along with mitigating steps taken; and (3) a description of the post-deployment monitoring and user safeguards, including the oversight, use, and learning process the deployer established to address issues resulting from the deployment of the high-risk AI system.
  • Review the deployment of each high-risk AI system at least annually to ensure that the high-risk AI system is not causing algorithmic discrimination.
  • Provide notice on their website to consumers that contains specific disclosures.
  • Provide consumers with the rights afforded by the Colorado Privacy Act and the right to appeal an adverse consequential decision.

Any violations of the Colorado AI Act constitute an unfair trade practice under the state’s law. However, the law does provide several affirmative defenses for violations, including complying with the latest RMF or another substantially equivalent nationally recognized risk management framework for AI systems or any risk management framework for AI systems that the Colorado attorney general chooses to designate. The Colorado AI Act does not provide for a private right of action and instead provides the Colorado attorney general with sole enforcement authority. The Colorado attorney general also has the authority to issue rules as necessary for the purpose of implementing and enforcing the act.

The Colorado AI Act does not take effect until February 1, 2026. However, developers and deployers should consider preparing to comply with these obligations before that date, as developing the necessary internal policies and procedures will be a complex and time-consuming task.
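
To make that preparation concrete, the sketch below (again in Python, and purely illustrative) computes the key compliance dates drawn from the obligations described above: the annual impact-assessment review, the 90-day window to reassess after an intentional and substantial modification, the 90-day deadline to notify the attorney general after discovering algorithmic discrimination, and the three-year retention period following final deployment. The function names and input dates are hypothetical; the statute and any implementing rules control.

```python
from datetime import date, timedelta

COLORADO_AI_ACT_EFFECTIVE = date(2026, 2, 1)  # effective date noted above

def next_annual_review(last_impact_assessment: date) -> date:
    """Impact assessments must be completed at least annually."""
    return last_impact_assessment + timedelta(days=365)

def post_modification_deadline(modification_date: date) -> date:
    """Reassess within 90 days after an intentional and substantial modification."""
    return modification_date + timedelta(days=90)

def ag_notification_deadline(discovery_date: date) -> date:
    """Report discovered algorithmic discrimination to the attorney general within 90 days."""
    return discovery_date + timedelta(days=90)

def retention_end(final_deployment: date) -> date:
    """Retain the impact assessment for three years following final deployment."""
    return final_deployment + timedelta(days=3 * 365)

# Hypothetical dates, for illustration only.
print(next_annual_review(date(2026, 3, 1)))           # 2027-03-01
print(post_modification_deadline(date(2026, 6, 15)))  # 2026-09-13
print(ag_notification_deadline(date(2026, 7, 1)))     # 2026-09-29
print(retention_end(date(2027, 1, 10)))               # 2030-01-09
```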

Notable State Proposals

In the absence of federal legislation, many states continue to consider legislative proposals on AI governance. Notable among these are California and Connecticut.

California legislative proposals. At least 17 AI bills passed the California legislature in 2024. Proposals included content marking and disclosure requirements for covered providers of generative AI, prohibitions on the use of AI tools in a manner that results in algorithmic discrimination in the employment context, and regulatory standards for the largest and most powerful AI models (SB 1047).

SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which passed in the California legislature but was ultimately vetoed by Governor Newsom, aimed to regulate the development and use of AI models that pose significant risks of causing or enabling critical harms to public safety and security. The bill defined covered models as AI models that are trained using a certain quantity and cost of computing power, or that are created by fine-tuning existing covered models using a certain quantity and cost of computing power.

The bill imposed various requirements on developers of covered models, such as requiring implementation of administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of a covered model; development and implementation of safety and security protocols, including testing procedures; and implementation of capabilities to promptly enact a full shutdown of the model. The bill also required developers to assess and report the risks of critical harms posed by covered models and covered model derivatives, and to refrain from using or making them available for commercial or public use if there is an unreasonable risk of critical harm. Furthermore, the bill required developers to annually retain a third-party auditor to conduct an independent audit of compliance with the bill’s requirements, and to submit a statement of compliance and a report of any AI safety incidents to the attorney general.

SB 1047 also sought to regulate providers of AI training infrastructure. The bill required persons who operate computing clusters, which are sets of machines that can be used for training AI, to implement written policies and procedures to obtain and verify the identity and purpose of customers who utilize compute resources that would be sufficient to train a covered model, and to implement the capability to promptly enact a full shutdown of any resources being used to train or operate models under the customer’s control.

SB 1047 would have created the Board of Frontier Models within the California Government Operations Agency, which would have been responsible under the law for approving the regulations and guidance issued by the agency to update the definition of a covered model, establish auditing requirements, and provide guidance for preventing unreasonable risks of critical harms.

In vetoing SB 1047, Governor Newsom criticized the bill as “a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.” He explained that the bill failed to “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data” and regulated AI models based only on their cost and size rather than their function.

California Consumer Privacy Act regulations. While legislative proposals are being considered, the California Privacy Protection Agency, the state agency tasked with enforcing the state’s comprehensive privacy law, the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act, continues its work on automated decision-making technology (ADMT) regulations. Once finalized, the ADMT regulations will provide an additional layer of AI regulation in California with respect to ADMT that processes the personal information of California residents.

The proposed regulations define ADMT as any technology that processes personal information and uses computation to execute a decision, replace human decision-making, or substantially facilitate human decision-making. “Substantially facilitate human decision-making” means using the output of the technology as a key factor in a human’s decision-making, such as generating a score about a California resident that a human reviewer uses as a primary factor to make a significant decision about them. ADMT includes software or programs derived from machine learning, AI, statistics, or other data-processing techniques.

The proposed regulations apply to businesses that use ADMT for significant decisions concerning California residents or for extensive profiling. Significant decisions are those that result in access to, or the provision or denial of, financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, health care services, or essential goods or services. The proposed regulations also apply to businesses that process personal information to train ADMT capable of being used for significant decisions concerning California residents, establishing individual identity, physical or biological identification or profiling, generating a deepfake, or operating generative models.

Under the proposed ADMT regulations, California residents are afforded the right to opt out of ADMT used for significant decisions or extensive profiling. Consistent with requirements for operationalizing other California resident rights under the CCPA, businesses must provide two or more designated methods for submitting requests to opt out of ADMT, which must be easy for California residents to execute and require minimal steps. Businesses are not required to provide the ability to opt out if ADMT is necessary for security, fraud prevention, or safety purposes, or if there is a method to appeal the decision to a qualified human reviewer with the authority to overturn the decision. California residents also have the right to access ADMT, which includes receiving plain language explanations of the purpose, output, and usage of ADMT, as well as a description of how the ADMT works with respect to the particular California resident.

Connecticut SB 2. Connecticut did not pass SB 2, its proposed comprehensive AI bill, in its last legislative session. However, the bill’s advancement in the Connecticut legislature was closely watched nationally and may influence legislative proposals in other states in 2025. The Connecticut bill followed a risk-based approach similar to the EU AI Act. It focused on the regulation of high-risk AI systems, defined as AI systems developed for making consequential decisions that have significant legal or similar effects on consumers. When defining consequential decisions, SB 2 listed similar areas of high risk as the Colorado AI Act, including the availability, cost, or terms of any criminal justice remedy, education enrollment or opportunity, employment or employment opportunity, essential good or service, financial or lending service, essential government service, health care service, housing, insurance, or legal service.

The bill would have imposed obligations on both developers and deployers of high-risk AI systems to protect consumers from algorithmic discrimination risks, defined as unjustified differential treatment by AI systems based on protected classifications, with some exceptions. Deployers would have been required to implement a risk management policy and program, complete impact assessments, review the deployment of high-risk AI systems annually, and notify consumers when high-risk AI systems were used for consequential decisions. Developers would have had to provide deployers with documentation detailing the intended uses, limitations, and risks of high-risk AI systems, as well as disclose any known risks of algorithmic discrimination to the deployers and the Connecticut attorney general.

Global AI Laws

The development of AI law globally has lagged only slightly behind the EU’s and appears to be following a similar risk-based approach. The legal framework for many of these emerging laws is the Organisation for Economic Co-operation and Development (OECD) AI Principles. First announced in 2019, the principles include recommendations for: (1) inclusive growth, sustainable development, and well-being; (2) respect for the rule of law, human rights, and democratic values, including fairness and privacy; (3) transparency and explainability; (4) robustness, security, and safety; and (5) accountability. The principles were most recently updated in 2024. Currently, 47 countries, including the U.S., have committed to adhering to the principles.

As of October 2024, at least 24 countries—in addition to the EU—were considering AI legislation. These laws are in varying stages of the legislative process. Like the EU AI Act, much of the proposed legislation in other countries takes a risk categorization approach toward management of AI. Canada, for example, was considering an Artificial Intelligence and Data Act (AIDA) that would protect Canadians from high-risk systems. In August 2024, the Australian government released the Voluntary AI Safety Standard, consisting of 10 guardrails to help organizations mitigate and manage AI risks. In India, a proposed Digital India Act would replace the IT Act of 2000 to regulate high-risk AI systems. As with U.S. law, it is likely that over the next few years we will see numerous other countries implement AI legislation closely modeled on the EU AI Act’s risk-based approach.

NIST AI Risk Management Framework

As the name suggests, the RMF is intended to provide guidance for organizations designing, developing, deploying, or using AI systems to manage the risks of AI and promote the trustworthy development of AI. Issued in January 2023, the RMF predates the passage of both the EU AI Act and the Colorado AI Act. For many U.S. businesses, the RMF has become a standard governance guide for AI compliance.

Although not legally binding for most U.S. businesses, the RMF fits easily into the emerging risk-based compliance scheme reflected in both the EU AI Act and the Colorado AI Act. The RMF provides significant detail to help organizations frame risk. More so than existing law, the framework identifies a wide variety of potential harms, beyond simply harm to people, including harm to an organization as well as harm to an ecosystem. Characteristics of trustworthy AI systems include validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with harmful bias managed.

A significant difference between the RMF and existing AI laws is that the framework does not attempt to prescribe compliance steps. Rather, the RMF provides guidance to help organizations govern, map, measure, and manage AI risks.
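
For teams operationalizing the framework, one common approach is to organize review activities under those four functions. The Python sketch below is a hypothetical illustration: the function names come from the RMF itself, but the data structure and the example activities are assumptions, not steps prescribed by NIST.

```python
from dataclasses import dataclass, field

# The four RMF functions named above; the activities added below are illustrative
# assumptions, not steps prescribed by NIST.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RmfReview:
    """Tracks review activities for one AI system, grouped by RMF function."""
    system_name: str
    activities: dict[str, list[str]] = field(
        default_factory=lambda: {fn: [] for fn in RMF_FUNCTIONS}
    )

    def add(self, function: str, activity: str) -> None:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {function}")
        self.activities[function].append(activity)

# Hypothetical example: a resume-screening tool under review.
review = RmfReview("resume-screening tool")
review.add("govern", "Assign an accountable owner and an approval workflow")
review.add("map", "Document intended use, deployment context, and affected individuals")
review.add("measure", "Test outputs for disparate impact across protected classes")
review.add("manage", "Define human-override and rollback procedures")
print(review.activities)
```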

Organizations that are not presently subject to either the EU AI Act or the Colorado AI Act often choose to follow the RMF because it provides a well-recognized compliance standard for the development of AI that, in principle, aligns with developing AI law. Put differently, following the RMF can often help ensure baseline compliance with the EU AI Act and the Colorado AI Act. Organizations that properly frame AI-related risks in accordance with the RMF, for example, should identify high-risk AI that would require additional compliance steps under existing law.

NIST also released guidance documents designed to assist companies in developing generative AI. The documents are the AI RMF Generative AI Profile (NIST AI 600-1) and the Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST SP 800-218A).

Litigation and Insurance Considerations

Currently, there is little insurance coverage written specifically for AI-related risks, and there are few AI-specific provisions, such as exclusions, written into current policies, including cyber policies. Additionally, the EU AI Act and Colorado AI Act do not provide private rights of action. However, given that AI-related risks implicate not only the newly enacted or contemplated AI regulations discussed above but also traditional “terrestrial” legal principles, it is likely that such AI-related risks can trigger litigation under other existing laws and insurance coverages in existing “traditional” policies.

For example, as noted above, a key focus of the Colorado AI Act is the protection from algorithmic discrimination, and AI-related legislative proposals in states such as California and Illinois include bills targeting algorithmic discrimination in the employment context. Moreover, even in the absence of AI-specific antidiscrimination laws, existing antidiscrimination laws, including federal civil rights laws, can also be deployed to target such algorithmic discrimination.

Illustrating this point is a first-of-its-kind settlement of a lawsuit filed by the Equal Employment Opportunity Commission (EEOC) against iTutorGroup, Inc. (iTutor), a provider of remote English-language tutoring services to students in China. As described in the EEOC’s press release announcing the settlement, in that lawsuit, the EEOC alleged that iTutor programmed its job application software “to automatically reject female applicants aged 55 or older and male applicants aged 60 or older,” resulting in the rejection of “more than 200 qualified applicants . . . because of their age” in violation of the Age Discrimination in Employment Act (ADEA).

The iTutor settlement is in line with the EEOC’s continued focus on the potentially discriminatory impacts of employers’ use of AI. Among such efforts are the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative launched in 2021, as well as AI-related guidance published in May 2022 addressing disability discrimination. More recently, the EEOC published guidance entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” and in its 2023 draft strategic enforcement plan, included updates in recognition of “employers’ increasing use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, and make or assist in hiring decisions.”

Not surprisingly, comparable private party lawsuits alleging algorithmic discrimination are also being filed. For example, in Mobley v. Workday, Inc., the plaintiff alleged that Workday, a software vendor providing AI-driven screening tools to aid employers in choosing job applicants, should be held liable for the discriminatory effects of its AI screening tools under federal antidiscrimination laws. The plaintiff, Derek Mobley, alleged that his applications for 80–100 jobs with employers that used Workday’s AI screening tools were rejected because those tools enabled the employers to discriminate against applicants, including on the basis of protected categories such as age. Mobley thus alleged that in disseminating its AI screening tools, Workday committed intentional and disparate impact discrimination in violation of Title VII of the Civil Rights Act, the ADEA, and the Americans with Disabilities Act (ADA).

Workday moved to dismiss. In a July 2024 opinion, the California federal district court denied Workday’s motion in part, ruling that Mobley plausibly alleged that Workday functioned as an agent of its employer-clients because, as alleged, those employers delegated their function of accepting or rejecting candidates to Workday and its AI screening tools. The Mobley court found that such a delegation of duties fell within the meaning of the term “agent” in the definition of “employer” under the relevant antidiscrimination laws. Among the allegations upon which the court relied to find that Workday acted as an agent was that, as illustrated by Mobley having received rejection emails in the middle of the night, Workday’s AI software itself automatically rejected or moved candidates forward in the hiring process.

Even more significantly, the Mobley court expressly found it irrelevant that it was Workday’s AI tools, as opposed to a natural person, that engaged in the allegedly discriminatory conduct:

Moreover, Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being who is sitting in an office going through resumes manually to decide which to reject. Nothing in the language of the federal anti-discrimination statutes or the case law interpreting those statutes distinguishes between delegating functions to an automated agent versus a live human one. To the contrary, courts applying the agency exception have uniformly focused on the “function” that the principal has delegated to the agent, not the manner in which the agent carries out the delegated function.

As the iTutor and Mobley lawsuits illustrate, existing legal frameworks, including those unrelated to technology at all, are as applicable to AI-related liabilities as are regulations specifically targeting AI. Likewise, despite not specifically mentioning or targeting AI-related risks, to the extent coverage is otherwise available, existing insurance policy forms can be responsive to AI-related liabilities, particularly where, as was true for the iTutor and Mobley cases, such liabilities are grounded on well-established liability theories.

The iTutor lawsuit aptly illustrates this point. In the iTutor case, the EEOC alleged that iTutor’s algorithmic screening of applicants for employment was discriminatory on the basis of age in violation of the ADEA. Meanwhile, a typical employment practices liability insurance (EPLI) policy form, which is designed to respond to such claims involving employment-related violations, may have been responsive to the EEOC’s lawsuit if such coverage had been purchased.

In a typical EPLI policy, the insurer promises to pay for losses, including the costs of defense, resulting from claims, including lawsuits, against the insured company alleging various types of employment-related wrongs. For example, in the insuring agreement of one typical form, the insurer promises to “pay on behalf of the Insured, Loss from Claims made against the Insured during the Policy Period . . . for an Employment Practice Act.” The same policy form defines “Claim” to include “a judicial or civil proceeding commenced by the service of a complaint” and “Employment Practice Act” to include “violation of any federal, state or local civil rights laws.” As a lawsuit, iTutor qualifies as a “claim” that alleges an employment practice act, i.e., violation of the ADEA, and thus there appears to be little question that the iTutor case would trigger coverage under a typical EPLI policy, such that in the absence of any relevant exclusion, and subject to the remaining policy terms, EPLI insurance would likely be obligated to respond.

Employment law is not the only area of the law giving rise to AI-related litigation. For example, now that the Securities and Exchange Commission (SEC) has begun to focus on so-called “AI-washing,” numerous lawsuits alleging such AI-washing have also been filed. Comparable to allegedly misleading public company disclosures relating to a company’s “green” bona fides, AI-washing focuses on allegedly misleading public company disclosures relating to AI. Among the securities class actions based on alleged AI-washing is one recently filed in the U.S. District Court for the Northern District of California against software development platform GitLab Inc.

In GitLab, the plaintiff shareholder alleges that the company disclosures overstated the company’s ability to develop AI software features that supposedly would increase demand for the company’s software, and thus misled investors. Accordingly, assuming that GitLab has public company directors and officers (D&O) insurance coverage, which is expressly designed to insure public companies against losses arising out of “securities claims,” including lawsuits alleging violations of securities laws, it is reasonable to expect that such a lawsuit should trigger the company’s D&O policy. In addition, companies that encounter disruptions or losses involving AI, even in the absence of AI-washing allegations, could face lawsuits alleging that executives failed to meet their fiduciary obligations to the company. Such lawsuits could also implicate the companies’ D&O coverage.

Needless to say, there are areas of the law beyond employment and securities law that could give rise to AI-related liabilities. For example, companies could also potentially face liability based on allegations that a plaintiff’s bodily injuries somehow resulted from AI, which would then potentially implicate general liability insurance, the traditional source of coverage for bodily injury claims.

In sum, though AI-related litigation and risk are already emerging, the extent of such risks and the insurance ramifications for those risks remain to be seen.

Best Practices to Consider

When considering best practices in light of the wave of AI legislation, companies should:

  • Determine if any high-risk AI systems are being developed or used or will be developed or used by the company (i.e., any AI systems that make or are a substantial factor in making “consequential decisions”).
  • Identify what role the company plays regarding the AI system (i.e., developer or deployer).
  • Develop new, or update existing, AI governance policies to comply with a nationally or internationally recognized AI risk management framework such as the RMF.
  • Draft and implement a risk management policy and program if deploying a high-risk AI system.
  • Prepare required public-facing notices on the development or use of a high-risk AI system.
  • Establish processes for detecting and mitigating algorithmic bias arising from the use of high-risk AI systems.
  • Establish regular audits and train employees on the proper use of AI to ensure ongoing compliance with applicable AI laws.
  • Designate an individual to oversee compliance with applicable AI laws’ requirements for human oversight of the AI system.
  • Organize processes to complete impact assessments if deploying a high-risk AI system.
  • Prepare processes to notify the relevant regulators of algorithmic discrimination caused, or reasonably likely to be caused, by a high-risk AI system.
  • Monitor new rules that may be issued by regulators.

Compliance and Governance in a Shifting Legal Landscape

Compliance with the EU AI Act, the Colorado AI Act, and other AI legislation, as well as alignment with the RMF, requires that covered entities undergo a series of self-classification exercises, focused initially on whether the entity develops or deploys AI and secondarily on the risk classification of the AI systems being implemented. Understanding this classification structure will be key for U.S. entities using or considering the use of AI in both the U.S. and EU marketplaces. Given the expected impact of the EU AI Act on the development of AI law worldwide, mastering these new laws and classifications will be not only vital to all entities but also a cornerstone of overall AI governance moving forward.

Other countries, U.S. states, and government agencies will continue to adopt their own laws and regulations governing AI, and each new enactment is likely to accelerate the pace of change. Contentious issues may arise between those favoring stricter policies and standards and those favoring more lenient ones as this legislation develops. Additionally, while the Trump administration favors private-sector self-regulation of AI to foster innovation, the states are tending toward stricter legislation to ensure the secure, unbiased, and transparent use of AI. Companies will want to continue monitoring the legal landscape of AI, including not just the laws and regulations that pass but also the litigation and the applicability of insurance coverage, to help mitigate risks with respect to the AI systems they develop or deploy.
