Artificial intelligence (AI) has grown in importance across all business functions, a process that has significantly accelerated in recent years. Increased usage has broadened regulatory concerns about potential consequences of AI decision-making, especially in decisions related to employment, healthcare, finance, and insurance. In 2016, Cathy O’Neil detailed these concerns in her award-winning nonfiction bestseller, Weapons of Math Destruction. She highlighted the experience of Kyle Behm, a Vanderbilt undergraduate student who took a break from college to address mental health issues. After his condition improved, he applied for a part-time, minimum-wage job at a Kroger grocery store. “[I]t seemed like a sure thing,” but Kroger’s personality test “red-lighted” him, so he was considered unqualified.
Unbeknownst to Mr. Behm, the U.S. Equal Employment Opportunity Commission (EEOC) had been warning employers about misuse of testing tools since 2007. The EEOC published guidance explaining the legal limits that apply to such tests and made clear that both intentional and unintentional discrimination are prohibited. Tests could “violate the federal anti-discrimination laws if they disproportionately exclude people in a particular group by race, sex, or another covered basis, unless the employer can justify the test or procedure under the law.” In addition to discrimination, such tests may be unlawful if they violate the Americans with Disabilities Act (ADA), which prohibits medical examinations and disability-related questions prior to making a conditional job offer.
Despite the known risks of personality testing, the market for these tests and other selection tools is growing. “Personality testing is roughly a $2 billion industry, according to Tomas Chamorro-Premuzic, a psychology professor and author of I, Human.” With the advent of AI systems, employers can expand their screening from interactive interviews to conducting trait assessments based only on the candidate’s resume and cover letter. For instance, the EEOC recently entered into a consent decree with iTutorGroup Inc. for using an automated screening tool designed to reject female candidates over the age of fifty-five and their male counterparts over the age of sixty.
AI has enabled the expanded use of automated tools. Recent EEOC guidance highlights how AI can be integrated into the hiring process:
- resume scanners that prioritize applications using certain keywords;
- employee monitoring software that rates employees on the basis of their keystrokes or other factors;
- “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
- video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
- testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.
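To make the mechanics concrete, the following is a minimal, hypothetical sketch in Python of the first tool on that list, a keyword-based resume scanner. The keywords, weights, and cutoff are invented for illustration and do not reflect any vendor's actual product.

```python
# Hypothetical sketch of a keyword-based resume scanner of the kind the EEOC
# guidance describes; keywords, weights, and threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    resume_text: str

KEYWORDS = {"customer service": 2.0, "inventory": 1.5, "scheduling": 1.0}  # invented
THRESHOLD = 2.5  # invented cutoff

def score_resume(applicant: Applicant) -> float:
    """Sum the weights of the keywords found in the resume text."""
    text = applicant.resume_text.lower()
    return sum(weight for keyword, weight in KEYWORDS.items() if keyword in text)

def prioritize(applicants: list[Applicant]) -> list[Applicant]:
    """Keep only applicants whose keyword score clears the cutoff, best first."""
    passing = [a for a in applicants if score_resume(a) >= THRESHOLD]
    return sorted(passing, key=score_resume, reverse=True)
```

Even a sketch this simple shows where bias can enter: the choice of keywords and the cutoff silently determine who is never seen by a human reviewer.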
Enterprises can use chatbots to collect and store answers provided by potential customers and employees without the party even being aware that the tool is part of a screening process. Natural language processing technology enables chatbot systems to be calibrated to evaluate a person’s language for both intellectual and emotional characteristics. OpenAI’s “ChatGPT-3.5 demonstrated an exceptional ability to differentiate and elucidate emotions from textual cues, outperforming human[s].” ChatGPT-4, the next generation system, significantly outperformed the earlier system, and ChatGPT-4o outperforms its predecessor. “Pre-employment screening tests are becoming more common. . . . [Such tests include] psychometrics assessments, which aim to quantify characteristics we often think of as intangible: personality, attitudes, integrity or ‘emotional intelligence.’” As these systems become more ubiquitous, the public will have less ability to opt out of such screenings, and the screenings are likely to expand from employment to housing, finance, insurance, and other fields. The EEOC has added technical guidance specifically to address the growing concerns raised by algorithmic bias and AI systems.
Despite its emphasis on AI, however, the EEOC’s recent guidance does not differ from its earlier guidance on discrimination in testing. The guidance emphasizes that the legal consequences of disparate impact are not eliminated by offloading the decision-making to an AI system or to a third-party vendor.
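Disparate impact is ultimately an arithmetic comparison of selection rates, and the EEOC's technical guidance commonly describes the "four-fifths" rule as a rough rule of thumb rather than a definitive test. A minimal sketch of that calculation, using invented numbers, follows; the point is that the same audit applies whether the selections were made by a manager or by an algorithm.

```python
# Hypothetical illustration of a selection-rate (adverse impact) comparison;
# the applicant and selection counts below are invented.
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratio(rate_group: float, rate_highest: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return rate_group / rate_highest

# Invented example: 48 of 80 applicants selected in one group, 12 of 40 in another.
rate_a = selection_rate(48, 80)        # 0.60
rate_b = selection_rate(12, 40)        # 0.30
ratio = impact_ratio(rate_b, rate_a)   # 0.50

# Under the common "four-fifths" rule of thumb, a ratio below 0.80 may signal
# disparate impact warranting further scrutiny.
print(f"impact ratio: {ratio:.2f}; flagged: {ratio < 0.80}")
```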
The EEOC seeks to expand the reach of anti-discrimination laws to developers of screening tools through novel case law. In Mobley v. Workday, Inc., the EEOC argued that liability should be imposed on the software vendor that supplied an algorithmic tool that was used by an employer to discriminate illegally. As alleged, “Workday provides employers with automated ‘applicant screening systems’ that incorporate ‘algorithmic decision-making tools.’ These tools analyze and interpret resumes and applications and then ‘determine whether an employer should accept or reject an application for employment.’” More generally, the screening tools sold to employers might not make the final hiring decision, but they are often the sole decision maker for candidates rejected from the hiring pool.
The challenge for the EEOC is that federal statutes provide liability only for certain parties within the employment process. One party regulated by those statutes is an “employment agency,” which is defined as “any person regularly undertaking with or without compensation to procure employees for an employer.” The EEOC explained that “[s]creening and referral activities are among those classically associated with employment agencies.”
If Workday is ultimately liable as an “employment agency,” it will be due to the company’s business plan rather than anything unique to the AI products. The challenge for the EEOC is that the typical product developer does not conduct the activities necessary to be classified as a “screening and referral” service. Even if a software vendor created a tool that defaulted to discriminatory practices, the sale of the tool alone would not transform the company into an “employer” or an “employment agency.”
Under a second theory, the EEOC could hold Workday liable as an “indirect employer.” The EEOC finds support for this approach in Sibley Memorial Hospital v. Wilson. The D.C. Circuit opinion explained that parties “who are neither actual nor potential direct employers of particular complainants” may nonetheless be held liable under Title VII of the Civil Rights Act of 1964 when they “control access to such employment and . . . deny such access by reference to invidious criteria.” A gatekeeper to employment that deploys illegal criteria would be liable as an indirect employer. Indirect employer liability is included in the EEOC Compliance Manual.
The use of gatekeeping software might be sufficient to make the developers into gatekeepers. In Association of Mexican-American Educators v. California, the Ninth Circuit explained that “an entity that is not the direct employer of a Title VII plaintiff nevertheless may be liable if it ‘interferes with an individual’s employment opportunities with another employer.’” As the court noted, however, its “conclusion [was] dictated by the peculiar degree of control that the State of California exercise[d] over local school districts.” Thus, despite certain broad language in the opinion, the court narrowly applied those words to the facts. It is unclear whether courts will extend liability to software developers, even if a particular developer dominates the market.
Finally, vicarious liability may also apply because Title VII, the ADA, and the Age Discrimination in Employment Act (ADEA) extend liability from the employer to “any agent” of the employer. The EEOC Compliance Manual provides that “[a]n entity that is an agent of a covered entity is liable for the discriminatory actions it takes on behalf of the covered entity.” In other contexts, however, a buyer-seller relationship is insufficient to make one company the agent of the other.
These three EEOC theories simply do not provide sufficient breadth to address the concerns raised by the massive reliance on employment screening tools fueled by algorithmic decision making. To fill the gaps in the federal statutory scheme and protect civil rights, municipalities and states have begun to legislate.
On May 17, 2024, Colorado took the lead, becoming the first state to enact legislation to address algorithmic bias. Commonly known as the Colorado AI Act, the legislation is designed to provide “Consumer Protections in Interactions with Artificial Intelligence Systems.” To address problems illustrated by Mobley v. Workday, Inc., the statute defines obligations both for “developers” of AI systems and “deployers” of those systems. The definition of “deployer” is limited to one doing business in Colorado that deploys a “high-risk artificial intelligence system,” which is defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”
“Consequential decision” means:
a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:
- (a) education enrollment or an education opportunity;
- (b) employment or an employment opportunity;
- (c) a financial or lending service;
- (d) an essential government service;
- (e) health-care services;
- (f) housing;
- (g) insurance; or
- (h) a legal service.
As of February 1, 2026, Colorado will impose a duty on developers and deployers to avoid “algorithmic discrimination” when using a “high-risk artificial intelligence system.” “Algorithmic Discrimination” means:
any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.
The state law will offer broader civil rights protections than Title VII, the ADA, and the ADEA because the Colorado statute protects those who are “perceived” to fall into the protected classes. It also identifies limited proficiency in the English language as a protected characteristic.
Even with the careful, nested definitions, the phrase “material legal or similarly significant effect” is left undefined. The term is a reference to the European Union’s General Data Protection Regulation. The Colorado AI Act imposes greater regulatory requirements on high-risk activities, like the EU Artificial Intelligence Act, which “takes a risk-based approach to regulation, applying different rules to AI systems according to the threats they pose . . . to health, safety, or rights of individuals.”
In keeping with this risk-based approach, and to manage the scope of the law, the Colorado AI Act identifies many common uses of AI that are excluded from the definition of “high-risk artificial intelligence system,” including fraud protection, cybersecurity, data storage, and calculators.
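Read together, these definitions suggest a gating question for any business: does the system make, or is it a substantial factor in making, a decision in one of the enumerated consequential categories, and does it fall outside the statutory exclusions? The sketch below encodes that question in Python. The category and exclusion labels paraphrase the act, the function and its inputs are hypothetical, and the sketch necessarily punts on the undefined “material legal or similarly significant effect.”

```python
# Hypothetical classifier reflecting the statute's nested definitions;
# labels paraphrase the act and the logic is illustrative only.
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial or lending service",
    "essential government service", "health-care services",
    "housing", "insurance", "legal service",
}

EXCLUDED_PURPOSES = {  # partial, paraphrased list of statutory exclusions
    "anti-fraud", "cybersecurity", "data storage", "calculator",
}

def is_high_risk(decision_category: str, purpose: str,
                 substantial_factor: bool) -> bool:
    """Return True if the system would plausibly be 'high-risk' under the act:
    it makes, or is a substantial factor in making, a consequential decision,
    and it is not deployed solely for an excluded purpose."""
    if purpose in EXCLUDED_PURPOSES:
        return False
    return substantial_factor and decision_category in CONSEQUENTIAL_CATEGORIES

# Example: a resume-screening tool that is a substantial factor in hiring decisions.
print(is_high_risk("employment", "screening", substantial_factor=True))  # True
```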
The statute includes a carve-out for small businesses, exempting those that employ fewer than fifty employees. It offers no safe harbor, however, tied to minimal in-state revenue or to serving only a small number of in-state customers.
The statute and the regulations that are likely to follow can be enforced only by the state’s attorney general. No private right of action is authorized. Presumably, the attorney general’s enforcement discretion will further limit the application of the law to those cases that have a significant impact on Colorado residents.
The Colorado AI Act does not establish comprehensive AI regulations. Instead, it addresses two concerns. First, it requires that the public receive notice when a company is using AI for communications or other purposes. Second, it limits algorithmic bias or discrimination.
To address the public disclosure requirement, the Colorado AI Act requires deployers of AI systems to “ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system,” unless “it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.” The Colorado AI Act does not mandate that the disclosure describe how the data will be used, but it is likely that such a requirement will be added to the regulations that may follow. In particular, if companies are found to be using chatbots to prescreen candidates for employment, housing, or other protected services without having disclosed that use of the system, then the notice could be deemed inadequate by the attorney general.
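As a concrete illustration, a deployer’s chatbot could satisfy the interaction-disclosure requirement with something as simple as a banner message delivered before the first substantive exchange. The wording and function below are hypothetical, not statutory text.

```python
# Hypothetical sketch of an AI-interaction disclosure for a customer-facing
# chatbot; the message wording is illustrative, not drawn from the statute.
AI_DISCLOSURE = (
    "You are interacting with an automated artificial intelligence system, "
    "not a human representative."
)

def start_session(send_message, obvious_to_reasonable_person: bool = False) -> None:
    """Send the disclosure before any substantive exchange unless the AI nature
    of the interaction would be obvious to a reasonable person."""
    if not obvious_to_reasonable_person:
        send_message(AI_DISCLOSURE)

# Example usage with a stand-in transport function.
start_session(send_message=print)
```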
A chatbot could be misused to screen and discriminate in a number of ways. First, in conversation with a human, the chatbot could elicit information about the person’s interests or activities in a manner that collects and later evaluates the individual based on classifications prohibited under the law. Second, the chatbot could “listen” to the person’s word choice, typing speed (if text-based), or accent (if audio-based), predict ethnicity, and assign values based on those characteristics. Accent discrimination is often a surrogate for other forms of illegal discrimination.
Colorado is not the first jurisdiction to require mandatory disclosure regarding the use of AI systems. California adopted the Bolstering Online Transparency (BOT) Act on September 28, 2018. Unlike Colorado’s focus on eliminating discrimination, earlier chatbot disclosure laws commonly focused on dating app fraud, where participants believed they were communicating with other individuals to develop relationships, when instead they were being milked for fees by automated response systems.
Utah has a similar chatbot disclosure law. The Utah Artificial Intelligence Policy Act went into effect on May 1, 2024. Its scope is narrower than that of the Colorado AI Act, primarily focusing on disclosure of the use of generative AI while clarifying that deployers of generative AI systems are responsible for the output of those systems when that output violates various consumer protection laws.
Colorado breaks new ground with its second requirement. The more prominent aspect of the Colorado law focuses on eliminating algorithmic bias in consequential decisions when automated AI systems are used to make selections involving employment, health care, legal services, or other high-risk categories. Colorado has often been at the center of controversy over civil rights protection, and bias creeping into automated decision-making would undermine important state anti-discrimination efforts.
The central provision of the Colorado AI Act offers a deceptively simple formula for this protection. “[A] developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.” The act establishes a rebuttable presumption that the developer of the high-risk artificial intelligence system used reasonable care if the developer followed the steps of the statute’s safe harbor and any applicable regulations.
To earn the presumption of reasonableness, a developer must provide to the deployers numerous disclosures, including a “general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system”; summary descriptions of the training data; documentation of foreseeable known harms; intended benefits of the system; evidence of the system’s evaluation prior to deployment; data governance procedures; harm mitigation steps; instructions to limit harmful uses of the system; and more.
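One way a developer might organize those disclosures is as a structured record handed to each deployer, akin to a model card. The field names below paraphrase the statutory list, the class and its example values are purely illustrative, and nothing here is an official schema.

```python
# Hypothetical record of developer-to-deployer disclosures; field names
# paraphrase the statutory list and are not an official schema.
from dataclasses import dataclass, field

@dataclass
class DeveloperDisclosure:
    system_name: str
    intended_uses: str                 # reasonably foreseeable uses
    known_harmful_uses: str            # known harmful or inappropriate uses
    training_data_summary: str         # summary description of the training data
    known_limitations: str             # documented foreseeable harms or limitations
    intended_benefits: str
    evaluation_summary: str            # evidence of pre-deployment evaluation
    data_governance: str               # data governance procedures
    mitigation_measures: list[str] = field(default_factory=list)
    usage_instructions: str = ""       # instructions to limit harmful uses

# Hypothetical example; every value is invented for illustration.
disclosure = DeveloperDisclosure(
    system_name="ExampleScreener",
    intended_uses="Ranking applications for clerical roles.",
    known_harmful_uses="Use as the sole basis for rejecting applicants.",
    training_data_summary="Resumes and hiring outcomes, 2018-2023 (illustrative).",
    known_limitations="Lower accuracy for resumes with non-traditional formats.",
    intended_benefits="Faster initial review of large applicant pools.",
    evaluation_summary="Selection-rate audit across demographic groups.",
    data_governance="Annual review of training data sources.",
    mitigation_measures=["Human review of all rejections"],
    usage_instructions="Do not act on outputs without human oversight.",
)
```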
A developer or deployer must gather and publish a variety of technical information to take advantage of the presumption of care afforded under the statute. In addition to the required documentation that the developer must provide to the deployer of the high-risk artificial intelligence system, the developer also must provide the public with a disclosure document that identifies “the types of high-risk artificial intelligence systems that the developer has developed or . . . modified” and “how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise” from those systems.
Deployers of high-risk artificial intelligence systems must also act with reasonable care and can benefit from a presumption of reasonable care if they meet the safe harbor requirements. As of February 1, 2026, deployers must adopt NIST’s Artificial Intelligence Risk Management Framework or its equivalent, comply with any risk management framework designated by the Colorado Attorney General, complete an impact assessment within the first ninety days of initial deployment, and update that assessment annually.
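The deployer’s obligations thus have a time dimension: an initial assessment shortly after deployment and annual updates thereafter. The small, hypothetical scheduling helper below simply illustrates that cadence as described above; the dates and the function are invented, not statutory.

```python
# Hypothetical helper computing when a deployer's impact assessments come due,
# following the cadence described above (illustrative only; leap-day edge
# cases and any statutory nuances are ignored).
from datetime import date, timedelta

def assessment_due_dates(deployment_date: date, years: int = 3) -> list[date]:
    """Initial assessment within ninety days of deployment, then annual updates."""
    initial = deployment_date + timedelta(days=90)
    return [initial] + [initial.replace(year=initial.year + n)
                        for n in range(1, years + 1)]

for due in assessment_due_dates(date(2026, 2, 1)):
    print(due.isoformat())
```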
Deployers must notify an individual if the system makes a consequential decision concerning that individual and offer a process to appeal that decision. The statute incorporates data protection safeguards that enable an individual to correct personal data used to make a consequential decision.
By establishing a duty of reasonable care when deploying high-risk artificial intelligence systems, Colorado sends a clear message to businesses that they have an affirmative duty to understand the tools they use and to identify potential harms before deploying these systems. The disclosure requirements that must be satisfied to earn the presumption, however, are both onerous and vague. Clarifying regulations will be essential to enable companies to meet those requirements.
The Colorado AI Act will not take effect until February 1, 2026. This long delay enables the Colorado Attorney General to develop regulations and the legislature to clarify the steps necessary to come within the presumption of reasonableness. The delay also reflects a degree of ambivalence on the part of Colorado Governor Jared Polis, who expressed his concerns in the letter that accompanied his signature on the bill:
This law creates a complex compliance regime for all developers and deployers of AI doing business in Colorado, with narrow exceptions for small deployers. There are also significant, affirmative reporting requirements between developer and deployer, to the attorney general, and to consumers. . . . And while the guardrails, long timeline for implementation and limitations contained in the final version are adequate for me to sign this legislation today, I am concerned about the impact this law may have on an industry that is fueling critical technical advancements across our state for consumers and enterprises alike.
Governor Polis’ letter made clear that he would welcome federal legislation preempting state regulatory efforts. Moreover, the Governor suggested that, by enacting the legislation, Colorado fuels the imperative to address algorithmic discrimination at a national level. Notwithstanding these preferences for federal legislation, other states are likely to enact regulations to protect their citizens from algorithmic discrimination. For example, the California Privacy Protection Agency has proposed rules to regulate automated decision-making technologies. Despite the complexity of the safe harbor, the core requirement that both developers and deployers of high-risk artificial intelligence systems exercise reasonable care provides an important baseline for AI development. Unlike the European Union Artificial Intelligence Act, however, the Colorado law prohibits no categories of high-risk algorithmic decision-making outright.
The long period before the law goes into effect creates an opportunity for the development of clarifying regulations and for negotiations to amend the statutory requirements. At a practical level, there are myriad technical issues regarding the ability of a developer or deployer to gather and publish the kind of information necessary to take advantage of the presumption of care afforded under the statute.
Beyond these practical issues, the Colorado AI Act raises questions about its constitutionality. The constitutional challenges are likely to focus on the act’s effect on interstate commerce under the Dormant Commerce Clause and its restrictions on speech under the First Amendment.
“By its terms, the Commerce Clause grants Congress the power ‘[t]o regulate Commerce . . . among the several States . . . .’” There is no question that, under the Commerce Clause, the federal government could preempt Colorado’s AI Act and the entire field of artificial intelligence regulation if it were to expressly choose to do so, or it could enact laws that covered the entire field, leaving the states no room to supplement the federal regulatory scheme. The more complex question is the extent to which the Commerce Clause prohibits state laws when there is no federal regulation. A state may not regulate commerce that occurs outside the state or discriminate against out-of-staters. The open question is the extent to which an in-state regulation can impact commerce outside that state.
The Supreme Court recently addressed the Dormant Commerce Clause. In National Pork Producers Council v. Ross, the Court narrowly upheld California’s Proposition 12, which prohibits the sale of pork products in California from pigs that were “confined in a cruel manner.” In doing so, the Court limited the longstanding balancing requirement of Pike v. Bruce Church, Inc., under which “a court must at least assess ‘the burden imposed on interstate commerce’ by a state law and prevent its enforcement if the law’s burdens are ‘clearly excessive in relation to the putative local benefits.’” Although a majority of the Court refused to apply Pike’s balancing test in Ross, only a plurality of the Court seemed to support doing away with the test altogether. Where the developers of high-risk artificial intelligence systems are largely out-of-staters, the claims of discrimination and the ambiguities in Ross may embolden lower courts to invalidate intrusive regulations on the basis of the Dormant Commerce Clause, despite the Court’s narrow decision in Ross.
The Colorado AI Act is also likely to be challenged as an unconstitutional regulation of speech. First, an artificial intelligence system that “generate[s] outputs, including content,” is a method of producing speech, so restricting a deployer’s publication of that output operates as a law abridging free speech. Second, as a matter of basic First Amendment jurisprudence, the “government has no power to restrict expression because of its message, its ideas, its subject matter, or its content.” Litigants will argue that Colorado is regulating chatbot output based on its content.
There are only a few categorical exceptions to First Amendment protection: obscenity, child pornography, incitement, fighting words, defamation with actual malice, and true threats, as well as communicative crimes including fraud, extortion, and perjury. Discrimination remains illegal even though speech is used to conduct the discrimination. But the line between illegal conduct and protected speech is never clear cut.
The Colorado AI Act does not bar the output of the AI system. At the same time, the substantial disclosure requirements needed to gain protection of the presumption of reasonableness could create an undue burden on speech. Unreasonable disclosure requirements are also unconstitutional, particularly where they are used to discourage content or viewpoint.
To be constitutional, the law must focus on the conduct or the use of the speech, rather than on the speech itself. If enforcement actions were based on the output of chatbots, rather than on material, adverse decisions by those using the systems, the Supreme Court could well treat the statute as a de facto licensing scheme for chatbots, in violation of the fundamental principle that the First Amendment allows no prior restraint, as well as an unconstitutional form of content or viewpoint discrimination.
If the application of the Colorado AI Act is limited to the outcome regarding the “material legal or similarly significant effect” of consumer services in the specified high-risk categories, those determinations can be characterized as communicative conduct. To protect access to services and non-discriminatory employment opportunities, the government has a substantial, if not compelling, interest in protecting the consumer from discrimination or error, and the law is likely narrowly tailored to achieve that goal. If the law is applied in this narrow fashion, then courts should apply intermediate scrutiny in upholding the law, but even under strict scrutiny, there is a compelling government interest and ample alternatives for communication.
Ultimately, the scope of the Colorado AI Act, the California Automated Decisionmaking Technology regulations, and other efforts to address algorithmic discrimination will need to be tested against both Commerce Clause and First Amendment principles to properly protect the public from intentional discrimination and the invidious disproportionate effects of discriminatory algorithms while promoting innovation and protecting free speech.
The EEOC interpretations of the law proposed in Mobley further these goals, but the agency may not have the necessary statutory authority to adopt or enforce those interpretations. Governor Polis may be correct that federal regulation would be the best way to achieve this balance. The Colorado AI Act offers an opportunity to further the discussion and encourage federal legislation.