Business Law Today

July 2024

Colorado Enacts Law Regulating High-Risk Artificial Intelligence Systems

Tiyanna Danielle Lords

Summary

  • On May 17, 2024, Colorado enacted SB 205, which regulates the use of high-risk artificial intelligence systems by developers and deployers (i.e., users of such systems) to protect consumers from unfavorable and unlawful differential treatment through adverse decision-making.
  • The legislation defines an artificial intelligence system as “high risk” when it is deployed to make, or is a substantial factor in making, a consequential decision that has a material legal or similarly significant effect on access to or terms of a specific set of opportunities and services, such as employment.
  • Colorado SB 205 requires developers and deployers of high-risk artificial intelligence systems to comply with extensive monitoring and reporting requirements to demonstrate reasonable care has been taken to prevent known or reasonably foreseeable risks of algorithmic discrimination.
  • The bill requires compliance beginning February 1, 2026.

On May 17, 2024, Colorado enacted SB 205, broadly regulating the use of high-risk artificial intelligence systems to protect consumers from unfavorable and unlawful differential treatment. The bill, with which compliance is required beginning February 1, 2026, provides that both developers and deployers of high-risk artificial intelligence systems must comply with extensive monitoring and reporting requirements to demonstrate that reasonable care has been taken to prevent algorithmic discrimination. A violation of the requirements set forth in Colorado SB 205 constitutes an unfair trade practice under Colorado’s Consumer Protection Act.

What Is an Artificial Intelligence System?

Colorado SB 205 defines “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The artificial intelligence system becomes “high risk” when it is deployed to make, or is a substantial factor in making, a consequential decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.

A high-risk artificial intelligence system does not include, among other exclusions, technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions, and that is subject to an accepted use policy prohibiting the generation of discriminatory or harmful content.

Affirmative Obligations for Developers

Colorado SB 205 requires a developer of a high-risk artificial intelligence system to make available to the deployer (i.e., the user of the artificial intelligence system):

(a) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system;

(b) Documentation disclosing:

(I) High-level summaries of the type of data used to train the high-risk artificial intelligence system;

(II) Known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system;

(III) The purpose of the high-risk artificial intelligence system;

(IV) The intended benefits and uses of the high-risk artificial intelligence system; and

(V) All other information necessary to allow the deployer to comply with the requirements of Section 6-1-1703 [Deployer Duty to Avoid Algorithmic Discrimination];

(c) Documentation describing:

(I) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer;

(II) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation;

(III) The intended outputs of the high-risk artificial intelligence system;

(IV) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and

(V) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and

(d) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.

Affirmative Obligations for Deployers

Colorado SB 205 requires deployers of a high-risk artificial intelligence system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Reasonable care is demonstrated by the deployer’s implementation of a risk management policy and program governing the deployment of the high-risk artificial intelligence system, completion of an annual impact assessment, and disclosure to consumers when they are interacting with an artificial intelligence system or when the system has made a decision adverse to their interests.

Risk Management Policy and Program

The risk management policy and program must be “an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates.” It must incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.

The risk management policy and program must be reasonable considering:

(I) (a) The guidance and standards set forth in the latest version of the “Artificial Intelligence Risk Management Framework” published by the National Institute of Standards and Technology in the United States Department of Commerce, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of [the bill]; or (b) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate;

(II) The size and complexity of the deployer;

(III) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and

(IV) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer.

Impact Assessment

An impact assessment must be completed annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. The impact assessment must include, at a minimum, and to the extent reasonably known by or available to the deployer:

(I) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;

(II) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;

(III) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;

(IV) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;

(V) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;

(VI) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and

(VII) A description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system.

The impact assessment must also include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from the developer's intended uses of the high-risk artificial intelligence system. A deployer must maintain the most recently completed impact assessment, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.

Notice to Consumer

On and after February 1, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer must:

(I) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made;

(II) Provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement . . . ; and

(III) Provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. . . .

The deployer must also comply with substantial notice requirements if the high-risk artificial intelligence system makes a consequential decision that is adverse to the consumer, and it must allow the consumer to appeal the decision and to correct any incorrect personal data that the high-risk artificial intelligence system processed in making it.

If a deployer deploys a high-risk artificial intelligence system and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, must send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing the discovery.

A deployer who uses a high-risk artificial intelligence system that is intended to interact with consumers must ensure it discloses to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. Disclosure is not required under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.

Website Disclosures

A developer must make available, in a manner that is clear and readily available on the developer’s website or in a public use case inventory, a statement summarizing:

(I) The types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and

(II) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with [the above].

Similarly, a deployer must make available on its website a statement summarizing:

(I) The types of high-risk artificial intelligence systems that are currently deployed by the deployer;

(II) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system . . . ; and

(III) In detail, the nature, source, and extent of the information collected and used by the deployer.

Exemptions

These requirements do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed:

(a) The deployer:

(I) Employs fewer than 50 full-time equivalent employees; and

(II) Does not use the deployer’s own data to train the high-risk artificial intelligence system;

(b) The high-risk artificial intelligence system:

(I) Is used for the intended uses that are disclosed to the deployer as required by [the developer]; and

(II) Continues learning based on data derived from sources other than the deployer’s own data; and

(c) The deployer makes available to consumers an impact assessment that:

(I) The developer of the high-risk artificial intelligence system has completed and provided to the deployer; and

(II) Includes information that is substantially similar to the information in the impact assessment required [to be submitted by the deployer pursuant to the requirements of the bill].
