
The Business Lawyer

Winter 2024-2025 | Volume 80, Issue 1

Governing AI: Building Sustainable Programs to Capture the Evolving Technology of Artificial Intelligence Across Both Regulated and Unregulated Industries

Reena Bajowala

Summary

  • Over the past year, the regulation of artificial intelligence has shifted away from disclosure-and-consent obligations toward comprehensive governance requirements.
  • This article examines the evolution of AI laws and this year’s landmark enactments in Colorado and the European Union.
  • The article also discusses how insurance regulations are used to regulate AI and to fill gaps in AI governance guidance.

The regulation of artificial intelligence (AI) has shifted in focus in several ways over the past year. One major shift has been away from laws that imposed a simplified set of compliance obligations on the deployer of an AI system, principally requiring disclosure to the consumer and either consent to the use of AI or an opt-out or objection to the use of AI in connection with the consumer’s data. The shift has been toward legislation that places the onus on various parties in the AI ecosystem (not limited to the deployer) to implement comprehensive AI governance programs.

Of particular note, this year brought two groundbreaking pieces of legislation: one in the United States and one in the European Union. While drafts of the European Union’s Artificial Intelligence Act (AIA) have been bandied about since the law’s introduction in April 2021, Colorado stepped to the front of the line on May 17, 2024, by passing the world’s first law of general applicability to govern uses of AI: the Colorado Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (CAIA). The European Parliament passed the AIA on March 13, 2024, and the EU Council unanimously approved it on May 21, 2024. Both laws explicitly call for components of a governance program that require efforts far above and beyond the prior disclosure-and-consent mechanisms.

To set the stage, it is important to start with a common understanding of “governance.” According to the AI Risk Management Framework (RMF) set out by the National Institute of Standards and Technology (NIST), “governance” is “designed to be a cross-cutting function to inform and be infused throughout” three other functions set forth in the RMF: map, measure, and manage. ForHumanity, an entity that has proposed another leading risk management framework for AI, defines “governance” as a “structure of rules, practices, and processes used to direct, manage and oversee an entity.” For the purposes of this survey, the term “governance” refers to the oversight of a program that sets forth rules to manage the appropriate and effective use of AI within an organization. Those rules should include policies, personnel, and principles.

In Part I of this survey, I first provide context on the evolution of AI laws by briefly reviewing significant preexisting AI laws, which focused on deployers and were structured as disclosure-and-consent models. In Part II, I provide an overview of recently enacted AI laws and delve into their key features: (A) compliance obligations for more than just deployers, (B) requirements to implement risk management and other policies, (C) development of a governance program, and (D) requirements for testing and auditing. In Part III, I turn to a specific regulated industry, insurance, as persuasive authority on best practices for AI governance. In Part IV, I set forth key features of a best-in-class AI governance program drawn from AI laws across industries, as well as from practical experience. Part V offers a brief conclusion.

I. Evolution of AI Laws

Until this past year, very few laws applicable to AI or the manifestations of AI technology were in place. Two laws enacted in the United States before this year help mark the evolution of these regulations. The Illinois Artificial Intelligence Video Interview Act (AIVIA) went into effect January 1, 2020. The AIVIA requires employers that ask applicants to record video interviews, and that use AI analysis of the applicant-submitted video, to provide notice to each applicant, explain how the AI works, and obtain the applicant’s consent before the interview. Additional provisions limit sharing of such videos and require their deletion upon request.

Likewise, California’s Bolstering Online Transparency (B.O.T.) Act, which became operative on July 1, 2019, requires a person using a bot to communicate or interact with another person to clearly, conspicuously, and reasonably disclose the bot. The statute expressly prohibits the use of a bot “with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication” in two circumstances: first, to incentivize a purchase or sale of goods or services in a commercial transaction or, second, to influence a vote in an election. The statute, however, eliminates liability if the usage of a bot is clearly, conspicuously, and reasonably disclosed.

Notably, neither law requires risk management, a governance program, policies, testing, or auditing.

II. This Year’s AI Laws

In contrast with the laws of the past, the CAIA and AIA have much more robust compliance requirements. The CAIA applies to developers and deployers and aims to avoid “algorithmic discrimination” in the use of a “high-risk” AI system. A “high-risk” AI system is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” The law also defines “consequential decision” as:

a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.

Note that the definition of a “high-risk” AI system is subject to a series of exclusions, including use of AI in critical cybersecurity and information technology functions (e.g., firewalls, networking, spam filtering) or in providing information to consumers, provided the usage does not serve as a substantial factor in making a consequential decision relating to a consumer.
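
To illustrate the structure of the definition (and not the statutory text itself), the following minimal sketch models the CAIA’s “high-risk” test as a simple decision rule. The category names, exclusion list, and function signature are hypothetical simplifications for illustration only.

```python
# Minimal, hypothetical sketch of the CAIA "high-risk" test described above.
# Category and function names are illustrative, not statutory language.

CONSEQUENTIAL_DECISION_AREAS = {
    "education", "employment", "financial_or_lending_service",
    "essential_government_service", "health_care", "housing",
    "insurance", "legal_service",
}

# Simplified stand-ins for the statute's exclusions (e.g., firewalls,
# networking, spam filtering) when the system is not a substantial factor.
EXCLUDED_TECHNICAL_FUNCTIONS = {"firewall", "networking", "spam_filtering"}

def is_high_risk(decision_area: str, substantial_factor: bool,
                 technical_function: str = "") -> bool:
    """Rough approximation: high risk only if the system is a substantial
    factor in a consequential decision and no exclusion applies."""
    if technical_function in EXCLUDED_TECHNICAL_FUNCTIONS and not substantial_factor:
        return False
    return substantial_factor and decision_area in CONSEQUENTIAL_DECISION_AREAS

# Example: a resume-screening tool that substantially influences hiring decisions.
print(is_high_risk("employment", substantial_factor=True))   # True
print(is_high_risk("employment", substantial_factor=False))  # False
```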

Similar in structure is the AIA. It is also a legal framework that sets forth compliance obligations that vary based on the risk level of the AI system. Certain systems are “prohibited” if they deploy subliminal techniques to purposefully manipulate or deceive, exploit vulnerable groups, assign social scores, or use “real time” biometric identification systems in certain circumstances. Those systems cannot be placed on the market. Systems that are deemed “high risk,” however, may be placed on the market and deployed but carry a more onerous compliance burden than other permissible systems. “High-risk” systems are listed in Annex III of the AIA. A number of high-risk uses in the AIA mirror those of the CAIA: education, employment, essential government services, and at least certain financial, healthcare, and insurance services. However, the AIA also deems “high risk” certain applications regarding immigration and border control, justice and democratic processes, biometric data, critical infrastructure management, and law enforcement. In contrast, the AIA does not cover legal services and housing, which the CAIA specifically identifies as “high-risk” uses.

In analyzing the CAIA and AIA, certain themes emerge that offer guidance on constructing a comprehensive AI governance program. Specifically, both laws require compliance by both developers and deployers of AI systems, require a risk management framework, involve the use of written policies and procedures, require ongoing testing and auditing, and have overall programmatic aspects.

A. Compliance Obligations for More Than Just Deployers

A threshold distinction between the AI regulations previously put into place and those that came into being in the last year is the expansion of the parties responsible for compliance. The previous laws focused solely on the deployers of the technology, but, in recognition of the reality that the parties best positioned to address the risks and rewards of AI systems include both deployers and developers, the CAIA and the AIA both extend requirements to developers. In addition to setting forth obligations for both developers and deployers, the CAIA contemplates the use of third parties to conduct activities such as completing an impact assessment, developing a risk management policy, and reviewing the deployment of a high-risk AI system to ensure that it is not causing algorithmic discrimination and that the deployment does not intentionally and substantially modify the AI system (which would render the deployer a developer). While not directly stated as a statutory requirement, in order to comply, entities will likely need to manage third-party vendors that supply AI systems and components, or that provide support services in connection with the statutorily required AI governance program.

The AIA further defines and regulates the relationships among the various parties in the AI ecosystem. Like the CAIA, it imposes obligations on both deployers and developers of AI, and it sets forth transparency and disclosure requirements running from developers to deployers. The developer is required to draw up technical documentation of the high-risk system and detailed instructions on its use. Accordingly, there is an ongoing and connected relationship between the principal parties in the AI ecosystem.

But the AIA goes further and introduces additional roles in the cast of AI characters: importers and distributors. Article 25 of the AIA even sets forth responsibilities along what it calls the “AI Value Chain,” which includes any distributor, importer, deployer, or even any “other third-party.” Parties in any of those roles can become a developer under certain circumstances, including by white labeling an AI system, making substantial modifications to a high-risk AI system, or modifying the intended purpose of an AI system so that it is re-classified as high-risk. If, under those circumstances, another party assumes the role of developer, the initial provider of the AI system is relieved of that role. The initial provider remains obligated to provide necessary information, reasonably expected technical access, and other cooperation needed to comply with the AIA, so long as it did not clearly specify that the AI system is not to be changed into a high-risk system. Lastly, the AIA requires third parties that supply “an AI system, tools, services, components, or processes that are used or integrated into a high-risk AI system” to enter into contractual terms ensuring that the provider has the information and assistance it needs to comply with the AIA. Best practice to address that connected relationship entails establishing a vendor management program that encompasses AI systems.

B. Risk Management and Other Policies

Second, the CAIA and AIA require the use of written policies and procedures. A central component of an AI governance program is implementing a risk management policy. Such a policy sets out a framework to evaluate the risks and rewards of AI usage within the entity. The goals of such a policy are slightly narrower under the CAIA than under the AIA. The CAIA, for example, requires developers and deployers to “protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.” For developers, there is an added lens to identify risks from both “intended and contracted uses of” high-risk AI systems.

Article 9 of the AIA also requires, in additional detail, that, for high-risk AI systems, a “risk management system . . . be established, implemented, documented and maintained.” The risk management system must include identifying and analyzing “known and reasonably foreseeable risks,” evaluating “reasonably foreseeable misuse,” and evaluating risks discovered through post-market monitoring. Of note, the list of harms to mitigate under the AIA is broader than under the CAIA, and includes “known and reasonably foreseeable risks” to the “health, safety or fundamental rights” of data subjects, with special consideration given to a system that is “likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.” The system must also include the adoption of risk management measures, which should incorporate technical solutions as well as other measures to mitigate the harm. Further, a fundamental-rights impact assessment is required before deploying a high-risk AI system. The AIA also requires providers of high-risk AI systems to put in place a quality management system that is memorialized in written policies, procedures, and instructions designed to comply with the requirements of the AIA.

Accordingly, written policies addressing risk are recommended as a key component of an AI governance program.

C. Governance Program

But perhaps the most significant change with the modern AI regulations is the indication that a full program, rather than simply a policy, is required to manage AI risks. The CAIA requires implementing both a “risk management policy and program to govern the deployer’s deployment of the high-risk artificial intelligence system.” “The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate” risks. Further, rather than a point-in-time exercise, the program “must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk [AI] system.” For entities implementing a risk management framework, Colorado endorses the NIST AI RMF, ISO/IEC 42001, or another nationally recognized framework. Like the CAIA’s, the AIA’s risk management system must be iterative in nature and requires “regular systematic review and updating.”

D. Testing and Auditing

As noted, a program requires an iterative process. To gather the information needed to confirm whether the assumptions underlying the initial approval of the AI usage remain correct, the laws reference testing and auditing of those AI systems. The CAIA requires any developer of a high-risk AI system to disclose to the attorney general and all known deployers of the high-risk AI system any known or reasonably foreseeable risks that “the developer discovers through the developer’s ongoing testing and analysis that the developer’s high-risk AI system has been deployed or has caused or is reasonably likely to have caused algorithmic discrimination.” The CAIA also requires deployers to review the deployment of each high-risk AI system to ensure it does not cause algorithmic discrimination. The law further provides an affirmative defense to a developer, deployer, or (mysteriously) “[an]other person,” when they discover and cure a violation of the law as a result of “adversarial testing or red teaming.”

The AIA also requires testing for a high-risk AI system. It requires testing to identify effective risk management measures and that systems perform as intended against “prior defined metrics and probabilistic thresholds.” That testing shall be performed prior to the system being placed onto the market, if not “throughout the development process.” For those systems that require training of AI models, the AIA imposes additional testing requirements related to those data sets and data governance measures to ensure appropriate collection, processing, and robustness of data, among other things.

Accordingly, testing and auditing round out the key components of an AI governance program that can be drawn from the AIA and CAIA.

III. Filling in the Gaps with Insurance Regulations

Despite the depth of the AIA and the coverage of the CAIA, critical components of a comprehensive AI governance program are left unaddressed by those laws. In particular, those laws lack requirements regarding the structure of a decision-making process within an entity that identifies the individuals and entities that will oversee the AI system, including all internal policies and procedures.

One regulatory framework that may provide such guidance applies to the insurance industry. Multiple state insurance regulators, as well as the National Association of Insurance Commissioners, have issued guidance regarding the use of AI in certain insurance practices. In addition, perhaps unsurprisingly, Colorado enacted a statute regulating the use of AI by the insurance industry years before passing the CAIA. On July 6, 2021, Colorado’s governor signed into law the Protecting Consumers from Unfair Discrimination in Insurance Practices Act. The law regulates certain insurance practices involving defined types of data sets (“external consumer data and information sources” (ECDIS)) and algorithms or predictive models that result in unfair discrimination based on certain protected classes.

On September 21, 2023, the Colorado Division of Insurance (CDOI) issued regulations that provide further detail on how insurers within its scope may employ algorithms and predictive models that use ECDIS. The current regulations address insurers that offer individual life insurance products, but the expectation is that future regulations will address other types of products (e.g., other life products, auto, health).

The law and its accompanying regulations from the CDOI require a comprehensive AI governance program addressing “algorithmic discrimination” in certain insurance practices. Central features of the law, like those of the AIA and CAIA, are the implementation of a risk management framework, vendor requirements, consumer disclosures, testing and auditing, and reporting. But the law also highlights key components, unaddressed in the CAIA and AIA, that make an AI compliance program comprehensive and sustainable.

First, the regulations address creating an inventory of governed systems. As any company implementing AI governance practices quickly learns, this step is foundational and required as part of establishing appropriate policies and practices. The CDOI AI regulations specifically require a “[d]ocumented up-to-date inventory, including version control, of all utilized ECDIS, as well as algorithms and predictive models that use ECDIS, including a detailed description of each ECDIS, algorithm, and predictive model, their clearly stated purpose(s), and the outputs generated through their use.” Further, there must be a documented explanation of any material change in the inventory and the rationale for the change.
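
As a purely illustrative aid, and not language from the regulations, the following sketch shows one way an insurer might structure such an inventory entry, with version control and a documented rationale for material changes. All field and function names are hypothetical.

```python
# Hypothetical sketch of an inventory entry for a governed ECDIS, algorithm,
# or predictive model, with version control and documented change rationale.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    name: str                  # e.g., a predictive model used in underwriting
    kind: str                  # "ECDIS", "algorithm", or "predictive model"
    version: str               # current version of the governed asset
    purpose: str               # clearly stated purpose(s)
    outputs: str               # outputs generated through its use
    change_log: list = field(default_factory=list)

    def record_material_change(self, new_version: str, rationale: str) -> None:
        """Document a material change to the inventory and why it was made."""
        self.change_log.append(
            {"date": date.today(), "from": self.version,
             "to": new_version, "rationale": rationale}
        )
        self.version = new_version

# Example usage (hypothetical):
entry = InventoryEntry(
    name="mortality_risk_model",
    kind="predictive model",
    version="1.0",
    purpose="Underwriting individual life insurance applications",
    outputs="Mortality risk score",
)
entry.record_material_change("1.1", "Retrained on an updated ECDIS source")
```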

Second, and critically, the CDOI regulations address implementing a governance and decision-making structure. The regulations set forth a multi-tiered governance structure. At the top tier, the risk management framework “must be overseen by the board of directors or a committee of the board.” The next tier down is senior management, who have “responsibility and accountability for setting and monitoring the overall strategy and providing direction governing the use” of the governed data sets and AI systems. At the foundational tier, there must be a “[d]ocumented cross-functional . . . governance group composed of representatives from key functional areas including legal, compliance, risk management, product development, underwriting, actuarial, data science, marketing, and customer service, as applicable.” Lastly, there are to be “assigned roles and responsibilities . . . for the design, development, testing, deployment, use, and ongoing monitoring” of the governed data sets and AI systems. Additionally, policies and procedures must include “an ongoing internal supervision and training program for relevant personnel on the responsible and compliant use of ” governed systems and data.

Third, though the other AI laws reference vendor requirements, the CDOI regulations go one step further and require a “[d]ocumented description of the process used for selecting external resources including third party vendors.” A vendor management program typically goes a few steps further, but a written policy is a critical piece, especially when paired with an internal classification process and either a vendor addendum or questionnaire.

Although the CDOI AI regulations apply to a narrow set of companies in one corner of the United States, they provide needed guidance to round out a comprehensive compliance program.

IV. Best-in-Class AI Governance Programs

Although a one-size-fits-all program is not viable given the wide variety of use cases, industries, levels of sophistication, and technologies available in the AI space, practitioners can draw from the requirements set forth in these laws a recommended set of components for a best-in-class, comprehensive AI governance program. To wit, a comprehensive program will typically include the following components (a brief illustrative sketch follows the list):

  • First, create an inventory of uses of AI systems and establish processes to keep that inventory updated for changes in the usage of AI systems.
  • Second, establish a risk management framework that sets forth the entity’s structure for evaluating the risks and rewards for particular AI usage.
  • Third, establish written policies and procedures that set forth the ground rules for using existing AI and evaluating new usage of AI.
  • Fourth, implement ongoing testing and auditing of AI to monitor whether the assumptions underlying the initial approval of AI usage remain accurate.
  • Fifth, manage third-party vendors who supply AI systems and components as a critical aspect of an AI governance program.
  • And lastly, establish a decision-making structure within the entity that identifies the individuals and other parties that will provide oversight of the internal policies and procedures.
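
As an illustrative summary only, the six components above could be tracked as a simple checklist. The component labels and helper function below are hypothetical and are not drawn from any of the laws discussed.

```python
# Hypothetical checklist of the governance program components listed above.

GOVERNANCE_COMPONENTS = [
    "AI system inventory, kept current as usage changes",
    "Risk management framework (e.g., NIST AI RMF or ISO/IEC 42001)",
    "Written policies and procedures for existing and new AI uses",
    "Ongoing testing and auditing of initial approval assumptions",
    "Third-party vendor management for AI systems and components",
    "Decision-making and oversight structure for policies and procedures",
]

def remaining_components(implemented: set) -> list:
    """Return the components an organization has not yet implemented."""
    return [c for c in GOVERNANCE_COMPONENTS if c not in implemented]

# Example: an organization that has only an inventory and written policies.
print(remaining_components({GOVERNANCE_COMPONENTS[0], GOVERNANCE_COMPONENTS[2]}))
```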

By implementing each of these components as part of an AI governance program, companies will be well positioned to remain agile as AI continues to expand its reach within the corporate world.

V. Conclusion

As the usage of AI in its varying forms rapidly changes, legislators are trying (and some might say failing) to keep pace. Companies that implement a mere compliance strategy, checking the box for applicable requirements as laws become effective, will undoubtedly fall behind. A more productive approach is to use best practices drawn from experience and from laws that apply across industries and jurisdictions to build a sustainable AI governance program that, with minor tweaking, will withstand the changes dictated by future AI legislation as it is passed.

The author wishes to thank associate Mackenzie Cannon and law clerk Ayesha Akhtar for their assistance with this survey.
