
The Business Lawyer

Winter 2023/2024 | Volume 79, Issue 1

Year in Review for AI Governance and Regulation

Andrew Pery and Michael Scott Simon

Summary

  • In 2022 and early 2023, we witnessed unprecedented advances in Artificial Intelligence (AI) technology, particularly the transformative impact of generative AI.
  • The development of the law governing AI largely lagged in 2022, with only limited concrete developments and substantial disappointments.
  • Key positive developments included the US AI Bill of Rights, the NIST AI Risk Management Framework, the New York City AEDT law, and continued efforts by certain federal agencies, especially the FTC, to find effective market enforcement mechanisms.
  • While 2022 was something of a transition year for AI and the Law, 2023 has been tumultuous, and we seem destined for even more challenges in 2024.

Unprecedented AI Technological Developments

In 2022 and early 2023, we witnessed unprecedented advances in Artificial Intelligence (AI) technology, providing opportunities and posing dangers. No advance is more consequential than the transformative impact of generative AI, which some experts believe has the potential to replicate human cognitive intelligence. Generative AI has been adopted far faster than any other disruptive technology: following its launch in 2022, ChatGPT needed a mere five days to reach one million users, compared to two-and-a-half months for Instagram and ten months for Facebook.

Despite the rapid progress of AI, many challenges remain. A propensity for bias underlies the risks posed by generative AI. Generative AI is defined, at least by ChatGPT, as:

artificial intelligence that can autonomously create original content, such as images, text, or music, based on patterns and examples it has learned from existing data. This marks a departure from earlier forms of AI that primarily focused on rule-based or predefined responses, enabling the generation of novel and diverse outputs.

Its utility notwithstanding, Sam Altman, CEO of OpenAI, the company behind ChatGPT, acknowledged the inherent dangers of generative AI: “If this technology goes wrong, it can go quite wrong.” Significant concerns are evident relating to the trustworthiness and security of AI; “large language models . . . , such as ChatGPT, . . . can discriminate unfairly and perpetuate stereotypes and social biases, use toxic language (for instance inciting hate or violence), present a risk for personal and sensitive information, provide false or misleading information, increase the efficacy of disinformation campaigns, and cause a range of human-computer interaction harms.”

The legal profession also began to express concerns about generative AI. Articles, such as the alarmingly titled Will ChatGPT Make Lawyers Obsolete? (Hint: Be Afraid), began to stir concerns about our continued role as the sole interpreters of the law and drafters of legal documents—concerns that continue into 2023 and likely well beyond.

Calls for AI regulation predate the technological advances of generative AI. Its emergence heightened the sense of urgency, necessitating the introduction of meaningful guardrails that protect against its harmful impacts. Affected parties acknowledge the need for increased collaboration between industry and regulators to curtail its potential for harm. AI governance challenges span socio-economic and legal boundaries. AI has become pervasive, impacting virtually every facet of our lives, and is forecast to contribute nearly $16 trillion to the global economy by the end of the decade.

However, the development of the law governing AI lagged in 2022, with only limited concrete developments and substantial disappointments, including delayed effective dates for certain newly enacted laws and stymied legislative bills. In May 2023, this sense of urgency was evident at the G7 summit in Hiroshima, where the leaders’ communique emphasized the need for regulation of AI:

We recognize that, while rapid technological change has been strengthening societies and economies, the international governance of new digital technologies has not necessarily kept pace. As the pace of technological evolution accelerates, we affirm the importance to address common governance challenges and to identify potential gaps and fragmentation in global technology governance.

There Was Limited AI Regulation in 2022, But 2023 May Be the Beginning of a Whole New Era

AI system capabilities grew at an incredible rate in 2022, but this was still barely reflected in the law, in terms of an ability or even willingness to govern and regulate it. In that way, 2022 was something of a transition year; there was some movement toward greater regulation of AI, but many of those developments did not become fully enabled until 2023. The days when lawyers can ignore the need to regulate AI are clearly waning. Legal certainty is becoming an imperative—“The question business leaders should be focused on at this moment . . . is not how or even when AI will be regulated, but by whom.” “Whether Congress, the European Commission, China, or even U.S. states or courts take the lead will determine both the speed and trajectory of AI’s transformation of the global economy, potentially protecting some industries or limiting the ability of all companies to use the technology to interact directly with consumers.”

Below are the developments worth noting for the survey year regarding AI governance and regulation in the European Union, the United States, and Canada.

The European Union

In last year’s survey, Regulation of Artificial Intelligence, we reviewed the EU’s proposed Artificial Intelligence Act (AIA), which was introduced on April 21, 2021. To summarize, the EU AIA is a game-changer, as it is a comprehensive risk-based regulation that would impose prescriptive obligations on providers of AI systems for specific categories of prohibited and high-risk AI systems, all backed up by onerous enforcement mechanisms and administrative powers.

Given advances in the state of the art of AI systems, the EU Parliament drafted for adoption a number of important amendments in its compromise text. First, the AIA’s original definition of “AI system” arguably was too broad. The compromise text aligns the definition of “AI system” with the narrower OECD definition, which focuses on AI systems whose outputs may adversely impact the health, safety, privacy, and economic rights of users, rather than on automated decision-making, the criterion that originally brought AI systems under the purview of the AIA.

Second, the list of prohibited AI systems was broadened to include “real-time” remote biometric identification systems in publicly accessible spaces; biometric categorization systems using sensitive characteristics; predictive policing systems; emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

Third, the AIA originally provided that an AI system could be classified as “high risk” (1) based on two specified conditions, or (2) by the Commission’s designation (in Annex III) based on specified criteria. High-risk AI systems would be required to be strictly tested, monitored, and documented. The compromise text would revise the AIA’s original text related to Annex III: “AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it shall be considered high-risk if it poses a significant risk of harm to the environment.” The categories of high-risk systems have been expanded to include biometric identification and categorization; critical infrastructure; education, vocational training, and employment; law enforcement, immigration, and the administration of justice; and democratic processes. To ensure compliance with this enhanced obligation, providers will be required to conduct AI impact assessments, continuously monitor their performance, and remediate deviations from the intended purposes of AI system outputs.

Fourth, as the EU’s original drafts of the AIA did not address generative AI, which had not yet proliferated, the compromise text now addresses generative AI under the term “foundation models.” The compromise text includes provisions that impose specific obligations on providers of foundation models, including the obligation to “design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development.” Moreover, the provider of a foundation model must mitigate reasonably foreseeable risks to health, safety, and fundamental rights prior to releasing it for commercial use. Furthermore, any provider of a foundation model must develop detailed documentation and instructions as to its behavior and outputs. A provider of a foundation model will be required to label and inform users that content is created by generative AI applications. In addition, adherence to the European Commission’s 2022 Code of Practice on Disinformation, which was executed by thirty-four signatories, provides the basis for harmonized commitments to mitigate potential adverse outcomes arising from the use of generative AI applications.

Finally, it is important to note that a key element of the EU AIA is delegation of conformity assessments relating to high-risk AI systems to “notified bodies.” “Notified bodies” are certified auditors designated by national authorities to conduct independent assessment of AI systems, based on sanctioned AI risk management frameworks.

The lack of follow-up on promises for standards made in prior regulations, such as the GDPR, has left critical voids in enforcement and compliance. Concerns have arisen that this problem will be repeated with the AIA, particularly because it would create an even wider gap for operationalizing AIA standards than with the GDPR. While the EU has requested Joint Technical Committee (“JTC”) 21 of CEN/CENELEC, the leading EU standards bodies, to develop a formal risk management standard, availability is not expected until at least 2025. However, at least one group, ForHumanity, has already prepared certification training based upon JTC 21 requirements that will be ready in time for the approval of the AIA. As discussed below, the AI Risk Management Framework (AI RMF 1.0)—launched by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST)—may also provide additional necessary standards for operationalizing the EU’s AIA.

United States

In terms of federal regulation of AI, 2022 was little different from prior years on the legislative front: a number of AI-focused bills were introduced, but none approached enactment. The Biden Administration did introduce the Blueprint for an AI Bill of Rights, which has been described by its primary author as “a framework about how laws that we have on the books might be enforced, about how existing rule-making authorities might be used.” The AI Bill of Rights focuses on five principles, accompanied by detailed technical guidelines regarding their implementation:

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy
  4. Notice and Explanation
  5. Human Alternatives, Consideration, and Fallback.

While the AI Bill of Rights does not constitute a formally promulgated set of rules, it does represent the culmination of a long process that included consultation with stakeholders throughout the government, industry, and society, and a “major step” in U.S. government AI policy toward understanding AI governance as a civil rights issue.

While the AI Bill of Rights is a high-level, highly aspirational document, the AI RMF 1.0 provides a practical, on-the-ground guide to operationalizing any legislative bill inspired by the AI Bill of Rights. The AI RMF 1.0 could also inform the standard for much of the compliance work to be done under the AIA. The AI RMF 1.0 is a comprehensive framework “designed to equip organizations and individuals … with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.”

The AI RMF 1.0 is centered around four “core” functions: Govern, Map, Measure, and Manage.

Each of these core functions has a number of categories and subcategories, including six categories for Govern, the function most likely to be of interest to a legal audience. Of particular interest will likely be the first category: “GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.” That category and its related subcategories appear below.

Categories and Subcategories for the GOVERN Function

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

  • GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.
  • GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
  • GOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization’s risk tolerance.
  • GOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.
  • GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities clearly defined, including determining the frequency of periodic review.
  • GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
  • GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.

The sixth category of the Govern function will also likely be of particular importance to legal teams, whether in-house or outside counsel to companies looking to implement AI licensing agreements in accordance with the framework: “Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.” That category and its related subcategories appear below.

Categories and Subcategories for the GOVERN Function

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

  • GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party’s intellectual property or other rights.
  • GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

Since the release of the AI RMF 1.0, NIST recognized that generative AI may present additional challenges, so it formed public working groups to address these challenges and update the AI RMF 1.0 accordingly.

On the regulatory side, as we reported last year, the Federal Trade Commission (FTC) has been active in terms of regulating AI. Lacking any specific authority over AI, the FTC has used section 5 of the FTC Act, as well as specific statutes for which it is the enforcer, such as the Children’s Online Privacy Protection Act (COPPA), as the lever to force violators of privacy laws to delete AI models built from improperly collected data. Following up on numerous warnings issued in 2021, the FTC, for the third time, carried out what has been called “death for algorithms.” In a March 2022 settlement, WW International, Inc. (formerly known as Weight Watchers) was enjoined from collecting, disclosing, using, or benefitting from children’s personal information collected without parental consent, which it allegedly had been doing in violation of COPPA, and was required to delete any algorithms derived from that improperly collected data.

The FTC began the process to move towards actual authority for AI governance with its advance notice of proposed rulemaking (ANPR) regarding a new Trade Regulation Rule on Commercial Surveillance and Data Security. In what has been called “one of the most ambitious rulemaking processes in agency history,” the FTC seeks to remake much of the online portions of the U.S. economy, including proposing new requirements on data minimization, data security, algorithmic discrimination, and ethical AI. The ANPR poses dozens of questions on many subjects, including automated decision-making, such as:

  • How prevalent is algorithmic error?
  • To what extent is algorithmic error inevitable? If it is inevitable, what are the benefits and costs of allowing companies to employ automated decision-making systems in critical areas, such as housing, credit, and employment?
  • To what extent can companies mitigate algorithmic error in the absence of new trade regulation rules?
  • What are the best ways to measure algorithmic error?
  • To what extent, if at all, should new rules require companies to take specific steps to prevent algorithmic errors?

The FTC’s ANPR is merely the beginning of a very long process. While most federal agencies can promulgate new rules in a year or two while complying with the Administrative Procedure Act, the FTC—when not specifically directed by Congress to promulgate a rule—must comply with the Magnuson-Moss rule-making process, which, on average, takes almost six years to complete. Please stay tuned for further developments in our 2028 update.

At the Equal Employment Opportunity Commission (EEOC), Commissioner Keith Sonderling led the charge in raising awareness of the dangers of algorithmic bias. On May 12, 2022, the EEOC released its first substantive guidance: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (ADA Guidance). The ADA Guidance lists three ways in which AI systems could violate the ADA:

  • The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.
  • The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability.
  • The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.

The ADA Guidance also makes it clear that an employer can be held accountable under the ADA for the use of algorithmic decision-making tools that are designed or administered by third parties or vendors. The ADA Guidance ends with a series of recommendations for employers on avoiding ADA violations: provide reasonable accommodations; minimize the chances that algorithmic decision-making tools will, intentionally or unintentionally, disadvantage or assign poor performance ratings to individuals with disabilities; and perform proper due diligence before purchasing such tools.

Finally, in September 2022, the FDA issued final guidance on Clinical Decision Support Software, which came three years after its draft guidance and almost five years beyond Congress’ mandate.

In terms of state and local laws, one notable advancement was the introduction of the first set of regulations implementing the Colorado Privacy Act (CPA), which include specific requirements around profiling and automated decision-making. The new regulations cover “decisions that produce legal or similarly significant effects concerning a consumer,” defined broadly as “a decision that results in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health-care services, or access to essential goods or services.” The CPA requires that any organization engaging in such profiling assess whether that profiling presents a “reasonably foreseeable risk” of:

  1. Unfair or deceptive treatment of, or unlawful disparate impact on, consumers;
  2. Financial or physical injury to consumers;
  3. A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or
  4. Other substantial injury to consumers.

If the organization finds such a risk, it must not only complete the ordinary, eighteen-factor CPA data privacy assessment before processing any personal data, but also answer an additional dozen questions specific to profiling risks. Colorado is the first state to regulate based upon distinct levels of human involvement in algorithmic decision-making, creating three categories for such systems:

  1. Human Involved Automated Processing, which means “the Automated Processing of Personal Data where human involvement in the Processing includes meaningful consideration of available data used in the Processing as well as the authority to change or influence the outcome of the Processing.”
  2. Human Reviewed Automated Processing, which means “the Automated Processing of Personal Data where a human reviews the Processing, but the level of human review does not rise to the level required for Human Involved Automated Processing. Reviewing the output of the Automated Processing with no meaningful consideration does not rise to the level of Human Involved Automated Processing.”
  3. Solely Automated Processing, which means “the Automated Processing of Personal Data with no human review, oversight, involvement, or intervention.”

Organizations that engage in the last two categories of processing, which involve the least amount of human involvement, must automatically grant any request to opt out. Those that use the first category may avoid this requirement only if they provide, within their public privacy policies, a “plain language explanation of the logic used in the Profiling process.” Finally, organizations must be prepared to hand over their regular and profiling-specific data privacy assessments to the Colorado Attorney General within thirty days of demand.

In December 2022, New York City proposed rules to implement Local Law 144, which was passed in 2021, became effective on July 5, 2023, and requires bias audits of automated employment decision tools (AEDTs). The rules would require any employer that uses an AEDT to make its bias audit public, to provide notice of that use to New York City applicants, and to provide an alternative method for applying. The audit would have to be conducted by an independent auditor and provide the data necessary for a disparate impact assessment based upon the EEOC framework.

The rules, however, would require a bias audit only if the AEDT was:

  • Solely responsible for making the employment decision;
  • Weighted more heavily than other factors; or
  • Used to override a decision made by humans.

Unfortunately, as experts have pointed out, this restriction would severely limit the application of Local Law 144, to the extent that some have said that it fatally weakened an already weak law. Other experts carefully detailed how the law’s scoring formula may give a blatantly biased algorithm a passing grade, may make some unbiased models seem biased, and may utterly fail in certain edge cases.
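
To make the underlying calculation concrete, the sketch below (in Python) computes the kind of impact ratios a disparate impact assessment contemplates, comparing each category’s selection rate against the most selected category and the familiar EEOC “four-fifths” benchmark. It is a simplified illustration only, not the methodology prescribed by Local Law 144 or its rules; the category names, applicant counts, selection numbers, and the 0.8 threshold are assumptions chosen for the example.

    # Hypothetical illustration of an impact-ratio calculation for an AEDT bias audit.
    # The applicant data and the 0.8 "four-fifths" benchmark are assumptions for this
    # example, not a methodology prescribed by Local Law 144 or the EEOC.

    # (category, number of applicants, number selected by the tool)
    applicant_pool = [
        ("Category A", 400, 120),
        ("Category B", 300, 60),
        ("Category C", 150, 45),
    ]

    # Selection rate per category: selected / applicants.
    selection_rates = {
        category: selected / applicants
        for category, applicants, selected in applicant_pool
    }

    # Impact ratio: a category's selection rate divided by the highest selection rate.
    highest_rate = max(selection_rates.values())
    impact_ratios = {
        category: rate / highest_rate
        for category, rate in selection_rates.items()
    }

    FOUR_FIFTHS_THRESHOLD = 0.8  # common rule-of-thumb benchmark for adverse impact

    for category, ratio in impact_ratios.items():
        flag = "potential adverse impact" if ratio < FOUR_FIFTHS_THRESHOLD else "within benchmark"
        print(
            f"{category}: selection rate {selection_rates[category]:.2f}, "
            f"impact ratio {ratio:.2f} ({flag})"
        )

On these assumed numbers, Category B’s impact ratio falls below the four-fifths benchmark, the kind of result an independent auditor would surface and an employer would then have to disclose.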

Canada

On June 16, 2022, the Canadian Federal Government introduced Bill C-27, which included three principal parts. The bill would comprehensively overhaul Canada’s existing privacy legislation, establish a tribunal to adjudicate and enforce privacy breaches, and create Canada’s first regulation of AI (Artificial Intelligence and Data Act or AIDA).

While the EU AIA is a prescriptive regulation, the AIDA is principles-based and would require providers of AI systems to undertake impact assessments to mitigate potential harms, continuously monitor their performance, and comply with public disclosure obligations. Unlike the EU AIA, the AIDA does not include a risk-based classification of AI systems, does not mandate conformance testing before placing AI systems on the market, and omits any detailed definition of “high-impact systems,” which would be addressed in future regulations. Nor does the AIDA prohibit certain classes of AI systems.

The AIDA provides for administrative fines in the event of non-compliance, which may be as high as CAN$10 million or three percent of annual gross global revenue. In addition, the bill criminalizes any AI developer’s use of unlawfully obtained “personal information” and any AI output that results in serious physical or psychological harm. Contravention of these provisions may result in, for a business, a fine of up to CAN$25 million or five percent of its annual gross global revenue, or, for a natural person, imprisonment up to five years.

Conclusion

2022 started with a quiet yet steady drive toward a consensus that AI should be regulated by law. It ended with the introduction and explosive growth of generative AI systems, like ChatGPT, that finally made it crystal clear that such regulation was necessary. That trend continued well into 2023. Who should regulate AI, and how, remains to be determined. In terms of regulating AI, 2022 might seem somewhat boring; 2023, however, has been tumultuous, and we seem destined for even more challenges in 2024. Perhaps we will look back on 2022 somewhat wistfully.
