Since the release of AI RMF 1.0, NIST has recognized that generative AI may present additional challenges, and it has formed public working groups to address those challenges and update the AI RMF 1.0 accordingly.
On the regulatory side, as we reported last year, the Federal Trade Commission (FTC) has been active in regulating AI. Lacking any specific statutory authority over AI, the FTC has used Section 5 of the FTC Act, as well as specific statutes for which it is the enforcer, such as the Children’s Online Privacy Protection Act (COPPA), as the lever to force violators of privacy laws to delete AI models built from improperly collected data. Following up on numerous warnings issued in 2021, the FTC, for the third time, carried out what has been called “death for algorithms.” In a March 2022 settlement, WW International, Inc. (formerly known as Weight Watchers) was enjoined from collecting, disclosing, using, or benefitting from children’s personal information collected without parental consent, which it allegedly had been doing in violation of COPPA.
The FTC began the process of moving towards actual authority for AI governance with its advance notice of proposed rulemaking (ANPR) regarding a new Trade Regulation Rule on Commercial Surveillance and Data Security. In what has been called “one of the most ambitious rulemaking processes in agency history,” the FTC seeks to remake much of the online portion of the U.S. economy, including by proposing new requirements on data minimization, data security, algorithmic discrimination, and ethical AI. The ANPR poses dozens of questions on many subjects, including automated decision-making, such as:
- How prevalent is algorithmic error?
- To what extent is algorithmic error inevitable? If it is inevitable, what are the benefits and costs of allowing companies to employ automated decision-making systems in critical areas, such as housing, credit, and employment?
- To what extent can companies mitigate algorithmic error in the absence of new trade regulation rules?
- What are the best ways to measure algorithmic error?
- To what extent, if at all, should new rules require companies to take specific steps to prevent algorithmic errors?
The FTC’s ANPR is merely the beginning of a very long process. While most federal agencies can promulgate new rules within a year or two under the Administrative Procedure Act, the FTC, when not specifically directed by Congress to promulgate a rule, must follow the Magnuson-Moss rulemaking process, which takes, on average, almost six years to complete. Please stay tuned for further developments in our 2028 update.
The Equal Employment Opportunity Commission (EEOC) saw Commissioner Keith Sonderling lead the charge in spreading the word about the dangers of algorithmic bias. On May 12, 2022, the EEOC released its first substantive guidance: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (ADA Guidance). The ADA Guidance lists three ways in which AI systems could violate the ADA:
- The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.
- The employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability.
- The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restrictions on disability-related inquiries and medical examinations.
The ADA Guidance also makes clear that an employer can be held accountable under the ADA for the use of algorithmic decision-making tools that are designed or administered by third parties or vendors. The ADA Guidance ends with a series of recommendations for employers on avoiding ADA violations: provide reasonable accommodations; minimize the chances that algorithmic decision-making tools will, intentionally or unintentionally, disadvantage or assign poor performance ratings to individuals with disabilities; and perform proper due diligence before purchasing such tools.
Finally, in September 2022, the FDA issued final guidance on Clinical Decision Support Software, which came three years after its draft guidance and almost five years beyond Congress’ mandate.
In terms of state and local laws, one notable development was the introduction of the first set of regulations implementing the Colorado Privacy Act (CPA), which feature specific requirements around profiling and automated decision-making. The new regulations cover “decisions that produce legal or similarly significant effects concerning a consumer,” a phrase defined broadly as “a decision that results in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health-care services, or access to essential goods or services.” The CPA requires that any organization engaging in such profiling assess whether that profiling presents a “reasonably foreseeable risk” of:
1. Unfair or deceptive treatment of, or unlawful disparate impact on, consumers;
2. Financial or physical injury to consumers;
3. A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or
4. Other substantial injury to consumers.
If the organization finds such a risk, it must not only complete the ordinary, eighteen-factor CPA data privacy assessment before processing any personal data, but also answer an additional dozen questions specific to profiling risks. Colorado is the first state to regulate based upon distinct levels of human involvement in algorithmic decision-making, creating three categories for such systems:
1. Human Involved Automated Processing, which means “the Automated Processing of Personal Data where human involvement in the Processing includes meaningful consideration of available data used in the Processing as well as the authority to change or influence the outcome of the Processing.”
2. Human Reviewed Automated Processing, which means “the Automated Processing of Personal Data where a human reviews the Processing, but the level of human review does not rise to the level required for Human Involved Automated Processing. Reviewing the output of the Automated Processing with no meaningful consideration does not rise to the level of Human Involved Automated Processing.”
3. Solely Automated Processing, which means “the Automated Processing of Personal Data with no human review, oversight, involvement, or intervention.”
Organizations that engage in the last two categories of processing, which involve the least human involvement, must automatically grant any request to opt out. Those that use the first category may avoid this requirement only if they provide, within their public privacy policies, a “plain language explanation of the logic used in the Profiling process.” Finally, organizations must be prepared to hand over their regular and profiling-specific data privacy assessments to the Colorado Attorney General within thirty days of demand.
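For readers who want to see the structure of this rule at a glance, the following is a minimal Python sketch mapping the three Colorado processing categories to the opt-out obligation described above; the category names track the regulations, but the function, its parameters, and its logic are purely illustrative simplifications, not a statement of the law.

```python
from enum import Enum

class ProcessingCategory(Enum):
    # Category names track the Colorado regulations; everything else in
    # this sketch is an illustrative simplification.
    HUMAN_INVOLVED = "Human Involved Automated Processing"
    HUMAN_REVIEWED = "Human Reviewed Automated Processing"
    SOLELY_AUTOMATED = "Solely Automated Processing"

def must_grant_opt_out(category: ProcessingCategory,
                       policy_explains_profiling_logic: bool) -> bool:
    """Rough reading of the rule: the two categories with the least human
    involvement must honor opt-out requests automatically, while Human
    Involved Automated Processing may avoid the requirement only if the
    public privacy policy plainly explains the profiling logic."""
    if category in (ProcessingCategory.HUMAN_REVIEWED,
                    ProcessingCategory.SOLELY_AUTOMATED):
        return True
    return not policy_explains_profiling_logic

# Example: a controller using Human Involved Automated Processing whose
# privacy policy explains the profiling logic could decline the opt-out.
print(must_grant_opt_out(ProcessingCategory.HUMAN_INVOLVED, True))     # False
print(must_grant_opt_out(ProcessingCategory.SOLELY_AUTOMATED, False))  # True
```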
In December 2022, New York City proposed rules to implement Local Law 144, which was passed in 2021, became effective on July 5, 2023, and requires bias audits of automated employment decision tools (AEDTs). The rules would require any employer that uses an AEDT to make its bias audit public, to provide notice of that use to New York City applicants, and to provide an alternative method for applying. The audit would have to be conducted by an independent auditor and provide the data necessary for a disparate impact assessment based upon the EEOC framework.
The rules, however, would require a bias audit only if the AEDT was:
- Solely responsible for making the employment decision;
- Weighted more heavily than other factors; or
- Used to override a decision made by humans.
Unfortunately, as experts have pointed out, this restriction would severely limit the application of Local Law 144, to the extent that some have said it fatally weakened an already weak law. Other experts have carefully detailed how the law’s scoring formula may give a blatantly biased algorithm a passing grade, may make some unbiased models seem biased, and may utterly fail in certain edge cases.
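To give a concrete sense of the kind of calculation a Local Law 144 bias audit is meant to surface, the Python sketch below computes selection rates and impact ratios in the spirit of the EEOC’s four-fifths framework; the demographic categories and counts are hypothetical, and the final rules’ actual scoring requirements are more detailed than this simplification.

```python
# Hypothetical impact-ratio calculation in the spirit of the EEOC
# "four-fifths" framework; all categories and counts are made up.
applicants = {
    "category_a": {"selected": 120, "total": 400},  # 30.0% selection rate
    "category_b": {"selected": 45,  "total": 200},  # 22.5% selection rate
    "category_c": {"selected": 10,  "total": 80},   # 12.5% selection rate
}

# Selection rate = selected / total applicants in each category.
rates = {c: v["selected"] / v["total"] for c, v in applicants.items()}
highest_rate = max(rates.values())

# Impact ratio = a category's rate divided by the highest category's rate;
# a ratio below 0.8 is the traditional four-fifths red flag.
for category, rate in rates.items():
    ratio = rate / highest_rate
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{category}: rate={rate:.1%}, impact ratio={ratio:.2f} ({flag})")
```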
Canada
On June 16, 2022, the Canadian Federal Government introduced Bill C-27, which included three principal parts. The bill would comprehensively overhaul Canada’s existing privacy legislation, establish a tribunal to adjudicate and enforce penalties for privacy breaches, and create Canada’s first regulation of AI (the Artificial Intelligence and Data Act, or AIDA).
While the EU AIA is a prescriptive regulation, the AIDA is principles-based and would require providers of AI systems to undertake impact assessments to mitigate potential harms, continuously monitor their performance, and comply with public disclosure obligations. Unlike the EU AIA, the AIDA does not include a risk-based classification of AI systems, does not mandate conformance testing before AI systems are placed on the market, and does not prohibit certain classes of AI systems. It also omits any detailed definition of “high-impact systems,” which would be addressed in future regulations.
The AIDA provides for administrative fines in the event of non-compliance, which may be as high as CAN$10 million or three percent of annual gross global revenue, whichever is greater. In addition, the bill criminalizes any AI developer’s use of unlawfully obtained “personal information” and any AI output that results in serious physical or psychological harm. Contravention of these provisions may result, for a business, in a fine of up to CAN$25 million or five percent of its annual gross global revenue, whichever is greater, or, for a natural person, imprisonment of up to five years.
Conclusion
2022 started with a quiet yet steady drive towards a consensus that AI should be regulated by law. It ended with the introduction and explosive growth of generative AI systems, like ChatGPT, that finally made it crystal clear that such regulation was necessary, a trend that continued well into 2023. Who should regulate AI, and how, remains to be determined. In terms of regulating AI, 2022 might seem somewhat boring; 2023, however, has been tumultuous, and we seem destined for even more challenges in 2024. Perhaps we will look back on 2022 somewhat wistfully.