III. The EU Is Next Out of the Gate
On April 21, 2021, the European Commission released its much-anticipated draft Artificial Intelligence Act (“AIA”), the goal of which is to balance the socio-economic benefits of AI against “new risks or negative consequences for individuals or the society.” The AIA is predicated on a risk-based model: it recognizes the socio-economic benefits of AI while providing legal certainty through prescriptive obligations for categories of AI systems that pose harm, reinforced by meaningful administrative fines in the event of non-compliance. While some amendments to the AIA have been proposed, mainly to better focus the definition of “AI,” there seems to be little doubt that the law will pass in some form.
The AIA provides for a broad jurisdictional scope that encompasses both AI providers and users. But the list of covered entities does not stop there: the AIA also covers distributors, importers, operators, manufacturers, and even the broad catch-all of any “other third part[ies]” who place AI systems on the market, make them available on the market, or put them into service. The AIA will apply to providers that place AI systems on the market or put them into service in the EU, or whose AI systems produce output that is used in the EU, irrespective of whether the providers are established within the EU or outside of it. The applicable rules depend upon a risk-based classification of AI systems: prohibited AI (e.g., AI systems that exploit vulnerable groups), high-risk AI (e.g., AI systems used for recruitment, creditworthiness, and eligibility for social services), and other AI systems (e.g., chatbots and emotion detection systems).
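To make the tiered structure concrete, the sketch below (in Python) encodes the three categories as a simple lookup an organization might use in an internal compliance triage exercise. The tier labels and use-case mappings are illustrative assumptions drawn from the examples above, not the AIA's operative definitions.

```python
# Illustrative sketch only: tier names and mappings are assumptions drawn from
# the examples discussed above, not the AIA's operative text.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g., systems exploiting vulnerable groups
    HIGH_RISK = "high_risk"    # e.g., recruitment, creditworthiness, benefits eligibility
    OTHER = "other"            # e.g., chatbots, emotion detection

# Hypothetical use-case-to-tier mapping for a first-pass compliance triage.
EXAMPLE_TIERS = {
    "exploits_vulnerable_group": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH_RISK,
    "credit_scoring": RiskTier.HIGH_RISK,
    "social_benefits_eligibility": RiskTier.HIGH_RISK,
    "customer_service_chatbot": RiskTier.OTHER,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case; unknown cases default to OTHER
    here only for simplicity, not as a compliance position."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.OTHER)

print(triage("credit_scoring").value)  # prints "high_risk"
```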
The draft AIA imposes rigorous compliance obligations upon providers with regard to high-risk AI systems, including conformity assessments and certification before placing the AI on the market. High-risk systems are defined in Article 6 of the draft and a listing of them is included in Annex III:
- Biometric identification and categorization of natural persons;
- Management and operation of critical infrastructure;
- Education and vocational training;
- Employment, workers management, and access to self-employment;
- Access to and enjoyment of essential private services and public services and benefits;
- Law enforcement;
- Migration, asylum, and border control management; and
- Administration of justice and democratic processes.
According to the proposal, AI providers would be required to meet additional obligations, which include implementing:
- A risk management system;
- Data governance;
- Technical documentation;
- Record keeping;
- Transparency and provision of information to users;
- Human oversight; and
- Responsible data science principles of data accuracy, confidentiality, integrity, fairness, robustness, and cybersecurity.
The AIA would provide for an enforcement scheme with penalties that make those under the General Data Protection Regulation look small by comparison:
- € 10 million or 2 percent of the total worldwide annual turnover for the supply of incorrect or incomplete information,
- € 20 million or 4 percent of the total worldwide annual turnover for non-compliance with the regulation’s requirements, and
- € 30 million or 6 percent of the total worldwide annual turnover for violations of prohibited practices.
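Under the draft's penalty provisions, the higher of the fixed amount and the turnover-based percentage applies. As a rough, purely hypothetical illustration of the exposure involved (the turnover figure below is invented), the calculation works as follows:

```python
# Hypothetical illustration of the draft AIA's penalty tiers; under the draft,
# the higher of the fixed cap and the turnover-based cap applies.
FINE_TIERS = {
    "incorrect_or_incomplete_information": (10_000_000, 0.02),  # €10M or 2%
    "non_compliance_with_requirements":    (20_000_000, 0.04),  # €20M or 4%
    "prohibited_practice":                 (30_000_000, 0.06),  # €30M or 6%
}

def maximum_fine(violation: str, worldwide_annual_turnover_eur: float) -> float:
    fixed_cap, pct_cap = FINE_TIERS[violation]
    return max(fixed_cap, pct_cap * worldwide_annual_turnover_eur)

# A company with €2 billion in worldwide annual turnover engaging in a
# prohibited practice would face exposure of up to €120 million (6% of
# turnover), well above that tier's €30 million fixed amount.
print(f"€{maximum_fine('prohibited_practice', 2_000_000_000):,.0f}")
```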
Yet the biggest “hammer” the authorities could wield may not be the fines themselves: the AIA would empower enforcement authorities to force off the market any AI system that poses a risk “to the health or safety or to the protection of fundamental rights of persons.”
IV. Federal Agencies’ Sectoral Approach to Regulating AI
While the United States lags behind China and the EU in AI regulatory adoption, it is nonetheless taking meaningful steps, albeit largely on a sectoral basis.
A. Legislative Initiatives
On the legislative front, Senator Ron Wyden (D-Or.) introduced the Algorithmic Accountability Act of 2022 (“AAA”) on February 3, 2022, a renewed version of a similar bill he introduced in 2019 that failed in committee. The AAA would authorize the Federal Trade Commission (“FTC”) to create regulations requiring AI impact assessments on fairness, accuracy, bias, and discrimination. Meanwhile, on June 3, 2022, a bipartisan group of politicians released a discussion draft of the American Data Privacy and Protection Act (“ADPPA”), which was later formally introduced as a bill in the House of Representatives. Section 207 of the ADPPA would address algorithmic bias by requiring large data holders that use algorithms to submit annual impact assessments to the FTC and to consult with independent auditors on those assessments. Unfortunately, as of early September, it appears that the ADPPA will never reach a floor vote, and most experts likewise give the AAA low odds of becoming law.
B. Office of Science and Technology Policy
The Biden Administration began to lay the foundation for a more cohesive AI regulatory policy framework. The White House Office of Science and Technology Policy (“OSTP”) solicited feedback as part of a process for developing a proposed AI Bill of Rights, designed to “give consumers a right to transparency and explainable AI, a technology approach that provides insight into algorithmic processes” and to serve “as the basis for regulation and legislation.” Furthermore, the OSTP and the National Science Foundation launched the National Artificial Intelligence Research Resource Task Force, with the goal of “democratiz[ing]” AI research.
C. Federal Trade Commission
The FTC has also stepped into the AI arena. On April 19, 2021, an FTC blog post titled “Aiming for truth, fairness, and equity in your company’s use of AI” set off alarm bells in the tech industry by articulating six guidelines for doing AI right—or at least so as to avoid FTC action.
In what one reporter has called “death for algorithms,” the FTC has, over the last three years, required violators of various data privacy–oriented laws under its purview not only to delete data that was improperly collected, but also to delete AI models built from that ill-gotten data. The first such order came in connection with the infamous Cambridge Analytica scandal. The next strike was the Everalbum consent order, announced in January 2021, which required respondents to delete facial recognition data that was collected and used without proper user consent, along with the machine learning models derived from that data. The third strike came in March 2022, when the FTC entered into a settlement order with WW International, Inc. (“WW,” formerly known as Weight Watchers) in connection with a weight loss app that collected personal information from children as young as eight, without parental permission, in violation of the Children’s Online Privacy Protection Act. In addition to fining WW $1.5 million, the FTC required the deletion of “any models or algorithms” created from the child-derived data.
D. Consumer Financial Protection Bureau
In October 2021, the Consumer Financial Protection Bureau (“CFPB”) made one of its first moves toward a more vigorous regulatory approach regarding FinTech, with an inquiry into the “Big Tech” payment platforms. The CFPB issued orders to six U.S. companies—Google, Apple, Facebook, Amazon, Square, and PayPal—requiring them to turn over information about their products, plans, and practices on payments, and announced that it “will also study the practices” of Chinese tech giants offering payment services. The CFPB stated that it was concerned that the consumer-spending data that are naturally collected within these platforms “can be monetized . . . to profit from behavioral targeting.”
E. U.S. Equal Employment Opportunity Commission
On October 28, 2021, the U.S. Equal Employment Opportunity Commission (“EEOC”) launched an initiative to explore the impact of AI on employment decision-making. In doing so, the EEOC continued the work begun earlier in the year by one of its commissioners, Keith Sonderling, who warned of the potential dangers of biased AI.
The EEOC released its first guidance on May 12, 2022, focusing on compliance requirements for using algorithmic decision-making tools in the context of the Americans with Disabilities Act. The EEOC also released guidance, aimed at employees, that emphasizes employers’ responsibility to undertake impact assessments to mitigate algorithmic bias and determine whether the algorithms may potentially adversely impact individuals with disabilities. The publication of the guidance was timely, as almost contemporaneously, the EEOC launched litigation against iTutorGroup, Inc. in the U.S. District Court for the Eastern District of New York, claiming that the defendant violated the Age Discrimination in Employment Act “by programming its online recruitment software to automatically reject older applicants because of their age.” In a press release, EEOC Chair Charlotte A. Burrows stated: “Age discrimination is unjust and unlawful. Even when technology automates the discrimination, the employer is still responsible. . . . This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative.”
F. U.S. Department of Housing and Urban Development
While other federal agencies are gearing up to regulate AI, the U.S. Department of Housing and Urban Development (“HUD”) is notable for what it will not be doing; that is, it will no longer “effectively encourage discrimination by algorithm,” in the words of prominent AI ethics expert Andrew Selbst.
To make sense of this story, we need to start back in 2013, when HUD formally extended the disparate impact standard of Griggs v. Duke Power Co. to FHA actions (the “2013 Rule”). In August 2019, the Trump Administration published a Notice of Proposed Rulemaking to greatly revise the 2013 Rule (the “2019 Proposed Rule”) in a way that would allow lenders to deflect disparate impact claims based on the use of an algorithm as long as the inputs were not “substitutes or close proxies” for protected characteristics and as long as a “neutral third party” either developed or certified the fairness of the system. The public response to the 2019 Proposed Rule was blistering, with tens of thousands of negative comments coming in from a wide variety of sources. Even the initial positive comments from bankers and mortgage lenders were eventually withdrawn, replaced with pleas to scrap the proposal entirely.
Fair housing advocates promptly filed suit in three jurisdictions seeking to enjoin implementation of the 2019 Proposed Rule. The first court to reach the question, in Massachusetts Fair Housing Center v. HUD, granted the injunction. These issues all became moot when the Biden Administration announced that HUD would re-examine the 2019 Proposed Rule, and in June 2021, HUD filed a Notice of Proposed Rulemaking to rescind the 2019 Proposed Rule and restore the 2013 Rule.
V. State and Local Government AI Regulatory Initiatives
Several states and major municipalities have enacted laws and ordinances in recent years to regulate AI.
In 2019, Illinois passed the Artificial Intelligence Video Interview Act (“AIVIA”), the first state law to regulate employers’ use of AI in the hiring process. In 2021, the AIVIA was amended so that any employer relying solely upon AI analysis of a video interview to decide whether to conduct a follow-up in-person interview must annually collect and report data on the racial and ethnic demographics of the interviewees.
On July 6, 2021, Colorado enacted a law to protect consumers from unfair discrimination by insurance companies. The law bars insurers from using “algorithms or predictive models” that discriminate on the basis of “race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.” The law defines “algorithm” as “a computational or machine learning process that informs human decision making in insurance practices,” and defines “predictive model” as “a process of using mathematical and computational methods that examine current and historical data sets for underlying patterns and calculate the probability of an outcome.” Moreover, the law requires the Colorado Insurance Commissioner to adopt rules aimed at creating transparency by insurers using algorithms. The rules will require insurers to provide information and an explanation to the commissioner concerning the consumer data and information used in developing and implementing algorithms, establish a risk management framework to detect discrimination by algorithms, and provide an assessment of the results of the risk management framework. The commissioner may conduct investigations into how insurers use algorithms and predictive models.
Comprehensive privacy laws in California, Virginia, Colorado, and Connecticut contain language on “profiling” that could implicate AI governance issues. In California, the California Privacy Rights Act (“CPRA”), enacted when voters passed Proposition 24 in 2020, amended the California Consumer Privacy Act (“CCPA”) to add several provisions specifically directed at profiling, certain of which do not take effect until January 1, 2023. The CPRA defines “profiling” to mean “any form of automated processing of personal information . . . to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” Though the CCPA did not originally regulate profiling, the CPRA established the California Privacy Protection Agency (“CPPA”) and charged it with promulgating regulations governing, among other things, businesses’ use of profiling. The CPPA issued the first draft of those new regulations at the end of May 2022, but specifically tabled the regulations on automated decision-making due to the complexity of the issue.
The Virginia Consumer Data Protection Act (“VCDPA”) also contains a number of provisions relating to profiling. The statute defines “profiling” to mean “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” It allows consumers to opt out of processing of their personal data for the purpose of profiling “in furtherance of decisions that produce legal or similarly significant effects concerning the consumer” and requires data protection assessments before processing personal data for “purposes of profiling” that present foreseeable risks to consumers.
Colorado’s recently enacted Privacy Act also contains provisions relating to profiling that are very similar to those found in the VCDPA, as does the Connecticut Personal Data Privacy Act.
At the local government level, on November 10, 2021, the New York City Council passed a bill, effective January 1, 2023, requiring that a bias audit be undertaken on an automated employment decision tool prior to its use, with a summary of the results made public.
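The bill's text, as summarized above, does not prescribe a particular audit methodology. One commonly used measure in bias audits, assumed here solely for illustration, is the selection-rate impact ratio across demographic groups, which the following sketch computes on invented data:

```python
# Illustrative only: invented data and group labels; the selection-rate impact
# ratio is one common bias-audit metric, not a requirement stated in the bill's
# text summarized above.
screened = {  # group -> (candidates advanced by the tool, candidates assessed)
    "group_a": (48, 120),
    "group_b": (30, 100),
}

rates = {group: advanced / assessed for group, (advanced, assessed) in screened.items()}
highest_rate = max(rates.values())
impact_ratios = {group: rate / highest_rate for group, rate in rates.items()}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f}")
# group_a: selection rate 0.40, impact ratio 1.00
# group_b: selection rate 0.30, impact ratio 0.75
```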
Finally, a number of cities, including San Francisco, Oakland, Berkeley, Portland (Oregon), Boston, and Minneapolis, have implemented bans on the use of facial recognition systems by police.
VI. Conclusion
With the proliferation of AI systems, the United States is stepping up its efforts to institute commonsense regulation and to better align with international AI standards development and regulatory initiatives. A Brookings study on strengthening international cooperation on AI identifies several benefits of harmonizing AI regulation: cooperation between the public and private sectors across national boundaries will create economies of scale that benefit everyone, and harmonized AI standards will foster a climate of trustworthy AI that ameliorates consumer distrust and promotes the growth of the digital economy.