But the tremendous opportunities afforded by AI do not come without risk. Title VII of the Civil Rights Act of 1964, the cornerstone federal employment discrimination law, contains no statutory language specifically addressing AI technologies, which did not emerge until several decades after the statute’s passage. However, the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency responsible for enforcing Title VII, has made it a strategic priority to prevent and redress employment discrimination stemming from employers’ use of AI to make employment decisions regarding prospective and current employees.
Focusing on the EEOC’s pioneering efforts in this space, this article explores the risks of using AI in the employment context. First, this article examines the current litigation landscape with an in-depth case study of the EEOC’s first AI discrimination lawsuit and settlement. Next, to explain how we got here, the article traces the EEOC’s AI initiative from its origins to its present-day outreach efforts. Finally, this article reads the EEOC’s tea leaves about the future of AI in the workplace, offering employers insight into how to best navigate the employment decision-making process when implementing this generation-changing technology.
A New Frontier: EEOC’s First AI Lawsuit
In 2022, the EEOC filed its first lawsuit involving AI software bias. In Equal Employment Opportunity Commission v. iTutorGroup, Inc., the charging party submitted an application to work for the three integrated defendants, who provided English-language tutoring services to students in China. The sole qualification to be hired as a tutor for the defendants was a bachelor’s degree. As part of the application process, applicants provided their date of birth on applications through the defendants’ website.
On March 29, 2020, the charging party—who was over the age of 55 at the time she submitted her online application to work for the defendants—provided her date of birth and was immediately rejected. On March 30, 2020, the charging party reapplied using a more recent date of birth and otherwise identical application information. After changing her birthday (and nothing else), the charging party was subsequently offered an interview. She then filed a charge of discrimination with the EEOC. The EEOC’s investigation revealed that the defendants’ software automatically rejected more than 200 other applicants aged 55 and over from the United States because of their age, even though they had bachelor’s degrees (or higher) and were thus otherwise qualified.
After conciliation failed, on May 5, 2022, the EEOC filed a lawsuit on behalf of the charging party. The EEOC alleged that the three integrated defendants violated the Age Discrimination in Employment Act of 1967 (ADEA) by programming their hiring software to reject female applicants over 55 years old and male applicants over 60 years old. In response, the defendants filed an answer denying the EEOC’s allegations in their entirety and asserting numerous affirmative defenses. Throughout the litigation, the defendants denied all allegations of discrimination. The defendants further disputed, among other things, that the tutors were employees under the ADEA and other federal and state antidiscrimination laws, as opposed to independent contractors.
On August 9, 2023, the EEOC and the defendants filed a joint settlement agreement and consent decree in the U.S. District Court for the Eastern District of New York, memorializing their $365,000 settlement. The consent decree confirmed that the $365,000 would be distributed to tutor applicants who were allegedly rejected by the defendants because of their age from March 2020 through April 2020. The consent decree further provided that the settlement payments would be split evenly between compensatory damages and back pay. Suffice it to say, the monetary relief was significant for the allegedly aggrieved applicants.
In terms of nonmonetary relief, the consent decree imposed a robust menu of obligations on the defendants, including the following: (1) an injunction barring the companies and all relevant personnel from future discriminatory acts; (2) provision of a “Notice of Lawsuit and Resolution” to all individuals holding a C-level position with the defendants, the members of the board of directors, and the head of human resources for each defendant; (3) preparation of a memorandum, to be approved by the EEOC and distributed to all employees, regarding the requirements of federal antidiscrimination laws, including prohibitions on age and sex discrimination in hiring; (4) preparation and provision of antidiscrimination policies and complaint procedures applicable to the screening, hiring, and supervision of tutors and tutor applicants; and (5) annual training programs for all supervisors and managers involved in the hiring process. The consent decree, which will remain in effect for five years, also contained reporting and recordkeeping requirements. Perhaps most notably, the consent decree included a monitoring provision, which allows the EEOC to inspect the defendants’ premises and records and to interview the defendants’ officers, agents, employees, and independent contractors to ensure compliance. For litigants in future EEOC-initiated AI software bias lawsuits, the case trajectory and settlement provide a first-of-its-kind road map.
How did the EEOC arrive at this unprecedented result? As discussed in greater detail below, it was no accident.
EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative
To best understand the future of the EEOC’s AI evolution, one must first understand how the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative came to fruition.
AI has been on the EEOC’s enforcement radar since at least 2016. In 2021, the EEOC formally planted its flag in this space by launching the Artificial Intelligence and Algorithmic Fairness Initiative. This initiative sought to examine the use and impact of emerging technologies, including AI, in hiring and other employment decisions. The EEOC endeavored to assess how these technologies impact the processes for making employment decisions in order to provide insight to a broad range of constituents—employers, employees, job applicants, and vendors—on how to best navigate equal employment opportunity laws.
At the time the initiative was launched, EEOC Chair Charlotte Burrows declared:
Artificial intelligence and algorithmic decision-making tools have great potential to improve our lives, including in the area of employment. . . . At the same time, the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.
Those who have been following the EEOC’s progress over the last few years would agree that this statement remains relevant.
EEOC Commissioner Keith Sonderling in particular has emerged as a leader in this area. He has authored numerous articles on the benefits and potential harms of using AI-based technology in the workplace and has met with stakeholders all over the world to present on this emerging technology. Without question, Commissioner Sonderling’s proactive efforts to bridge artificial intelligence and employment law laid the foundation for the future of EEOC investigations and cemented his legacy as an innovator in this space.
To effectuate the initiative, the EEOC assessed the rapid emergence of AI from a variety of angles. Specifically, the EEOC made clear that its initiative sought to (1) gather information about the adoption, design, and impact of hiring and other employment-related technologies; (2) identify best practices; and (3) listen to various stakeholders to best understand the ramifications of using AI. Equipped with this data and information, the EEOC then vowed to issue guidance and technical assistance materials to best guide key stakeholders on how to use AI within the confines of federal employment discrimination laws.
May 2022 TAD. On May 12, 2022, the EEOC and the U.S. Department of Justice, Civil Rights Division, each released new resources for employers and workers about the impact of AI, algorithmic fairness, and the Americans with Disabilities Act (ADA). The EEOC’s technical assistance document (TAD), The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, focuses on preventing discrimination against job applicants and employees with disabilities. The publication emphasizes three primary concerns under the ADA: (1) employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools; (2) without proper safeguards, workers with disabilities may be “screened out” from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and (3) if the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.
September 2022 roundtable. The EEOC continued the conversation about AI over the next few months. On September 13, 2022, Burrows and U.S. Department of Labor’s Office of Federal Contract Compliance Programs Director Jenny R. Yang hosted a virtual roundtable with external stakeholders to discuss the civil rights implications of the use of automated technology systems, including AI, in the recruitment and hiring of workers.
Draft SEP for 2023–2027. On January 10, 2023, the EEOC published a draft of its proposed strategic enforcement plan (SEP) for fiscal years 2023–2027. While the draft SEP was released only for public comment and is not yet final, its content suggests that a handful of subjects will be squarely on the EEOC’s radar for the duration of the plan, including discrimination stemming from the use of AI in hiring.
While the EEOC’s focus on eliminating barriers in recruitment and hiring is not a new phenomenon, employers’ increasing use of AI in hiring has added a new wrinkle in this space. The SEP specifically notes that the EEOC will focus on the “use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.” Additionally, the Commission will monitor screening tools or requirements that disproportionately impact workers based on their protected status, including those facilitated by AI or other automated systems, preemployment tests, and background checks. Finally, the SEP notes that the EEOC will keep an eye on restrictive application processes or systems, including online systems that are difficult for individuals with disabilities or other protected groups to access.
January 2023 public hearing. On January 31, 2023, the EEOC held a public hearing to examine the use of automated systems, including AI, in employment decisions. During the hearing, titled Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier, the EEOC gathered information from a broad spectrum of stakeholders, including computer scientists, civil rights advocates, legal experts, industrial-organizational psychologists, and employer representatives. Nearly 3,000 members of the public attended the hearing, signaling the import of this topic.
FY 2022 report. On March 13, 2023, the EEOC announced the release of its Fiscal Year 2022 Annual Performance Report (FY 2022 report). In addition to recapping the EEOC’s release of the aforementioned technical guidance materials, the FY 2022 report highlights that the EEOC was active on a variety of fronts. The EEOC hosted 24 AI and algorithmic fairness outreach events that reached nearly 1,200 attendees. The Commission conducted two training institute workshops “to educate employers about the risks associated with AI in the workplace.” And, perhaps most importantly for employers, the EEOC provided AI training to systemic enforcement teams in its field offices. The EEOC defines “systemic cases” as “pattern or practice, policy and/or [complex] cases where the discrimination has a broad impact on an industry, profession, company or geographic location.” Systemic cases, which may involve thousands of allegedly aggrieved individuals, are often high-stakes matters that can have profound impacts on a company’s viability or business model. Put differently, the iTutorGroup settlement represented a massive first wave, and future systemic discrimination lawsuits have the potential to be tsunamis.
These impressive results in the FY 2022 report foreshadowed the EEOC’s continued commitment to AI in 2023 and beyond.
EEOC’s May 2023 TAD
The momentum generated by the EEOC’s progress in 2022 was no mirage. On May 18, 2023, the EEOC released a TAD, Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures under Title VII of the Civil Rights Act of 1964, to provide employers guidance on preventing discrimination when utilizing AI. The TAD aims to show employers how to monitor newer algorithmic decision-making tools and ensure compliance with Title VII.
Central terms. To set the parameters for the resource, the EEOC first defines a few key terms:
Software: Broadly, “software” refers to information technology programs or procedures that provide instructions to a computer on how to perform a given task or function. . . .
Algorithm: Generally, an “algorithm” is a set of instructions that can be followed by a computer to accomplish some end. . . .
Artificial Intelligence (“AI”): . . . In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.
Observing that employers increasingly utilize software that incorporates algorithmic decision-making at various stages of the employment process, the EEOC defines “algorithmic decision-making tool” broadly to refer to the following systems:
[i] resume scanners that prioritize applications using certain keywords; [ii] employee monitoring software that rates employees on the basis of their keystrokes or other factors; [iii] “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements; [iv] video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and [v] testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.
After summarizing the relevant provisions of Title VII, the TAD presents the core of the EEOC’s guidance in a question-and-answer format.
Selection procedure and adverse impact. First, the EEOC defines a “selection procedure” to be
any “measure, combination of measures, or procedure” if it is used as a basis for an employment decision. . . . [E]mployers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is “substantially” less than the selection rate for individuals in another group.
If there is an adverse impact, then use of the tool will violate Title VII unless the employer can demonstrate that, pursuant to Title VII, such use is “job related and consistent with business necessity.”
Third-party tools and responsibility. Next, the EEOC poses the critical question of whether an employer is “responsible under Title VII for its use of algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor.” This is an important issue because many companies seek the assistance of third-party technology providers to facilitate some of their employment-decision processes. After all, most employers do not (yet) have an in-house team of AI engineers. The EEOC indicates that, in many cases, employers are responsible for the actions of their agents, such as third-party vendors. Ultimately, if the employer makes the final employment decision, the buck will likely stop with the employer in terms of Title VII liability. This is noteworthy because it makes clear that employers cannot simply bury their heads in the sand and point the finger at a vendor when AI software produces an unlawful outcome.
Selection rate and four-fifths rule. The EEOC also defines the term “selection rate,” which
refers to the proportion of applicants or candidates who are hired, promoted, or otherwise selected. The selection rate for a group of applicants or candidates is calculated by dividing the number of persons hired, promoted, or otherwise selected from the group by the total number of candidates in that group.
Because this definition is included in the TAD, employers can expect the EEOC to monitor selection rates to determine whether there is an adverse impact in employment decisions stemming from the use of AI.
In terms of what is an acceptable selection rate, the EEOC relies on the “four-fifths rule,” which is “a general rule of thumb for determining whether the selection rate for one group is ‘substantially’ different than the selection rate of another group. The rule states that one rate is substantially different than another if their ratio is less than four-fifths (or 80%).” For example:
[If] the selection rate for Black applicants was 30% and the selection rate for White applicants was 60%[,] [t]he ratio of the two rates is thus 30/60 (or 50%). Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule says that the selection rate for Black applicants is substantially different than the selection rate for White applicants . . . , which could be evidence of discrimination against Black applicants.
The EEOC notes that the four-fifths rule is a general suggestion and may not be appropriate in every circumstance. Some courts have even found this rule to be inapplicable. Nonetheless, employers would be prudent to ask whether their AI vendors applied the four-fifths rule when validating their tools. Statistics matter in this context. Thankfully, the EEOC has transparently provided the formulas for employers to follow, and the burden is now on businesses to make sure that their selection rates pass muster.
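To make the arithmetic concrete, here is a minimal sketch of how an employer might compute selection rates and apply the four-fifths rule to its own hiring data. The figures mirror the EEOC’s 30 percent versus 60 percent example above; the function names and applicant counts are illustrative assumptions, not part of any EEOC-provided tool.

```python
def selection_rate(selected: int, total: int) -> float:
    """Selection rate: persons selected from a group divided by total candidates in that group."""
    return selected / total

def substantially_different(rate_a: float, rate_b: float) -> bool:
    """Four-fifths rule of thumb: the rates are 'substantially' different
    if the lower rate is less than 80% of the higher rate."""
    lower, higher = sorted((rate_a, rate_b))
    return (lower / higher) < 0.8

# Hypothetical applicant counts chosen to reproduce the TAD's 30% and 60% rates.
rate_group_a = selection_rate(48, 160)  # 0.30
rate_group_b = selection_rate(60, 100)  # 0.60

# 0.30 / 0.60 = 0.50, which is below 0.80, so the rule flags a possible adverse impact.
print(substantially_different(rate_group_a, rate_group_b))  # True
```

As the TAD itself cautions, a ratio below four-fifths is a rule of thumb rather than a definitive finding of discrimination, which is precisely why regular monitoring and follow-up analysis matter.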
Developing nondiscriminatory algorithms. Finally, the EEOC introduces the hypothetical issue of what employers should do when they discover that their use of an algorithmic decision-making tool would result in an adverse impact. The EEOC explains that “[o]ne advantage of algorithmic decision-making tools is that the process of developing the tool may itself produce a variety of comparably effective alternative algorithms.” Accordingly, employers who neglect to adopt a less discriminatory algorithm that could have been considered during the development process may find themselves liable for the output. Employers should thus take care to document the steps they take to utilize nondiscriminatory algorithms.
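As a hypothetical illustration of that documentation point, the sketch below compares candidate algorithms produced during development, treats those within a small validity tolerance as “comparably effective,” and records the selection of the least discriminatory option. The model names, scores, and tolerance are assumptions for illustration only.

```python
# Hypothetical record of alternative algorithms produced during development.
# Each entry: (name, validity score, adverse impact ratio); higher is better for both.
candidates = [
    ("model_a", 0.91, 0.72),
    ("model_b", 0.90, 0.88),
    ("model_c", 0.84, 0.95),
]

VALIDITY_TOLERANCE = 0.02  # assumed threshold: within 2 points is "comparably effective"

best_validity = max(validity for _, validity, _ in candidates)
comparable = [c for c in candidates if best_validity - c[1] <= VALIDITY_TOLERANCE]

# Among comparably effective alternatives, prefer the highest (least adverse) impact ratio.
name, validity, impact_ratio = max(comparable, key=lambda c: c[2])

# Keeping a record of this comparison helps show that less discriminatory
# alternatives were considered during the development process.
print(f"Selected {name}: validity={validity}, impact ratio={impact_ratio}")
```

Here, model_b would be selected over the slightly more accurate model_a because its impact ratio is materially better at a comparable level of validity, and the printed record documents that choice.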
EEOC’s July 2023 ADA Guidance
The EEOC’s release of the May 2023 TAD was groundbreaking in terms of that resource’s depth and remarkable ability to anticipate, ask, and answer questions for pertinent stakeholders. Not resting on its laurels, the Commission continued publishing about AI in the summer. On July 26, 2023, the EEOC issued a new guidance document, Visual Disabilities in the Workplace and the Americans with Disabilities Act (ADA Guidance). This document is an excellent resource for employers, providing insight into how to handle situations that may arise with job applicants and employees who have visual disabilities. For employers that use algorithms or AI as a decision-making tool, the ADA Guidance reinforces the notion that employers have an obligation to make reasonable accommodations for applicants or employees with visual disabilities who request them in connection with these technologies.
The ADA Guidance addresses four subjects:
- when an employer may ask an applicant or employee questions about a vision impairment and how an employer should treat voluntary disclosures;
- what types of reasonable accommodations applicants or employees with visual disabilities may need;
- how an employer should handle safety concerns about applicants and employees with visual disabilities; and
- how an employer can ensure that no employee is harassed because of a visual disability.
In the question-and-answer section of the ADA Guidance, the EEOC brings AI into the conversation by posing the following hypothetical question: “Does an employer have an obligation to make reasonable accommodations to applicants or employees with visual disabilities who request them in connection with the employer’s use of software that uses algorithms or artificial intelligence (AI) as decision-making tools?” According to the EEOC, the answer is yes.
The ADA Guidance opines that AI tools may intentionally (or perhaps unintentionally) “screen out” individuals with disabilities who apply for or are currently on the job even though they are able to do the job with or without reasonable accommodation. By way of example, “an applicant or employee may have a visual disability that reduces the accuracy of an AI assessment used to evaluate the applicant or employee.” In those situations, the EEOC notes, the employer is obligated “to provide a reasonable accommodation, such as an alternative testing format, that would provide a more accurate assessment of the applicant’s or employee’s ability to perform the position, absent undue hardship.”
In sum, while emerging technologies have the potential to benefit employees with disabilities in terms of potentially facilitating their ability to perform job functions, the other side of the coin is that employers must ensure that their use of AI does not preclude individuals with disabilities from having an opportunity to be part of the organization in the first place.
The EEOC Evolution: AI as a Strategic Priority
The use of AI in employment decisions may be the new frontier for future EEOC investigations—not only as a problem to address but also as a solution to deploy.
On September 21, 2023, the EEOC released its SEP for fiscal years 2024–2028. The SEP establishes the EEOC’s subject matter priorities to achieve its mission of preventing and remedying unlawful employment discrimination and to advance its vision of fair and inclusive workplaces with equal opportunity for all. Notably, the FY 2024–2028 SEP “[r]ecognizes employers’ increasing use of technology including artificial intelligence or machine learning, to target job advertisements, recruit applicants, and make or assist in hiring and other employment decisions, practices, or policies.” By including AI as a strategic priority, the EEOC reaffirmed that its commitment in this area will be a key element of the Commission’s enforcement efforts over the next several years.
Although these technologies can yield tremendous cost benefits, the risk is undeniable. However, not all EEOC litigation press releases are necessarily “gloom and doom” in terms of how AI can be deployed in the workplace. On March 20, 2023, the EEOC announced that it had entered into a conciliation agreement with a company that operates a job search website for technology professionals. The conciliation resolved national origin discrimination charges concerning allegations that some of the customers who posted positions on the job site excluded American candidates. Pursuant to the conciliation agreement, the company agreed to rewrite its programming to “scrape” for potentially discriminatory keywords such as “OPT,” “H1B,” or “Visa” that appear near the words “only” or “must” in its customers’ new job postings. The EEOC commended the use of AI as part of the solution, noting that “[w]e appreciate [the company’s] willingness to take steps to prevent future job postings on its site that discriminate against national origin, . . . [and the] use of programming to ‘scrape’ for potentially discriminatory postings illustrates a beneficial use of artificial intelligence in combatting employment discrimination.” This example illustrates that AI can be part of the solution to eradicating discrimination when deployed in the appropriate manner.
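The press release does not describe how the company actually implemented its keyword “scrape,” but the concept can be sketched in a few lines. The pattern below is a hypothetical illustration that flags postings in which “OPT,” “H1B,” or “Visa” appears within a few words of “only” or “must,” routing them for human review.

```python
import re

# Hypothetical screen modeled on the keyword pairs described in the
# conciliation agreement: "OPT," "H1B," or "Visa" near "only" or "must."
FLAG_PATTERN = re.compile(
    r"\b(?:OPT|H1B|Visa)\b\W+(?:\w+\W+){0,3}?\b(?:only|must)\b"
    r"|\b(?:only|must)\b\W+(?:\w+\W+){0,3}?\b(?:OPT|H1B|Visa)\b",
    re.IGNORECASE,
)

def needs_review(posting: str) -> bool:
    """Flag a job posting for human review if it pairs a visa-status keyword with 'only' or 'must'."""
    return bool(FLAG_PATTERN.search(posting))

print(needs_review("H1B candidates only, please apply today"))      # True
print(needs_review("We sponsor work visas for strong candidates"))  # False
```

A production system would of course need a broader keyword list and human review of each flag, but the core pattern-matching idea is that simple.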
What’s Next?
If the EEOC’s zealous commitment to eradicating AI software bias has not been enough to get employers’ attention, several states and municipalities are already taking their own legislative action.
For instance, New York City’s automated employment decision tool (AEDT) law, passed by the New York City Council as Int. No. 1894-A and enacted as Local Law 144 of 2021, is already in effect. This law protects job candidates and employees from unlawful discriminatory bias based on race, ethnicity, or sex when employers and employment agencies use AEDTs to guide employment decisions. Under New York City’s AEDT law, it is unlawful for employers or staffing agencies to use an AEDT to screen candidates and employees unless (1) the tool has undergone a bias audit no more than one year prior to its use, (2) information about the bias audit is publicly available, and (3) certain notices have been provided to employees or job candidates.
At the state level, on February 17, 2023, the Illinois state legislature introduced HB 3773. If passed, this bill would prohibit employers from using race, or zip code as a proxy for race, when making automated hiring decisions through emerging technologies such as AI.
In the next 10 years, it would not be surprising to see most states and major metropolitan areas enact some variation of an AI software discrimination statute. The anticipated state and local laws will likely not be identical, which in turn will create a compliance minefield for employers and AI software developers.
At the federal level, on October 30, 2023, President Biden signed an executive order that provides guidance for employers on the use of AI in the workplace. The executive order endeavors to establish protections for American workers from unintended bias, discrimination, infringements on privacy, and other possible harms from AI. This development suggests that federal legislation regulating the use of AI in employment processes may not be far behind.
Takeaways
The purpose of this article is not to spook employers and other stakeholders in this conversation but, rather, to provide a road map of where the EEOC’s AI initiatives began and where they may go in the future. To best deter EEOC-initiated litigation involving AI in the hiring context, employers should review their AI software upon implementation to ensure that applicants are not excluded based on membership in any protected class. Employers should also regularly audit the use of these programs to make sure that the AI software does not result in an adverse impact on applicants in protected-category groups. Finally, employers should continue to communicate with vendors to ensure that their policies are legally compliant. These communications should be regular and occur both before and during the contractual relationship.
Similar to the introduction of technologies such as the typewriter, computer, internet, and cell phone, there are, understandably, questions and resulting debates about the precise impact that AI will have on the business world, including the legal profession. To best adopt any new technology, one must first invest in understanding how it works. The EEOC has done exactly that over the last several years. The businesses that use AI software to make employment decisions must similarly make a commitment to fully understand its impact, particularly with regard to applicants and employees who are members of protected classes. The employment evolution is here, and those who are best equipped to understand the risks and rewards will thrive in this exciting new era.