Use of A.I. Creates Potential Risks Under Existing Employment Laws
Like any other recruiting or hiring practice, the use of A.I. systems to screen and interview candidates implicates Title VII of the Civil Rights Act of 1964 (“Title VII”), a federal law that protects employees and applicants against discrimination based on certain specified characteristics such as race, color, national origin, sex, and religion, as well as the Age Discrimination in Employment Act (“ADEA”). Both Title VII and the ADEA prohibit discrimination based on disparate treatment or disparate impact. While a claim of disparate treatment—i.e., intentional discrimination—might seem odd in the context of a computer program, which by its nature lacks a discriminatory motive or intent, courts have upheld disparate treatment claims based on allegations of unconscious or implicit bias. As discussed above, unconscious bias can manifest in an A.I. system because of its programming and training. Thus, a court could find that an employer faces the same liability for a program exhibiting the unconscious bias of its programmer as it would if the programmer, acting on that bias, had made the hiring decision personally.
Alternatively, an employer could face a Title VII or ADEA disparate impact claim if use of a particular A.I.-driven program or algorithm adversely impacts members of a protected class, such as the female applicants disfavored by Amazon’s recruiting tool. Courts analyzing such a claim could turn to a seminal line of cases dealing with employers’ use of standardized tests in the application and promotion process. In Griggs v. Duke Power Company and Albemarle Paper Co. v. Moody, the Supreme Court established that if such tests are shown to have a disparate impact on protected groups of employees, employers must demonstrate that the tests are job-related and represent a reasonable measure of job performance. Courts could apply the same reasoning to A.I. programs and algorithms, requiring employers to establish how the factors considered by the programs relate to the specific requirements of the position at issue. In some cases, such as the analysis of relevant experience in a resume, an employer might be able to make that showing easily. Where facial recognition software prioritizes candidates who made eye contact during an automated interview, however, job-relatedness might be more difficult to establish. In addition, even if an employer shows that the A.I. tool considers job-related factors, applicants could still succeed on a disparate impact claim by pointing to a less discriminatory practice that could serve the same job-related business interest.
An A.I. hiring practice could also implicate the Americans with Disabilities Act (“ADA”) if an algorithm discerns an applicant’s physical disability, mental health condition, or clinical diagnosis, all of which are forbidden subjects of inquiry in pre-employment candidate assessments. The ADA Amendments Act of 2008 broadened the statutory definition of “disability,” expanding the range of individuals the ADA protects. Similarly, the Equal Employment Opportunity Commission (“EEOC”) has issued guidance recognizing the expanded list of personality disorders identified in the psychiatric literature as protected mental impairments. Consequently, the ADA may protect applicants who have significant concentration or communication problems, both of which A.I. technology may identify as disqualifying characteristics for employment.
The potential for A.I. recruiting practices to violate existing employment statutes is not hypothetical. In fact, the EEOC has already investigated at least two instances of alleged A.I. bias and has made clear that employers using A.I. hiring practices could face liability for any unintended discrimination. Furthermore, in September 2018, three U.S. Senators requested that the EEOC develop guidelines for employers’ use of facial analysis technologies to ensure they do not violate anti-discrimination laws. Though the EEOC has not yet responded to the Senators’ request, the Commission’s recent enforcement activities demonstrate its focus on the growing use of new technologies. For example, in 2017 the EEOC found reasonable cause to believe an employer violated the ADEA by advertising a position within its company on Facebook and “limiting the audience for their advertisement to younger applicants.”
In addition to laws focusing on discrimination, the use of certain A.I. recruiting tools could implicate state biometric laws. Illinois, Texas, and Washington have laws regulating the collection of biometric identifiers, including scans of hands, fingers, voices, faces, irises, and retinas. The laws generally require that businesses collecting biometric identifiers specify how they safeguard, handle, store, and destroy the data they collect, and that they provide individuals with prior notice and obtain their consent, including notice of exactly how the data will be collected and used. New York, California, Washington, and Arkansas have also recently amended their existing state laws to include biometric data in the definition of protected personal information. To the extent that employers use facial or voice recognition software to analyze applicants’ video interviews, they may have to develop policies to ensure that their storage and use of that data complies with applicable state laws. Moreover, the nature of an online application process means that employers may inadvertently collect biometric data from individuals who reside outside of the states in which the company normally operates, which could expose the employer to additional legal requirements of which it might not be aware.
Many States Are Now Focused on Protecting Job Applicants When A.I. Is Used in Hiring
While A.I. in recruiting is not regulated on a federal level, Illinois recently enacted a first-of-its-kind law called the Artificial Intelligence Video Interview Act. Effective January 1, 2020, the law imposes strict limitations on employers who use A.I. to analyze candidate video interviews. Under the Act, employers must: a) notify applicants that A.I. will be used in their video interviews; b) obtain consent to use A.I. in each candidate’s evaluation; c) explain to applicants how the A.I. works and what characteristics the A.I. will track in relation to their fitness for the position; d) limit sharing of the video interview to those who have the requisite expertise to evaluate the candidate; and e) comply with an applicant’s request to destroy his or her video within 30 days.
New York City is currently considering legislation to limit the discriminatory use of A.I. technology. If passed, the new law would prohibit the sale of “automated employment decision tools” unless the tools’ developers first conducted anti-bias audits to assess the tools’ predicted compliance with the provisions of Section 8-107 of the New York City Administrative Code, which sets forth the city’s employment discrimination laws and prohibits, among other things, employment practices that disparately impact protected applicants or workers. New Jersey and Washington state legislators introduced similar legislation in 2019.
Furthermore, beginning in 2018, New York, Vermont, and Alabama created task forces to study the development and use of A.I. technologies. The states directed the task forces to assess A.I. tools against benchmarks such as discriminatory impact, fairness, accountability, and transparency, and to develop best practices for A.I. usage. These efforts to examine A.I. tools in depth could foreshadow upcoming state regulation of A.I.-driven pre-employment tools.
State legislatures are not the only ones scrutinizing A.I. usage in recruiting. Members of both the Senate and the House introduced the Algorithmic Accountability Act (“AAA”) in April 2019. The AAA, if enacted, would be the first federal law aimed at regulating the use of algorithms by private companies, and would task the Federal Trade Commission with creating regulations that require major employers to assess their A.I. tools for accuracy, fairness, bias, discrimination, privacy, and security, and to implement timely corrections. As drafted, the AAA applies only to companies that have annual revenues in excess of $50 million, that possess information relating to at least one million people or devices, or that act as data brokers buying and selling consumer data. Commentators have observed that the proposed act provides clear notice that Congress believes A.I. should be regulated and is prepared to step in.
What Employers Should Be Aware of When Considering Using A.I. in Hiring
Just as COVID-19 has accelerated the transition of many employers to flexible work schedules, the nationwide move to more regular work-from-home arrangements is likely to accelerate the adoption of A.I. tools in the recruiting, interviewing, and hiring process. To the extent that employers are considering using such tools, either in-house or through a recruiting company, there are certain issues of which they should be cognizant:
- Employers should know the factors being considered by the program or algorithm. In much the same way that employers carefully develop and identify non-discriminatory and non-biased factors and considerations that are important to their traditional hiring decisions, they need to be just as diligent in developing and modifying (where appropriate) the inputs fed into the recruiting programs and algorithms used to screen and evaluate potential candidates and applicants. Not only will this enhance the likelihood of recruiting success, but it will also give employers the opportunity to assess whether the factors are, in fact, job-related, which is a linchpin criterion under many employment laws.
- Employers should consider auditing automated tools on a regular basis. One of the main selling points for machine learning tools is that they can adapt on their own to feedback from the person making employment decisions, theoretically leading to better results the longer they are used. The downside of this constant adaptation is that employers cannot rely on an initial analysis of whether the program is returning results that may disadvantage one group or another. Employers should consider regularly auditing the results produced by these tools to ensure that the programs are not inadvertently “learning” illegal or improper lessons from the information that is input (see the illustrative sketch after this list for one simple form such an audit might take). Self-critical analysis of both the inputs and outputs is essential to minimize liability risk under the employment laws.
- Outsourcing does not eliminate risk to employers. Not all employers have the capability of internally developing A.I. tools for recruiting—many likely contract with outside vendors to handle parts of the recruiting process, particularly the initial vetting of applicants and/or the advertising to specific potential candidates. Using such an arrangement, however, does not exempt the employer from liability if the vendor is using tools that discriminate against protected groups. Similar to requests for salary history and background checks, employers may be held liable for violations of employment laws by recruiting companies. As such, employers—through appropriate contract language—should require their recruiters, or others acting on their behalf, to comply with all existing employment laws in connection with the screening and hiring of job applicants.
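By way of illustration only, the short Python sketch below shows one simplified way the audit described above might begin: computing selection rates by demographic group from a screening tool’s outcomes and comparing them under the EEOC’s “four-fifths” guideline, a common, though not conclusive, indicator of potential adverse impact. The data format, group labels, and threshold handling are hypothetical assumptions for purposes of the example; a real audit would involve counsel, validated demographic data, and more rigorous statistical analysis.

```python
from collections import Counter

# Illustrative screening outcomes: (self-reported group, advanced past the A.I. screen).
# In practice, these records would come from the screening tool's decision logs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the share of applicants in each group who advanced past the screen."""
    passed, total = Counter(), Counter()
    for group, advanced in records:
        total[group] += 1
        if advanced:
            passed[group] += 1
    return {group: passed[group] / total[group] for group in total}

def impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group.

    The EEOC's Uniform Guidelines treat a ratio below 0.8 (the "four-fifths rule")
    as a common, though not conclusive, indicator of adverse impact.
    """
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Even a basic check of this kind, run at regular intervals, can flag screening results that warrant closer review before they ripen into disparate impact exposure.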