
Discrimination in the Age of Artificial Intelligence

BY AMBER M. ROGERS AND MICHAEL REED

In his 1968 classic film, 2001: A Space Odyssey, Stanley Kubrick introduced us to—and terrified us with—HAL, the sentient computer that sabotaged its human operators after learning of their plans to disconnect it. HAL, it seemed, had outsmarted its human creators to protect itself and bring about their demise.

Artificial intelligence (“AI”) has been a fixture in the collective American mind for decades. Although it has not progressed to the terrifying extremes depicted in 2001 (at least not yet), AI now plays a role in almost every industry in the world. Automobiles come standard with a bevy of automated features, and the prospect of self-driving cars seems imminent. AI has infiltrated the legal industry too. Lawyers pride themselves on their intellect and their ability to apply critical analysis to constantly changing circumstances. Perhaps that will remain a uniquely human ability for now, but AI is already being used to analyze case law and even assemble legal briefs.

Another new frontier for AI is the corporate hiring process. Companies are now using AI not only to scan resumes for relevant experience or accomplishments, but also to scan social media pages for potential applicants whom the company can target directly with an automated solicitation or application. Proponents of this use of AI say that it allows employers to process a far greater number of resumes or applications than they otherwise could with only humans at the helm. It also would seem to eliminate the risk of human biases based on names, age, race, or gender by removing that information from any algorithm on which the AI is based. Moreover, an AI-powered screening process does not get tired at 3:00 p.m. after reviewing reams of resumes or applications, as humans do; reviewer fatigue has been shown to correlate with higher rejection rates. The ability of AI to process a seemingly endless amount of data would lead one to think that it increases the opportunity for applicants and employers to find each other.

AI skeptics counter that the technology is only as good as the humans who create the underlying algorithms and that it strips an essential “human” element from an especially interpersonal process. There are also technical problems with AI, as there are with any new technology, that skeptics say make it unsuitable for certain uses. Facial and voice recognition AI, for example, while rapidly improving, is still far from perfect and makes errors on a regular basis.

There are also particular risks for employers under federal and state employment statutes. Title VII of the Civil Rights Act of 1964 (“Title VII”) provides federal protections for employees and applicants against discrimination on the basis of certain characteristics, including race, religion and gender, and the Age Discrimination in Employment Act (“ADEA”) prohibits age discrimination. An employer relying on AI to sort through applications could inadvertently disqualify an applicant or group of applicants based on a protected trait. For example, an AI screening application that disqualifies applicants outside of a certain geographic radius might inadvertently discriminate against a particular racial or ethnic group.

Likewise, a screening application that discards applicants who lack certain educational credentials might inadvertently discriminate against older applicants. If AI is throwing out an entire class of applicants, even on the basis of seemingly innocuous factors such as geography, the employer could be held liable for disparate impact discrimination, especially if the plaintiffs are able to show that a less discriminatory practice would have served the employer’s legitimate hiring objectives.
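To illustrate the mechanics, the short sketch below shows how such a facially neutral screening rule can be audited for disparate impact. It is a simplified, hypothetical example: the applicant pool, the 50-mile cutoff, the group labels, and the function names are invented for illustration and do not describe any particular vendor’s tool. The comparison uses the familiar four-fifths (80 percent) ratio from the Uniform Guidelines on Employee Selection Procedures, which enforcement agencies treat as a rough rule of thumb for adverse impact rather than a definitive legal test.

```python
# Hypothetical sketch: auditing a geographic-radius screen for disparate impact.
# The applicant data, the 50-mile cutoff, and the group labels are all invented
# for illustration; a real audit would be far more involved.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    group: str              # demographic group, used only for the audit, never for screening
    miles_from_office: float

def passes_screen(applicant: Applicant, max_miles: float = 50.0) -> bool:
    """The 'neutral' screening rule: keep only applicants within a fixed radius."""
    return applicant.miles_from_office <= max_miles

def selection_rates(applicants: list[Applicant]) -> dict[str, float]:
    """Share of applicants in each group who survive the screen."""
    totals: dict[str, int] = {}
    passed: dict[str, int] = {}
    for a in applicants:
        totals[a.group] = totals.get(a.group, 0) + 1
        if passes_screen(a):
            passed[a.group] = passed.get(a.group, 0) + 1
    return {g: passed.get(g, 0) / totals[g] for g in totals}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% of the highest group's rate."""
    highest = max(rates.values())
    if highest == 0:
        return {g: False for g in rates}
    return {g: (rate / highest) < 0.8 for g, rate in rates.items()}

if __name__ == "__main__":
    pool = [
        Applicant("A", "group_1", 12.0),
        Applicant("B", "group_1", 30.0),
        Applicant("C", "group_1", 55.0),
        Applicant("D", "group_2", 62.0),
        Applicant("E", "group_2", 70.0),
        Applicant("F", "group_2", 41.0),
    ]
    rates = selection_rates(pool)
    print(rates)                     # ≈ {'group_1': 0.67, 'group_2': 0.33}
    print(four_fifths_flags(rates))  # group_2 flagged: its rate is below 80% of group_1's
```

Even a simple comparison like this can surface the kind of unintentional screening effect that gives rise to disparate impact claims, which is one reason auditing is discussed later in this article.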

In addition, states are increasingly passing their own anti-discrimination laws that protect additional categories of employees. Virginia, for example, recently passed a law that protects employees who use cannabis oil for medical purposes. This law distinguishes “cannabis oil” from other types of medicinal marijuana and has specific definitions of what is and is not protected. An algorithm that fails to take these nuances into consideration might inadvertently discriminate against protected cannabis users. These challenges are all the more difficult for large employers who receive a broad array of applicants and must comply not only with the federal anti-discrimination laws, but also with those of each state in which the company hires people.

There are currently no federal regulations governing the use of AI in employment. Municipalities and state legislatures, however, have recently begun taking steps to prevent AI-induced bias. New York City, for example, is currently debating a measure that would regulate the use of “automated employment decision tools,” which include “certain systems that use algorithmic methodologies to filter candidates for hire or to make decisions regarding any other term, condition or privilege of employment.”

Illinois recently enacted the Artificial Intelligence Video Interview Act. Under the Act, effective January 2020, employers are required to notify applicants in writing and obtain their consent if AI may be used to analyze facial expressions during a job interview. Employers must also provide applicants with detailed information about the AI application and how it will be used to evaluate them. A 2021 proposed amendment to the law would, if passed, require employers that rely solely upon artificial intelligence in deciding whether to interview an applicant to report certain demographic information to the state. Several other states, including Massachusetts, New Jersey, and Vermont, have pending legislation that addresses AI discrimination in some capacity.

As AI becomes more common in the workplace, and especially in the hiring process, more cities and states (and possibly the federal government) will enact regulations governing its use. Employers who use AI in the hiring process should monitor these developments and pay close attention to the algorithms underlying their AI applications. At a minimum, employers should see that these algorithms and applications are audited or monitored to prevent the inadvertent disqualification of a particular protected group of applicants. AI offers many benefits for employers, but the technology is new and constantly changing, as are the laws that govern it.

 

Amber M. Rogers

Hunton Andrews Kurth LLP

Amber M. Rogers is a partner in the Dallas office of Hunton Andrews Kurth LLP.

Michael Reed

Hunton Andrews Kurth LLP

Michael Reed is an associate in the Houston office of Hunton Andrews Kurth LLP.