Many may picture “artificial intelligence in the workplace” as a day when robots roam office hallways, replacing their human coworkers. And while that sort of technological revolution may transpire in the not-so-distant future, the reality is that artificial intelligence (“AI”) has already infiltrated most workplaces in the United States.
But AI is not just being used to replace certain jobs; it is making the decisions about who will hold those jobs. Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows has said that more than 80% of employers are using AI in some form in their work and employment decision-making. And with the move toward remote hiring and work spawned by the COVID-19 pandemic, federal, state, and local governments are racing to adopt regulations that address AI’s proliferation in the workplace.
In a workplace setting, predictive algorithms, often built with “supervised machine learning” (a subfield of AI), are used to perform tasks like analyzing resumes, predicting job performance, or even conducting facial analysis in interviews to evaluate a candidate’s stability, optimism, or attention span.
While supervised machine learning is meant to streamline processes in the workplace and has been celebrated for its potential to enable fairer employment practices (avoiding the biases or subjectivity that may be inherent in the human subconscious), it has also been criticized for its ability to enable systemic discrimination and replicate human biases.
With supervised machine learning, programmers train an algorithm on historical data sets (called training data). From this historical data, the algorithm learns a model that can be used to make predictions when presented with new data. For example, to build an algorithm that identifies promising candidates for hire, a programmer may train it on resumes from past successful candidates. The algorithm could be trained to learn word patterns in those resumes (rather than skill sets) to determine an applicant’s suitability for a company. In theory, the algorithm should then streamline the company’s hiring process by identifying individuals whose resumes share attributes with those of historically successful candidates (benchmark resumes), suggesting that these new candidates might also succeed at the company.
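For readers who want a more concrete picture, the short sketch below shows how such a screening model might be built. It is a minimal illustration only; the resumes, hiring labels, and library choices (here, scikit-learn) are assumptions made for the example, not a description of any vendor’s actual tool.

```python
# Illustrative sketch only: a hypothetical resume-screening model trained on
# historical hiring outcomes. All resumes, labels, and names are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical "training data": resume text paired with past hiring outcomes
past_resumes = [
    "software engineer java leadership rugby team captain",
    "data analyst sql reporting chess club",
    "project manager budgeting women's bowling league",
    "software engineer python machine learning",
]
was_hired = [1, 1, 0, 1]  # 1 = hired in the past, 0 = not hired

# The model learns which word patterns correlate with past hiring decisions
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_resumes, was_hired)

# New applicants are then scored by how closely their resumes resemble the
# historically "successful" (benchmark) resumes
new_resumes = ["software engineer python data analysis"]
print(model.predict_proba(new_resumes)[:, 1])  # predicted probability of "success"
```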
But a problem could arise if, for example, the benchmark resumes used to train the algorithm came from candidates of a predominant gender, age, national origin, race, or other group, and thus omitted words that are commonly found in the resumes of a minority group. Take, for instance, a situation where the benchmark resumes came from a predominantly Caucasian male group. The algorithm may downgrade terms that were not commonly included in the benchmark resumes, like “women” (as in “women’s bowling league”), so that resumes containing those terms would score lower than resumes from individuals belonging to the same group as the benchmark candidates: a disparate impact.
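To see how that downgrading can play out, the following standalone sketch uses invented word weights (every number here is an assumption for illustration) to show two otherwise comparable resumes receiving different scores because of a single term associated with a protected group.

```python
import re

# Standalone illustration with invented numbers: word weights a model might
# learn when the benchmark resumes come from a predominantly male group.
learned_weights = {
    "engineer": 0.9, "python": 0.7, "leadership": 0.5,
    "rugby": 0.3,    # common in the benchmark resumes, so rewarded
    "women": -0.4,   # rarely seen in the benchmark resumes, so downgraded
}

def score(resume: str) -> float:
    """Score a resume by summing the weights of recognized words (a toy linear model)."""
    words = re.findall(r"[a-z]+", resume.lower())
    return sum(learned_weights.get(word, 0.0) for word in words)

# Two candidates with identical skills score differently because of one term
print(score("python engineer, rugby club"))          # 1.9
print(score("python engineer, women's rugby club"))  # 1.5
```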
While litigation over allegations of disparate impact caused by the use of algorithms has so far been most active in the Fair Housing Act context, with the increased use of AI in the workplace, employers should anticipate and prepare for these types of claims in the future.
While there are currently no federal laws or regulations that specifically govern AI in the workplace, in May 2022 the EEOC published guidance aimed at helping U.S. employers navigate compliance with the Americans with Disabilities Act (ADA) while using AI in the workplace. The same day, the Department of Justice posted its own guidance regarding AI-related disability discrimination. Both sets of guidance outline potential ways AI and automated hiring tools could violate the ADA. Similarly, and more generally, at least sixteen states have introduced bills or resolutions relating to artificial intelligence in the workplace, all at different stages of the legislative process and paving a path for others.
But until such regulations are implemented, biased algorithms could create a wide disparate impact on certain groups of candidates or employees, even without an employer’s intent to discriminate. These issues are further compounded when an employer engages a third-party vendor to program the algorithms, as the vendor’s code may be protected as a trade secret and thus not disclosed to the affected employees, applicants, or even the employer.
Algorithms can be even more problematic because their processes may be confidential: even if a company intends its hiring process to be non-discriminatory, it may not have sufficient information about the data sets used to train an algorithm to ensure the removal of inputs that could result in a disparate impact on protected individuals.
The lack of transparency in these technological advances has led many states and cities to introduce legislation aimed at combating the discriminatory impact of such tools in the workplace. Commonly, these regulations require that employers who use automated employment decision tools ensure that the computational process has undergone a bias audit.
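As a rough illustration of what such an audit can involve, the sketch below compares selection rates across applicant groups and flags any group whose impact ratio falls below the EEOC’s four-fifths rule of thumb. The data, group labels, and threshold are illustrative assumptions, not the legal standard of any particular jurisdiction.

```python
# Illustrative bias-audit sketch: compare selection rates across groups for an
# automated screening tool. Data are invented; real audits follow the specific
# requirements of the applicable law or regulation.
from collections import Counter

# (group, selected_by_tool) pairs for a hypothetical applicant pool
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, chosen in outcomes if chosen)

rates = {group: selected[group] / applied[group] for group in applied}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```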