June 10, 2022

Artificial Intelligence in the Workplace

The Future is Now

Lindsey Wagner

Many may think of “artificial intelligence in the workplace” as a day when robots are roaming office hallways, replacing the jobs of coworkers. And while that sort of technological revolution may transpire in the not-so-distant future, the reality is that artificial intelligence (“AI”) has already infiltrated most workplaces in the United States.

But AI is not just being utilized to replace certain jobs; AI is deciding who will hold those jobs. The Chair of the Equal Employment Opportunity Commission (EEOC), Charlotte Burrows, has said that more than 80% of employers are using AI in some form in their work and employment decision-making. And with the move towards remote hiring and work spawned by the COVID-19 pandemic, federal, state, and local governments are racing to keep up with regulations to address AI’s proliferation in the workplace.

In a workplace setting, predictive algorithms, often built on “supervised machine learning,” a subfield of AI, perform tasks like analyzing resumes, predicting job performance, or even conducting facial analysis in interviews to evaluate a candidate’s stability, optimism, or attention span.

While supervised machine learning is meant to streamline workplace processes and has been celebrated for its potential to enable fairer employment practices (avoiding biases or subjectivity that may be inherent in the human subconscious), it has also been criticized for its ability to enable systemic discrimination and replicate human biases.

With supervised machine learning, programmers set up algorithms based on historical data sets (called training data). From this historical data, the algorithm learns a model that can be used to make predictions when presented with new data. For example, to identify successful future hires, a programmer may develop an algorithm based on resumes from past successful candidates. The algorithm could be trained to learn word patterns in the resumes (rather than skill sets) to determine an applicant’s suitability for a company. In theory, the algorithm should then streamline the company’s hiring process by identifying individuals whose resumes share attributes with those of historically successful candidates (benchmark resumes), which would indicate that these new candidates might also succeed at the company.
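As an illustration only, the following Python sketch shows what such a supervised-learning pipeline might look like using scikit-learn’s standard text-classification tools; the resumes, hiring labels, and output score are entirely hypothetical:

```python
# A minimal sketch of the kind of supervised-learning pipeline described
# above; the resume texts and hiring outcomes are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: resumes of past candidates, labeled 1 if the candidate
# was ultimately successful at the company and 0 otherwise.
resumes = [
    "captain, varsity lacrosse team; sales internship at regional bank",
    "president, chess club; software internship; dean's list",
    "women's bowling league treasurer; retail management experience",
    "volunteer tutor; call center supervisor",
]
hired = [1, 1, 0, 0]  # historical outcomes (the "benchmark")

# The model learns word patterns associated with past successful hires,
# not the underlying skills themselves.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(resumes, hired)

# Scoring a new applicant: the model estimates how similar the resume
# is to historically successful ones.
new_resume = ["women's soccer club captain; sales internship"]
print(model.predict_proba(new_resume)[0][1])  # predicted "success" score
```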

But a problem could arise, for example, if the benchmark resumes used for the algorithm were derived from candidates of a predominant gender, age, national origin, race, or other group, and thus might exclude words that are commonly found in the resumes of a minority group. Take, for instance, a situation where the benchmark resumes came from a predominantly Caucasian male group. The algorithm may downgrade terms that were not commonly included in the benchmark resumes, like “women” (such as in “women’s bowling league”), so that resumes containing those terms score lower than resumes from individuals belonging to the same group as the benchmark candidates, creating a disparate impact.
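Continuing the hypothetical sketch above, inspecting the model’s learned word weights is one way to see this “downgrading” directly; again, the data are invented for illustration:

```python
# Inspecting learned word weights in the hypothetical model above:
# words with negative weights lower a resume's score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain, varsity lacrosse team; sales internship at regional bank",
    "president, chess club; software internship; dean's list",
    "women's bowling league treasurer; retail management experience",
    "volunteer tutor; call center supervisor",
]
hired = [1, 1, 0, 0]  # benchmark outcomes drawn from a non-diverse group

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# If the benchmark group rarely used a term like "women", the model can
# learn to penalize it even though it says nothing about job skills.
for word, weight in zip(vectorizer.get_feature_names_out(), clf.coef_[0]):
    if weight < 0:
        print(f"{word}: {weight:.3f}")
```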

While litigation over allegations of disparate impact caused by the use of algorithms has so far been most active in Fair Housing Act claims, with the increased use of AI in the workplace, employers should anticipate and prepare for these types of claims in the future.

While there is currently no federal law or regulation that specifically governs AI in the workplace, in May 2022 the EEOC published guidance aimed at helping U.S. employers navigate compliance with the Americans with Disabilities Act (ADA) while using AI in the workplace. The same day, the Department of Justice posted its own guidance regarding AI-related disability discrimination. Both sets of guidance outline potential ways AI and automated hiring tools could violate the ADA. Similarly, and more generally, at least sixteen states have introduced bills or resolutions relating to artificial intelligence in the workplace, all at different stages of the legislative process and paving a path for others.

But until such regulations are implemented, biased algorithms could create a widespread disparate impact on certain groups of candidates or employees, even without an employer’s intention to discriminate. These issues are further compounded when an employer engages a third-party vendor to program the algorithms, as the vendor’s code may be protected as a trade secret and thus not made transparent to the affected employees, applicants, or even the employer.

Algorithms can be even more problematic when their processes are confidential: even a company that intends its hiring process to be non-discriminatory may not have sufficient information about the data sets its algorithms use to ensure the removal of inputs that could result in a disparate impact on protected individuals.

The lack of transparency in such technological advances has led many states and cities to introduce legislation aimed at combating the discriminatory impact of these tools in the workplace. Commonly, these regulations require that employers who use automated employment decision tools ensure that the computational process has undergone a bias audit.

What is a Bias Audit?

Generally, in the context of machine learning or artificial intelligence tools, a bias audit is an impartial evaluation that tests the tool’s disparate impact upon protected classes (such as race, ethnicity, sex, or disability).
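One widely used statistical screen in such audits compares each group’s selection rate to the most-selected group’s rate (the “adverse impact ratio”) against the EEOC’s “four-fifths” rule of thumb from the Uniform Guidelines on Employee Selection Procedures. The Python sketch below is illustrative only, with hypothetical selection data:

```python
# A minimal sketch of one common bias-audit check: the adverse impact
# ratio, measured against the EEOC's four-fifths rule of thumb.
# The selection outcomes below are hypothetical.
from collections import Counter

# (group, selected) outcomes produced by a hypothetical screening tool.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, was_selected in outcomes if was_selected)

# Selection rate per group, compared to the highest group's rate.
rates = {group: selected[group] / applied[group] for group in applied}
highest_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest_rate
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```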

In 2019, Illinois led the way with one of the country’s first AI workplace laws, the Artificial Intelligence Video Interview Act, which applies to all employers that use an AI tool to analyze video interviews of applicants for positions based in Illinois. The law requires employers to make certain disclosures and obtain consent from applicants if they use AI-enabled video interview technology during the hiring process. It further requires employers that rely solely on AI to make certain interview decisions to maintain records of demographic data, including applicants’ race and ethnicity, and to submit that data annually to the state, which must analyze it to determine whether the use of AI produced racial bias. Maryland followed with a similar law in 2020, restricting employers’ use of facial recognition services during preemployment interviews unless the employer receives consent from the applicant.

On the heels of the Illinois and Maryland AI laws, the New York City Council passed legislation, effective January 1, 2023, specifically focused on regulating AI in typical human resources technology: it prohibits employers’ use of “automated employment decision tools” unless the tool has undergone a bias audit. Before such a tool is used to screen a candidate or employee for an employment decision, the employer must first notify the individual that the tool will be used, identify the job qualifications and characteristics that the tool will assess, and make publicly available on its website a summary of the bias audit and the distribution date of the tool. Upon notification, the candidate or employee has the right to request an alternative selection process or accommodation.

While forward-looking, these first AI laws have not been without criticism. Concerns include that they fail to provide a private right of action, may lack clear definitions of what technology qualifies as “artificial intelligence,” and may exclude members of some protected groups (for example, the NYC law requires an audit only for discrimination on the basis of race or gender).

This is why some have heralded pending legislation in the District of Columbia, the Stop Discrimination by Algorithms Act, which goes a step further by providing a private right of action for individual plaintiffs, with the potential for punitive damages and attorney’s fees. The D.C. legislation, if passed, would prohibit covered entities from making an algorithmic eligibility determination on the basis of an individual’s or class of individuals’ actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important employment opportunities unavailable to an individual or class of individuals.

In spring 2022, the California Fair Employment and Housing Council proposed revisions to the state’s non-discrimination laws with regard to employers or agencies that use or sell services involving AI, supervised machine learning, or automated-decision systems (ADS). The proposed regulations would expand employers’ recordkeeping obligations by requiring them to include machine-learning data in their records and, for employers or agencies using ADS, to retain records of the assessment criteria used by the ADS. At publication, the regulations are in the pre-rulemaking phase.

It is not only state and local governments that have set out to address AI in the workplace.

In May 2021, Senator Edward J. Markey (D-Mass.) and Congresswoman Doris Matsui (CA-06) introduced the Algorithmic Justice and Online Platform Transparency Act, which proposes a cross-government investigation into discriminatory algorithmic processes throughout the economy. The legislation goes beyond AI in the workplace, seeking, among other protections, to prohibit “algorithmic processes on online platforms that discriminate on the basis of race, age, gender, ability and other protected characteristics.” At publication, the bill is pending in committee.

Federal agencies have also taken aim at AI in the workplace. Following a December 2020 joint letter from ten U.S. senators calling on the EEOC to use its powers under Title VII of the Civil Rights Act of 1964 to “investigate and/or enforce against discrimination related to the use of” AI hiring technologies, the EEOC announced an initiative on AI in October 2021. As part of the initiative, the EEOC pledged to ensure that artificial intelligence and other emerging technology tools used for employment decisions comply with federal civil rights laws. That pledge resulted in the agency issuing its first technical assistance document, on AI in the workplace and its ADA implications, in May 2022, emphasizing that bias does not need to be intentional to be illegal.

Where Do We Go From Here? The Practical Considerations

With the EEOC’s and DOJ’s recent introduction of AI workplace guidance, employers now have guideposts for ensuring that their workplace practices are consistent with these federal policies.

Employers not yet operating in a capacity or location affected by enacted AI legislation can still look ahead and minimize exposure to litigation by ensuring that any algorithmic tools used in the workplace, including those from third parties, can pass a bias audit by an independent auditor. As more states and local governments propose and enact legislation governing the use of AI in the workplace, and as federal regulations develop, employers should stay apprised of developments that may affect the legality of algorithmic tools in the workplace.

Lindsey Wagner

Co-Chair, ABA Section of Labor and Employment Law Membership Development and Engagement Committee

Lindsey Wagner is a licensed attorney, mediator and workplace investigator in Florida, Ohio and California with Moxie Mediation. She is a Co-Chair of the ABA Section of Labor and Employment Law’s Membership Development and Engagement Committee.
