June 07, 2024

As Employers Rely on AI in the Workplace, Legislators and Plaintiffs Push Back

Jonathan Ben-Asher

The workplace of 2024 would be unrecognizable to a visitor from 1990. Beyond widespread remote work and internet use, they would be flummoxed by how employers and employees use and interact with the algorithms of artificial intelligence.

As employers increasingly use AI to handle tasks currently done by human beings, the impact on global employment is likely to be huge, particularly because of the tremendous power of generative AI tools like ChatGPT. A 2023 Goldman Sachs study concluded that about two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute for up to one-fourth of current work. Globally, AI could expose the equivalent of 300 million full-time jobs to automation.

Many employers have leapt at the chance to use AI to recruit, screen, interview, and rate job candidates. A 2022 study by Eightfold AI found that almost three-quarters of U.S. companies were using AI to recruit and hire employees and manage their performance. And a Mercer survey found that in 2021 half of U.S. companies were using AI to determine if there were pay inequities by race or gender.

Employers argue that in these contexts, AI saves time and reduces costs, is better than humans at predicting performance and evaluating job candidates, and, unlike humans, is not subject to bias in decision making. Employee advocates argue that AI presents significant problems for job seekers and employees and can erect discriminatory barriers.

AI in Sourcing and Hiring

As a first step in recruitment, employers use AI to find candidates who might be in the job market. Ninety percent of recruiters look for candidates on LinkedIn, which lists more than 50 million companies. Two-thirds of employers research potential candidates using social media, and a recent Apollo Technical survey found that more than half had disqualified a candidate because the employer disagreed with something in the candidate’s social media profile.

LinkedIn and Facebook enable companies advertising job openings to target specific audiences, using the advertiser’s criteria and the platform’s algorithms to decide who sees which ads. One study identified several ways that a platform can skew an ad so that it potentially has a discriminatory impact. Auditing for Discrimination in Algorithms Delivering Job Ads, International World Wide Web Conference Committee, 2021, https://ant.isi.edu/datasets/addelivery/Discrimination-Job-Ad-Delivery.pdf

Employers also use AI to evaluate, accept, and reject job candidates based on their use of social media. The recruitment tool sold by one vendor, Signal-Hire, displays candidates’ postings and judges whether the applicant would be likely to move to another company.

Employers can go deeper by using AI to review candidates’ social media activity for possibly troublesome signs. One algorithm offered by Icon Consultants can quickly scan social media, score a candidate on their tendencies toward certain behaviors, and analyze “the tendency for the post to promote violence, racism, sexism, and bullying, just to name a few.” Another AI developer, Humantic, says its tool creates candidates’ personality profiles, ranks them on various personality traits, and gives advice on how to deal with particular applicants. Companies that sell AI evaluation tools, such as Cappfinity and Pymetrics, say that their algorithms can fairly and reliably evaluate applicants.

Job seekers who make it past an initial AI screening can find that their job interview will be conducted by an algorithm. Applicants respond to questions which appear in text on the screen or are posed by a digital voice. Two MIT studies of interview platforms from two vendors found gross inaccuracies in the responses they reported to employers and their predictions about the candidate. For example, when one MIT tester responded to an interview question by reading a Wikipedia entry in German, the algorithm rated her highly in the English language skills essential to the position. And HireVue’s AI interview tool rates candidates on job competencies, including “soft” competencies like communication skills, conscientiousness, problem-solving skills, team orientation, and initiative.

State Legislative Responses to the Use of AI in Employment Decisions

State lawmakers have responded to the rise of AI in employment decisions with legislation requiring companies to give candidates and employees prior notice of the use of an AI tool and allowing employees to opt out of the tool.

The National Conference of State Legislatures tracks AI-related legislative efforts. Approaches to Regulating Artificial Intelligence: A Primer (ncsl.org) (updated as of January 12, 2024). In the 2023 legislative session, 25 states, Puerto Rico, and the District of Columbia introduced artificial intelligence bills, and eighteen states and Puerto Rico passed AI-related legislation or resolutions. Legislation is pending in many states, including California, Illinois, Maine, Massachusetts, New Jersey, New York (with many bills), Rhode Island, and Washington, as well as in the District of Columbia.

A 2023 New York City law requires that employers notify applicants and employees who “reside in” New York City that an Automated Employment Decision Tool (AEDT) will be used in the employer’s decision and what job qualifications and characteristics the tool will use; the individual can request another method of doing the evaluation. The law also requires employers to conduct regular bias audits of their AI employment tools and include the results on their websites. Illinois’ Artificial Intelligence Video Interview Act requires employers who want to use an AI-based recorded job interview for an Illinois-based job to explain to the applicant how the tool will be used and get the applicant’s consent. At the applicant’s request, the employer must delete the interview. Maryland also requires a job applicant’s consent for the use of AI facial analysis software in an interview. California’s Privacy Rights Act establishes the Privacy Protection Agency and directs it to issue regulations governing access and opt-out rights concerning businesses’ use of automated decision-making technology, including profiling. Cal. Civ. Code §1798.185(a)(16).

The Federal Response

The EEOC has been highly active in this area. Its AI initiative seeks to ensure that AI tools used in employment decisions comply with federal civil rights laws. The EEOC has issued technical assistance guidance on AI in employment decision making concerning both Title VII and the ADA.

In 2022, the White House issued its Blueprint for an AI Bill of Rights. It is based on five principles:

1. AI systems should be safe and effective;
2. AI systems shouldn’t contribute to discrimination, and should be independently audited;
3. Employers should provide protections for data privacy;
4. People impacted by AI should receive plain-language notices explaining how it will be used; and
5. People should be able to opt out of the use of AI tools.

Employment Litigation

Plaintiffs have brought cases in several jurisdictions claiming that employers’ use of AI is discriminatory or otherwise violates state laws.

In some cases, vendors that sell AI tools to employers may be liable under anti-discrimination statutes. In an ongoing class action claiming that an AI tool unlawfully screens out Black job applicants, the California Supreme Court ruled that the vendor which developed the tool could be liable as the company’s agent under the California Fair Employment and Housing Act. Raines v. U.S. Healthworks Medical Group, 15 Cal. 5th 268 (August 21, 2023); see also 2023 U.S. App. LEXIS 27666 (9th Cir. October 18, 2023). In another California class action against an AI provider, plaintiffs claim that the vendor functions as an employment agency and uses AI tools that incorporate human bias to screen out applicants by age and race. Mobley v. Workday, Inc., No. 23-cv-00770 (N.D. Cal. 2023). The court heard argument on the defendant’s motion to dismiss in January 2024.

In Illinois, a class action against a vendor of AI tools alleges that the company uses a facial recognition tool in job interviews without getting informed consent as required by the Illinois Biometric Privacy Protection Act. Deyerler v. HireVue Inc., Case No. 2022CH00719 (Cook Cty. Cir. Ct. 2022).

A class action in Massachusetts alleges that a tech company selling AI interview tools violates the Massachusetts law prohibiting employers from using lie detector tests on employees. The defendant has moved to dismiss the complaint. Baker v. CVS Health Corporation, No. 23-cv-11483.

Last September, the EEOC settled a case it had brought against a tutoring company, alleging that the firm’s AI hiring tool automatically rejected qualified older candidates because of their age.

The explosion in the workplace use of AI tools will continue to be met with legislation to regulate it and litigation to challenge it. In the meantime, employers should be cautious, and employees should be alert.

Jonathan Ben-Asher is a partner at Ritz Clark & Ben-Asher in New York City, where he represents employees.

The material in all ABA publications is copyrighted and may be reprinted by permission only.