Employers also use AI to evaluate, accept, and reject job candidates based on their use of social media. The recruitment tool sold by one vendor, Signal-Hire, displays candidates’ postings and judges whether an applicant would be likely to move to another company.
Employers can go deeper by using AI to review candidates’ social media activity for possibly troublesome signs. One algorithm offered by Icon Consultants can quickly scan social media, score a candidate on their tendencies toward certain behaviors, and analyze “the tendency for the post to promote violence, racism, sexism, and bullying, just to name a few.” Another AI developer, Humantic, says its tool creates candidates’ personality profiles, ranking them on various traits and advising recruiters on how to deal with particular applicants. Companies that sell AI evaluation tools, such as Cappfinity and Pymetrics, say that their algorithms can fairly and reliably evaluate applicants.
Job seekers who make it past an initial AI screening can find that their job interview will be conducted by an algorithm. Applicants respond to questions that appear as text on the screen or are posed by a digital voice. Two MIT studies of interview platforms from two vendors found gross inaccuracies in the responses the platforms reported to employers and in their predictions about candidates. For example, when one MIT tester answered an interview question by reading a Wikipedia entry in German, the algorithm rated her highly on the English-language skills essential to the position. HireVue’s AI interview tool, for its part, rates candidates on job competencies, including “soft” competencies such as communication skills, conscientiousness, problem-solving skills, team orientation, and initiative.
State Legislative Responses to the Use of AI in Employment Decisions
State lawmakers have responded to the rise of AI in employment decisions with legislation requiring companies to give candidates and employees prior notice that an AI tool will be used and allowing them to opt out of its use.
The National Conference of State Legislatures tracks AI-related legislative efforts. Approaches to Regulating Artificial Intelligence: A Primer (ncsl.org) (updated as of January 12, 2024). In the 2023 legislative session, 25 states, Puerto Rico, and the District of Columbia introduced artificial intelligence bills, and 18 states and Puerto Rico passed AI-related legislation or resolutions. Legislation is pending in many jurisdictions, including California, Illinois, Maine, Massachusetts, New Jersey, New York (with many bills), Rhode Island, Washington, and the District of Columbia.
A 2023 New York City law requires employers to notify applicants and employees who “reside in” New York City that an Automated Employment Decision Tool (AEDT) will be used in the employer’s decision and what job qualifications and characteristics the tool will assess; the individual can request an alternative evaluation method. The law also requires employers to conduct regular bias audits of their AI employment tools and publish the results on their websites. Illinois’ Artificial Intelligence Video Interview Act requires an employer that wants to use an AI-based recorded job interview for an Illinois-based position to explain to the applicant how the tool will be used and obtain the applicant’s consent. At the applicant’s request, the employer must delete the interview. Maryland likewise requires a job applicant’s consent before AI facial analysis software may be used in an interview. The California Privacy Rights Act establishes the California Privacy Protection Agency and directs it to issue regulations governing access and opt-out rights concerning businesses’ use of automated decision-making technology, including profiling. Cal. Civ. Code §1798.185(a)(16).
The Federal Response
The EEOC has been highly active in this area. Its AI initiative seeks to ensure that AI tools used in employment decisions comply with federal civil rights laws, and the agency has issued technical assistance guidance on AI in employment decision making under both Title VII and the ADA. In 2022, the White House issued its Blueprint for an AI Bill of Rights, which rests on five principles: (1) AI systems should be safe and effective; (2) AI systems should not contribute to discrimination and should be independently audited; (3) employers should provide protections for data privacy; (4) people affected by AI should receive plain-language notices explaining how it will be used; and (5) people should be able to opt out of the use of AI tools.
Employment Litigation
Plaintiffs have brought cases in several jurisdictions claiming that employers’ use of AI is discriminatory or otherwise violates state laws.
In some cases, vendors that sell AI tools to employers may be liable under anti-discrimination statutes. In an ongoing class action claiming that an AI tool unlawfully screens out Black job applicants, the California Supreme Court ruled that the vendor that developed the tool could be liable as the company’s agent under the California Fair Employment and Housing Act. Raines v. U.S. HealthWorks Medical Group, 15 Cal. 5th 268 (August 21, 2023); see also 2023 U.S. App. LEXIS 27666 (9th Cir. October 18, 2023). In another California class action against an AI provider, the plaintiffs claim that the vendor functions as an employment agency and uses AI tools that incorporate human bias to screen out applicants by age and race. Mobley v. Workday, Inc., No. 23-cv-00770 (N.D. Cal. 2023). The court heard argument on the defendant’s motion to dismiss in January 2024.
In Illinois, a class action against a vendor of AI tools alleges that the company uses a facial recognition tool in job interviews without obtaining informed consent as required by the Illinois Biometric Information Privacy Act. Deyerler v. HireVue Inc., Case No. 2022CH00719 (Cook Cty. Cir. Ct. 2022).
A class action in Massachusetts alleges that a tech company selling AI interview tools violates the Massachusetts law prohibiting employers from using lie detector tests. The defendant has moved to dismiss the complaint. Baker v. CVS Health Corporation, No. 23-cv-11483.
Last September, the EEOC settled a case it had brought against a tutoring company, alleging that the firm’s AI screening tool automatically rejected qualified older candidates because of their age. The explosion in the workplace use of AI tools will continue to be met with legislation to regulate it and litigation to challenge it. In the meantime, employers should be cautious, and employees should be alert.