
Human Rights Magazine


Hiring Discrimination by Algorithm: A New Frontier for Civil Rights and Labor Law

Lydia X. Z. Brown

Summary

  • Industry leaders claim that automated hiring tools increase equity, but they fail to acknowledge what advocates for technology justice and scholars of science, technology, and society have long warned: technology does not exist apart from the social, cultural, and political context in which it is created.
  • Disability and age-based discrimination in employment share common patterns.
  • The Equal Employment Opportunity Commission named discriminatory hiring tech as an enforcement priority in its Strategic Enforcement Plan for 2023-2027.
  • The Civil Rights Standards for 21st Century Employment Selection Procedures offer specific protocols that employers and developers may consider implementing to help prevent discriminatory effects of their automated hiring tools.

In 2012, Kyle Behm applied for a job at a Kroger supermarket that required him to take a personality test. He answered the questions truthfully without thinking much of it, and he was rejected from one job after another. After his father, corporate attorney and former in-house counsel Roland Behm, helped him take a screen capture of the questions in one of the tests, they discovered that the company was using questions taken directly from a psychological screening instrument. Kyle Behm had been diagnosed with bipolar disorder. When a person without bipolar disorder completed the personality test with truthful answers, their application was advanced. Behm and his father filed several Equal Employment Opportunity Commission (EEOC) complaints charging unlawful discrimination on the basis of mental disability. Several of the companies Behm named ultimately agreed to revise or drop the questions they asked.

As high school students in the late 2000s, my classmates and I similarly discovered that job applications had a new feature—mandatory online personality tests like the quizzes we took for fun and shared all over Facebook. These personality tests bore less resemblance to quizzes assessing which anime character you most resembled than to the kind of neuropsychological assessments used to diagnose a range of cognitive, developmental, and psychosocial disabilities. We attacked them with the same snarky enthusiasm, though, and submitted one application after another for a range of generally low-paid, hourly jobs, mostly in retail and restaurants. One by one, my classmates, neighbors, and sister all received job offers. But despite applying for jobs as varied as tutor, library assistant, IT support, retail associate, and office administrator, I was interviewed only once, and I wouldn’t be hired anywhere until the summer after I graduated from high school. The employer was a tiny nonprofit that used a small grant to pay me for less than three weeks of work. My next regularly paying job wouldn’t start for more than another year, until my sophomore year of college.

I never knew for sure what characteristics of my resume, application forms, or job search strategy landed dozens and dozens of my applications in the reject bin. Years later, working as policy counsel at the Center for Democracy and Technology (CDT) on algorithmic bias and disability rights, I learned that automated personality tests were only one of an increasingly wide array of automated hiring tools available to recruiters and hiring managers, who now use these technologies to screen candidates in every sector, at every level of seniority, and for every type of job function. These tools tend to have an outsized discriminatory effect on job seekers with all types of physical and mental disabilities, including workers with aging-related disabilities. Truthfully answering the questions in the personality tests I took as part of job applications might have revealed responses consistent with the diagnostic criteria for depression, anxiety, and autism. Of course, there was no way the hiring managers at the companies where I applied could have known that I was diagnosed as autistic at 13 years old.

Hiring discrimination has long impacted people in marginalized groups, but now discrimination can happen via algorithm without direct human involvement at all. After years of community advocacy against discriminatory hiring algorithms, the EEOC reached a first-of-its-kind settlement in August 2023 with iTutorGroup, a tutoring company, for programming its automated hiring tool to automatically reject female applicants aged 55 and older and male applicants aged 60 and older, in violation of the Age Discrimination in Employment Act. This settlement comes on the heels of individual charges brought against employers for other discriminatory uses of automated hiring tools, like the ones Kyle Behm and his father brought, and signals a new frontier for labor law.

According to the industry association Society for Human Resource Management, about 79 percent of employers were using some kind of automated tool in their hiring process as of February 2022—and that was before generative artificial intelligence (AI) tools like ChatGPT were in the headlines. Proponents of automated hiring tools may argue that they increase efficiency, preventing hiring managers from having to sort through hundreds or even thousands of resumes from unqualified candidates, bringing only the best-qualified candidates to their attention instead. More insidiously, industry leaders claim that automated hiring tools increase equity by neutralizing the human factor in biased, discriminatory treatment.

These claims fail to acknowledge what advocates for technology justice and scholars of science, technology, and society have long warned: technology, even and especially algorithmic technology, does not exist apart from the social, cultural, and political context in which it is created. As scholar-activists like Ruha Benjamin, Safiya Noble, and Timnit Gebru have uncovered in varied applications, algorithmic technologies are built on and reflect the pre-existing biases and prejudices of the people and companies that create and purchase them. They also reflect biases in the data used to train them. One investigation of an automated resume screening algorithm, for instance, found that the two characteristics the algorithm most strongly associated with successful job performance were having the first name Jared (a name coded as white and male) and having played high school lacrosse (a sport that often connotes access to wealth privilege).

The iTutorGroup settlement revolves around the use of a resume screening algorithm rather than a personality test used to automatically filter applicants. These algorithms can be programmed to advance or reject candidates for any characteristics believed by hiring managers to indicate the likelihood of success or the presence of qualifications for a job. The algorithm could, for instance, screen a resume for a legal aid staff attorney job for language indicating active bar admission in at least one U.S. jurisdiction as well as possession of a J.D. or LL.M. degree and automatically reject any applicants who would not meet an active licensure requirement. It could also screen a resume for desired supervision experience when considering candidates for a store’s general manager position.

But the algorithm could also deprioritize candidates with language in their resumes indicating involvement with women’s, LGBTQ, or racially or culturally marginalized group initiatives, such as volunteer experience with a transgender advocacy group, a scholarship from a Sikh organization, or membership in a Black women’s sorority. The algorithm might also penalize candidates for characteristics that are not facially discriminatory but that nonetheless produce a disparate impact. It might penalize candidates for long gaps between jobs, which are more likely for members of marginalized communities facing discrimination in the workplace, as well as for people who are pregnant, returning from prison, or re-entering the workforce after long-term disability. It could also penalize candidates for lacking leadership experience, even though people from marginalized communities might be less likely to obtain that very experience due to discrimination and exclusionary workplace cultures. Even worse, the algorithm could decide from past data that workers from marginalized groups are less qualified and less likely to be successful precisely because similarly situated workers were more likely to face discrimination in hiring and promotions in the past.
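To make that mechanism concrete, the sketch below shows, in simplified and entirely hypothetical Python, how a keyword screener built around facially neutral rules can auto-reject a qualified candidate. The required phrases, field names, and six-month threshold are invented for illustration and do not describe any actual vendor’s product.

```python
# A minimal, hypothetical sketch of keyword-based resume screening.
# It is not any vendor's actual product; the phrases, fields, and
# threshold below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Resume:
    text: str
    months_since_last_job: int


# Facially job-related requirements for a legal aid staff attorney role.
REQUIRED_PHRASES = ["active bar admission", "J.D."]

# A facially neutral rule that disproportionately rejects people who are
# pregnant, returning from prison, or re-entering work after disability.
MAX_EMPLOYMENT_GAP_MONTHS = 6


def screen(resume: Resume) -> bool:
    """Return True if the resume advances, False if it is auto-rejected."""
    text = resume.text.lower()
    if not all(phrase.lower() in text for phrase in REQUIRED_PHRASES):
        return False
    return resume.months_since_last_job <= MAX_EMPLOYMENT_GAP_MONTHS


if __name__ == "__main__":
    candidate = Resume(
        text="J.D., active bar admission in New York; legal aid experience",
        months_since_last_job=14,  # a long gap, e.g., after disability leave
    )
    # Prints False: the candidate meets every stated qualification but is
    # rejected by the gap rule before a human ever sees the application.
    print(screen(candidate))
```

The point of the sketch is that nothing in the code mentions disability, pregnancy, or incarceration; the disparate impact is produced entirely by a proxy variable.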

In a report for the CDT, Ridhi Shetty and Michelle Richardson identified several emerging themes in algorithmic discrimination against people with disabilities in hiring. Some automated hiring tools—like gamified tests that assume a neurotypical, sighted candidate with an ordinary range of motion—are outright inaccessible for users with disabilities, and employers may not provide adequate alternative assessments, modifications, or accommodations. Others—like resume screening software and automated video and voice analysis programs—tend to “screen out” otherwise qualified applicants with disabilities in violation of Title I of the Americans with Disabilities Act (ADA). Then there are hiring algorithms that function as unlawful medical examinations, like personality tests that use the same questions as clinical psychologists’ diagnostic questionnaires. Disability is a broad category encompassing many specific conditions and experiences, including acquired, episodic, temporary, and aging-related disabilities. Of course, not all older people have a readily identifiable disability, let alone one that would qualify for ADA protections.

Disability discrimination often co-occurs and overlaps with age-related discrimination against elders and youth alike, even for young people and elders who do not otherwise have disabilities. Disabled activists have fought against paternalism and the presumption of incompetence for decades as part of the disability rights movement and the newer, intersectional disability justice movement. Youth liberation and elder rights activists name similar themes when young or old age is deployed as a marker of presumed incapacity and incompetence. This societal presumption is entrenched in the legal institution of guardianship or conservatorship, which ostensibly exists to protect vulnerable people from predatory and exploitative behavior. Disability rights, youth rights, and elder rights groups ranging from the National Youth Rights Association to the National Disability Rights Network and Justice in Aging have all roundly critiqued the imposition of guardianship—or, in the case of youth, the legal framework of minority—as unnecessarily oppressive and often abusive.

Disability and age-based discrimination in employment also share common patterns. Disabled, young, and aging workers all face prejudicial perceptions of laziness, irresponsibility, and unreliability. A worker with a disability may be presumed prone to higher absenteeism, even though people with disabilities tend to accrue fewer absences on average than people without disabilities. Younger and aging workers may be judged more likely to leave a job sooner—to pursue another job or education, in the case of younger workers, or to retire or seek treatment for aging-related illness or disability, in the case of older workers. In fighting against hiring discrimination, advocates for disabled, young, and older people should be natural allies.

Similarly, disability justice advocates and scholars such as Sami Schalk, Jina Kim, and Leah Lakshmi Piepzna-Samarasinha have extensively described the ways in which environmental racism, police violence, and exploitative labor conditions coalesce to generate higher rates of disability and chronic illness in communities of color, in the LGBTQ community, and among poor people. Further, some of these experiences of disability are particular and specific to marginalized communities—such as compounded, complex trauma stemming from a lifetime of dealing with racism or anti-transgender discrimination, or workplace injuries and long-term illness accrued from hazardous work in garment factories or natural resource extraction. Employment discrimination also highlights the danger of economic inequality—without a reliable job and just working conditions, workers in the United States may struggle to access comprehensive, stable health care, housing, and food.

The discriminatory patterns evidenced in automated hiring tools can impact people in every marginalized community, with a compounded negative effect on those who belong to more than one marginalized group. Existing civil rights laws that have formed the cornerstone of protections against employment discrimination provide a strong foundation for articulating claims of unlawful discrimination because of race, gender, age, disability, and other protected characteristics. Yet advocates continue to demand improved implementing regulations and sub-regulatory guidance, as well as new statutory protections aimed specifically at curbing discriminatory use of AI and algorithmic decision-making tools.

In November 2021, the New York City Council passed its Automated Employment Decision Tool (AEDT) law, which became effective on January 1, 2023. In April 2023, New York City published its final rules implementing the law. But despite receiving widespread praise for passing a first-of-its-kind law on AI hiring discrimination, the City Council failed to consult civil rights advocates, let alone directly impacted workers, in the drafting and markup process. That lack of engagement shows up in the final law’s language.

The law stipulates only that employers must conduct third-party bias audits for discrimination based on race, ethnicity, or sex. This language could enable employers to avoid auditing for discrimination based on disability, age, or multiple vectors of marginalized identity. The law also fails to require employers to provide sufficient information about how their automated tools function, what information they collect, or how they assess that information. Withholding this critical information from candidates with disabilities can prevent them from knowing whether they should request an accommodation or alternative assessment. Most concerning for labor rights advocates, however, is that companies might advertise their automated tools’ compliance with New York’s AEDT law, further flooding the market with tools that comply with the letter of the law but nonetheless discriminate against marginalized workers.
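To give a sense of what such a bias audit measures, here is a minimal, hypothetical sketch of an impact-ratio calculation, each group’s selection rate divided by the highest group’s rate, the kind of summary statistic an audit might report. The group labels and counts are invented; a real audit would use an employer’s actual hiring data and the categories the regulations specify.

```python
# A minimal, hypothetical sketch of the selection-rate comparison a bias
# audit might report. The group labels and counts are invented.

from typing import Dict, Tuple


def impact_ratios(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Divide each group's selection rate by the highest group's rate.

    `outcomes` maps a group label to (candidates_advanced, candidates_screened).
    Ratios well below 1.0 flag possible disparate impact for that group.
    """
    rates = {
        group: advanced / screened
        for group, (advanced, screened) in outcomes.items()
    }
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}


if __name__ == "__main__":
    hypothetical_results = {
        "group_a": (120, 400),  # 30 percent advanced by the tool
        "group_b": (45, 300),   # 15 percent advanced by the tool
    }
    # Prints {'group_a': 1.0, 'group_b': 0.5}: group_b advances at half the
    # rate of group_a, a disparity the audit would surface, but only if the
    # law required auditing across that particular category.
    print(impact_ratios(hypothetical_results))
```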

New York’s choice to address discrimination in automated hiring tools is a positive move, despite the flaws in the final bill. More state and local lawmakers may follow suit. Federal regulators have also taken note of the emerging issues around civil rights and automated recruitment and hiring tools. Beyond the recent settlement with iTutorGroup, the EEOC is moving in the right direction. Last year, the EEOC published joint guidance with the Department of Justice outlining the legal vulnerabilities of potentially discriminatory automated hiring technologies when evaluating disabled job candidates. Early in 2023, the EEOC named discriminatory hiring tech as an enforcement priority in its Strategic Enforcement Plan for 2023-2027.

In December 2022, a coalition of civil rights organizations including the American Civil Liberties Union, CDT, and the American Association of People with Disabilities, among others, authored the Civil Rights Standards for 21st Century Employment Selection Procedures. These standards offer specific protocols that employers as well as developers may consider implementing to better prevent discriminatory effects of their automated hiring tools. The protocols include specific auditing before adoption and throughout the life cycle of a tool’s use, as well as recommendations for disclosing audit results transparently in language that ordinary people can understand. They also address specific means by which job seekers might request accommodations or opt out of an automated tool’s use altogether while still receiving fair consideration for the job. The recommendations further caution against the use of data that might serve as a proxy for a protected class, such as a zip code, which can correlate with socioeconomic status as well as race, ethnicity, or disability status. But these standards—endorsed by several organizations focused on technology justice and disability rights—are not yet codified into law at the federal or state level.

Of course, there is a long way to go in addressing the dangerous discriminatory impact of algorithmic decision-making systems in all aspects of hiring and employment decisions. As new stories emerge, civil rights advocates may bring further actions against discriminatory recruitment, hiring, promotion, compensation, and workplace monitoring and discipline practices that rely on algorithmic technologies. Civil rights attorneys and advocates often work in silos that inhibit effective alliances to address shared problems and common goals. Algorithmic decision-making tools represent a new frontier in anti-discrimination advocacy, but technologically enabled harms only reflect and amplify existing discriminatory attitudes, policies, and practices. As more workers come forward to share their experiences with discriminatory technologies, advocates for civil rights and social justice should work in concert to address the deeply, inextricably interconnected harms of technologically accelerated race, gender, age, and disability-based discrimination. Unaddressed algorithmic harm in employment will only further entrench existing economic inequalities, as reflected in indicators like employment rates, income, and net wealth for those in marginalized communities.

Kyle Behm’s story features prominently in the 2021 HBO documentary Persona, which explored the troublesome and legally questionable use of personality testing instruments in employment. Behm and his father consulted on the production, but Behm died by suicide in 2019, two years before the documentary was released. His father continues to advocate against mental health- and disability-based discrimination by algorithm. But most marginalized workers negatively affected by a hiring algorithm may never file an EEOC claim or speak to a reporter about their experiences—at least not while employers have no obligation to provide any meaningful notice, explanation, or opt-out process to job seekers, no obligation to have external experts regularly audit their software for discriminatory impact, and no obligation to report the results of those audits and remediate accordingly.

As for me, I hope I never encounter another automated personality test when applying for a job again—and if I do, I plan to lie.