In a report for the CDT, Ridhi Shetty and Michelle Richardson identified several emerging themes in algorithmic discrimination against people with disabilities in hiring. Some automated hiring tools—like gamified tests that assume a neurotypical, sighted candidate with an ordinary range of motion—are outright inaccessible to users with disabilities, and employers may not provide adequate alternative assessments, modifications, or accommodations. Others—like resume screening software and automated video and voice analysis programs—tend to “screen out” otherwise qualified applicants with disabilities in violation of Title I of the Americans with Disabilities Act (ADA). Then there are hiring algorithms that function as unlawful medical examinations, like personality tests that use the same questions as clinical psychologists’ diagnostic questionnaires. Disability is a broad category encompassing many specific conditions and experiences, including acquired, episodic, temporary, and aging-related disabilities. Of course, not all older people have a readily identifiable disability, let alone one that would qualify for ADA protections.
Disability discrimination often co-occurs and overlaps with age-related discrimination against elders and youth alike, including young people and elders who do not otherwise have disabilities. Disabled activists have fought against paternalism and the presumption of incompetence for decades as part of the disability rights movement and the newer, intersectional disability justice movement. Youth liberation and elder rights activists name similar themes when young or old age is deployed as a marker of presumed incapacity and incompetence. This societal presumption is entrenched in the legal institution of guardianship or conservatorship, which ostensibly exists to protect vulnerable people from predatory and exploitative behavior. Disability rights, youth rights, and elder rights groups ranging from the National Youth Rights Association to the National Disability Rights Network and Justice in Aging have all roundly critiqued the imposition of guardianship—or, in the case of youth, the legal framework of minority status—as unnecessarily oppressive and often abusive.
Disability and age-based discrimination in employment also share common patterns. Disabled, young, and older workers all face prejudicial perceptions of laziness, irresponsibility, and unreliability. A worker with a disability may be presumed more prone to absenteeism, even though people with disabilities tend to accrue fewer absences on average than people without disabilities. Younger and older workers may be judged more likely to leave a job sooner—to pursue another job or education, in the case of younger workers, or to retire or seek treatment for aging-related illness or disability, in the case of older workers. In fighting against hiring discrimination, advocates for disabled, young, and older people should be natural allies.
Similarly, disability justice advocates and scholars such as Sami Schalk, Jina Kim, and Leah Lakshmi Piepzna-Samarasinha have extensively described the ways in which environmental racism, police violence, and exploitative labor conditions coalesce to generate higher rates of disability and chronic illness in communities of color, in the LGBTQ community, and among poor people. Further, some of these experiences of disability are particular and specific to marginalized communities—such as compounded, complex trauma stemming from a lifetime of dealing with racism or anti-transgender discrimination, or workplace injuries and long-term illness accrued from hazardous work in garment factories or natural resource extraction. Employment discrimination also underscores the dangers of economic inequality: without a reliable job and just working conditions, workers in the United States may struggle to access stable, comprehensive health care, housing, and food.
The discriminatory patterns evident in automated hiring tools can harm people in every marginalized community, with compounded impact on those who belong to more than one marginalized group. Existing civil rights laws that form the cornerstone of protections against employment discrimination provide a strong foundation for articulating claims of unlawful discrimination because of race, gender, age, disability, and other protected characteristics. Yet advocates continue to demand improved implementing regulations and sub-regulatory guidance, as well as new statutory protections aimed specifically at curbing discriminatory use of AI and algorithmic decision-making tools.
In November 2021, the New York City Council passed the Automated Employment Decision Tool (AEDT) Law, which took effect on January 1, 2023. In April 2023, the city published its final rules implementing the law. But despite receiving widespread praise for passing a first-of-its-kind law on AI hiring discrimination, the City Council failed to consult civil rights advocates, let alone directly impacted workers, during the drafting and markup process. That lack of engagement shows up in the final law’s language.
The law stipulates only that employers must conduct third-party bias audits for discrimination based on race, ethnicity, or sex. This language could enable employers to avoid auditing for discrimination based on disability, age, or the compounded effects of multiple marginalized identities. The law also fails to require employers to provide sufficient information about how their automated tools function, what information they collect, or how they assess that information. Withholding this critical information from candidates with disabilities can prevent them from knowing whether they should request an accommodation or alternative assessment. Most concerning for labor rights advocates, however, is that companies might advertise their automated tools’ compliance with New York’s AEDT Law, further flooding the market with tools that comply with the letter of the law but nonetheless discriminate against marginalized workers.
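To make the audit’s scope concrete, here is a minimal sketch in Python of the kind of impact-ratio calculation the city’s rules contemplate, in which each category’s selection rate is divided by the most-selected category’s rate. All applicant counts are invented for illustration, and the point is the gap: categories the law does not name never enter the computation at all.

```python
# Illustrative sketch of an impact-ratio bias audit; all applicant
# counts below are invented for demonstration purposes.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Divide each category's selection rate by the highest
    category's selection rate (the "impact ratio")."""
    rates = {cat: selected / total for cat, (selected, total) in outcomes.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical (selected, total applicants) counts per category.
audited = {"women": (40, 100), "men": (50, 100)}

# Categories the NYC law never requires an employer to examine.
unaudited = {"disabled applicants": (10, 100), "applicants over 55": (12, 100)}

print(impact_ratios(audited))              # {'women': 0.8, 'men': 1.0}
print(impact_ratios(audited | unaudited))  # ratios of 0.2 and 0.24, never reported
```

Under the EEOC’s long-standing four-fifths rule of thumb, a ratio below 0.8 flags potential adverse impact; in this invented example, the disability and age categories fall far below that line, yet a letter-of-the-law audit never has to surface them.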
New York’s choice to address discrimination in automated hiring tools is a positive move, despite the flaws in the final bill, and more state and local lawmakers may follow suit. Federal regulators have also taken note of the emerging issues around civil rights and automated recruitment and hiring tools. Beyond its recent settlement with iTutorGroup, the EEOC is moving in the right direction. In 2022, it published joint guidance with the Department of Justice explaining how automated hiring technologies can violate the ADA when used to evaluate disabled job candidates. Early in 2023, the agency named discriminatory hiring tech as an enforcement priority in its Strategic Enforcement Plan for 2023-2027.
In December 2022, a coalition of civil rights organizations including the American Civil Liberties Union, CDT, and the American Association of People with Disabilities, among others, authored the Civil Rights Standards for 21st Century Employment Selection Procedures. These standards offer specific protocols that employers and developers alike can implement to better prevent discriminatory effects of their automated hiring tools. The protocols include auditing before adoption and throughout the life cycle of a tool’s use, along with recommendations for disclosing audit results transparently, in language that ordinary people can understand. They also address specific means by which job seekers might request accommodations or opt out of an automated tool’s use altogether while still receiving fair consideration for the job. The recommendations further caution against the use of data that can serve as a proxy for a protected class, such as a zip code that correlates with socioeconomic status and, in turn, race, ethnicity, or disability status. But these standards—endorsed by several organizations focused on technology justice and disability rights—are not yet codified into law at the federal or state level.
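As a rough, hypothetical illustration of the proxy problem those standards flag, the sketch below shows how screening decisions keyed to zip code can reproduce racial disparities even when race is never a model input; every record in it is fabricated.

```python
# Illustrative sketch of a "neutral" zip-code feature acting as a proxy
# for a protected class; all records below are fabricated.

from collections import defaultdict

# Hypothetical applicant records: (zip_code, race, screened_in)
records = [
    ("10451", "Black", False), ("10451", "Black", False), ("10451", "white", False),
    ("10023", "white", True),  ("10023", "white", True),  ("10023", "Black", True),
]

by_zip = defaultdict(lambda: {"races": [], "selected": 0, "total": 0})
for zip_code, race, screened_in in records:
    group = by_zip[zip_code]
    group["races"].append(race)
    group["selected"] += screened_in  # bool counts as 0 or 1
    group["total"] += 1

# Where zip codes are demographically skewed, a tool that screens on
# zip code effectively screens on race, even with race withheld as an input.
for zip_code, group in by_zip.items():
    majority = max(set(group["races"]), key=group["races"].count)
    share = group["races"].count(majority) / group["total"]
    rate = group["selected"] / group["total"]
    print(f"{zip_code}: {share:.0%} {majority}, selection rate {rate:.0%}")
```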
Of course, there is a long way to go in addressing the dangerous discriminatory impact of algorithmic decision-making systems in all aspects of hiring and employment decisions. As new stories emerge, civil rights advocates may bring further actions against discriminatory recruitment, hiring, promotion, compensation, and workplace monitoring and discipline practices that rely on algorithmic technologies. Civil rights attorneys and advocates often work in silos that inhibit the effective alliances needed to address shared problems and pursue common goals. Algorithmic decision-making tools represent a new frontier in anti-discrimination advocacy, but technologically enabled harms only reflect and amplify existing discriminatory attitudes, policies, and practices. As more workers come forward to share their experiences with discriminatory technologies, advocates for civil rights and social justice should work in concert to address the deeply, inextricably interconnected harms of technologically accelerated discrimination based on race, gender, age, and disability. Unaddressed algorithmic harm in employment will only further entrench existing economic inequalities, reflected in indicators like employment rates, income, and net wealth in marginalized communities.
Kyle Behm’s story features prominently in the 2021 HBO documentary Persona, which explored the troublesome and legally questionable use of personality testing instruments in employment. Behm and his father consulted on the production, but Behm died by suicide in 2019, two years before the documentary was released. His father continues to advocate against algorithmic discrimination based on mental health and disability. But most marginalized workers negatively affected by a hiring algorithm may never file an EEOC claim or speak to a reporter about their experiences—at least not while employers have no obligation to provide meaningful notice, an explanation, or an opt-out process to job seekers, no obligation to regularly audit their software for discriminatory impact using external experts, and no obligation to report the results of those audits and remediate accordingly.
As for me, I hope I never encounter another automated personality test when applying for a job again—and if I do, I plan to lie.