What is a Bias Audit?
Generally, in the context of machine learning or artificial intelligence tools, a bias audit is an impartial evaluation that tests the tool’s disparate impact upon protected classes (such as race, ethnicity, sex, or disability).
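In concrete terms, the core of a bias audit is usually a statistical comparison of outcomes across groups. The sketch below is a hypothetical Python illustration, not drawn from any of the statutes discussed in this article; it shows one widely used measure, the impact ratio: each group's selection rate divided by the highest group's selection rate, with ratios below 0.8 commonly flagged under the EEOC's "four-fifths" guideline.

```python
# Hypothetical sketch of one common bias-audit metric: the "impact ratio"
# (each group's selection rate divided by the highest group's rate).
# Group labels and data below are illustrative only.

from collections import defaultdict

def impact_ratios(candidates):
    """candidates: list of (group, selected) tuples, where selected is True/False."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in candidates:
        total[group] += 1
        if was_selected:
            selected[group] += 1

    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    # Under the EEOC's four-fifths guideline, a ratio below 0.8 is commonly
    # treated as evidence of potential adverse impact.
    return {g: rates[g] / best for g in rates}

# Illustrative data: (protected-class group, whether the AI tool advanced the candidate)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(impact_ratios(sample))  # e.g. {'A': 1.0, 'B': 0.5}
```

A real audit would of course require representative applicant data, appropriate statistical testing, and intersectional categories; formalizing that analysis is precisely what the independent audits contemplated by the laws discussed below are meant to do.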
In 2019, Illinois led the way with one of the country’s first AI workplace laws, the Artificial Intelligence Video Interview Act, which applies to all employers that use an AI tool to analyze video interviews of applicants for positions based in Illinois. The law requires employers to make certain disclosures and obtain consent from applicants if they use artificial intelligence-enabled video interview technology during the hiring process. It further requires employers that rely solely on AI to make certain interview decisions to maintain records of demographic data, including applicants’ race and ethnicity. Employers must submit that data annually to the state, which must analyze it to determine whether there was racial bias in the use of the AI. Maryland followed with a similar law in 2020, restricting employers’ use of facial recognition services during pre-employment interviews unless the employer obtains the applicant’s consent.
On the heels of the Illinois and Maryland AI laws, the New York City Council enacted legislation, effective January 1, 2023, that specifically targets AI in typical Human Resources technology by prohibiting employers’ use of “automated employment decision tools” unless the tool has undergone a bias audit. Before such a tool is used to screen a candidate or employee for an employment decision, the employer must notify the individual that the tool will be used, identify the job qualifications and characteristics the tool will assess, and make publicly available on its website a summary of the bias audit and the distribution date of the tool. Upon notification, the candidate or employee has the right to request an alternative selection process or accommodation.
While forward-looking, these first AI laws have not been without criticism. Concerns include that they fail to include a private right of action, may lack clear definitions of what technology counts as “artificial intelligence,” and may exclude members of protected groups (for example, the NYC law requires an audit only for discrimination on the basis of race or gender).
This is why some have heralded the pending legislation in the District of Columbia, the Stop Discrimination by Algorithms Act, which goes a step further by allowing a private right of action for individual plaintiffs, with the potential for punitive damages and attorney’s fees. The D.C. legislation, if passed, would prohibit covered entities from making an algorithmic eligibility determination on the basis of an individual’s or class of individuals’ actual or perceived race, color, religion, national origin, sex, gender identity or expression, sexual orientation, familial status, source of income, or disability in a manner that segregates, discriminates against, or otherwise makes important employment opportunities unavailable to an individual or class of individuals.
In spring 2022, the California Fair Employment and Housing Council proposed revisions to the state’s non-discrimination laws addressing employers and agencies that use or sell services involving AI, supervised machine learning, or automated-decision systems (ADS). The proposed regulations would expand employers’ recordkeeping requirements to include machine-learning data and, for employers or agencies using an ADS, records of the assessment criteria the ADS uses. At publication, the regulations are in the pre-rulemaking phase.
It is not only state and local governments that have set out to address AI in the workplace.
In May 2021, Senator Edward J. Markey (D-Mass.) and Congresswoman Doris Matsui (CA-06) introduced the Algorithmic Justice and Online Platform Transparency Act, which proposes a cross-government investigation into discriminatory algorithmic processes throughout the economy. The legislation goes beyond addressing AI in the workplace and seeks to prohibit “algorithmic processes on online platforms that discriminate on the basis of race, age, gender, ability and other protected characteristics,” among other protections. At publication, the bill is pending in committee.
Federal agencies have also taken aim at AI in the workplace. Following a December 2020 joint letter from ten U.S. senators calling on the Equal Employment Opportunity Commission (EEOC) to use its powers under Title VII of the Civil Rights Act of 1964 to “investigate and/or enforce against discrimination related to the use of” AI hiring technologies, the EEOC announced an initiative on AI in October 2021. As part of the initiative, the EEOC pledged to ensure that artificial intelligence and other emerging technology tools used for employment decisions comply with federal civil rights laws. That effort resulted in the agency issuing its first technical assistance document, addressing AI in the workplace and its implications under the ADA, in May 2022, and emphasizing that bias does not need to be intentional to be illegal.
Where Do We Go From Here? The Practical Considerations
With the EEOC’s and DOJ’s recent introduction of AI workplace guidance, employers now have guideposts for ensuring that their workplace practices are consistent with these federal policies.
Employers not operating in a capacity or location already affected by enacted AI legislation can still look ahead and minimize exposure to litigation by ensuring that any algorithmic tools used in the workplace, including those supplied by a third party, can pass a bias audit by an independent auditor. As more states and local governments propose and enact legislation governing the use of AI in the workplace, and as federal regulation develops, employers should stay apprised of developments that may affect the legality of algorithmic tools in the workplace.