

Artificial Intelligence in the Workplace: What Employers Need to Know

Jerry M. Cutler

Summary

  • With technological developments have come concerns that AI employment tools will result in discriminatory treatment of applicants.
  • The National Artificial Intelligence Initiative Act created an advisory committee to provide recommendations on AI security, privacy, and civil-rights standards.
  • Employers are well advised to develop formal procedures for the use of AI and related technology in making hiring decisions.

Advancements in technology have enabled companies to more efficiently analyze data for use in identifying and assessing options and making decisions. In the context of recruitment and hiring, companies are using artificial intelligence (AI) and machine learning technologies to improve applicant screening and selection outcomes. This includes the use of automated tools such as resume scanners, “chatbots,” and video interviewing software to review applicant resumes, measure skills and abilities, determine qualifications, or otherwise assess an applicant’s suitability for a position. 

Legislative Initiatives

With these technological developments have come concerns that AI employment tools will result in discriminatory treatment of applicants. Some state and local governments have responded by enacting legislation to regulate how AI is used in making employment decisions. For example, the Illinois Artificial Intelligence Video Interview Act requires employers relying on an AI analysis of video interviews to report applicant demographic data. New York City’s newly enacted AI law makes it unlawful for an employer to use an AI tool to screen applicants unless a bias audit has been conducted to determine whether it has a disparate impact on individuals based on their protected status.

Federal Guidelines

The National Artificial Intelligence Initiative Act, which was enacted by Congress in 2020, created an advisory committee to provide recommendations on AI security, privacy, and civil-rights standards. In October 2021, the Equal Employment Opportunity Commission (EEOC) announced an initiative to ensure that AI and other emerging applicant-screening technologies conform to federal civil-rights laws. This includes developing guidelines and best practices on the use of AI in making employment decisions.

The EEOC and U.S. Department of Justice, Civil Rights Division (DOJ) each issued guidance in May 2022 on the use of AI and related technologies to assess job candidates. The agencies explain that an AI hiring tool that screens out an individual with a disability who can perform the essential functions of a job with a reasonable accommodation may violate the Americans with Disabilities Act (ADA). The EEOC and DOJ therefore advise employers using AI tools to provide a reasonable accommodation to applicants who have a physical or mental condition that may make it more difficult to take a test or that may result in a less-than-favorable assessment. The agencies offer examples of practices that employers can implement to ensure that applicants receive needed accommodations:

  • explaining the type of AI technology being used and how applicants will be evaluated
  • notifying applicants that reasonable accommodations (including alternative testing modalities) are available to individuals with disabilities
  • providing sufficient information to applicants so they can decide whether to seek an accommodation and implementing procedures for requesting an accommodation

The White House addressed the use of AI technology in an October 2022 white paper on “algorithmic discrimination,” i.e., discriminatory conduct in which automated hiring tools contribute to differential treatment based on an individual’s race, color, ethnicity, sex, pregnancy, age, national origin, disability, or other protected status. It includes several principles and practices intended to protect the rights of individuals with regard to the design, use, and deployment of AI:

  • Discrimination protections: assessing automated systems to ensure accessibility for people with disabilities and mitigate any disparate impact. It is recommended that the testing results and mitigation information be made public whenever possible to confirm these protections.
  • Notification: providing notice that an automated system is being used and giving an explanation as to how and why it contributes to certain outcomes. The explanations should be technically correct and useful to anyone who needs to understand the system.
  • Data privacy: seeking permission before collecting, using, or transferring data from an automated system, and ensuring reasonable privacy expectations by only collecting data necessary for specific purposes.
  • Human review: allowing individuals to opt out of an automated system and instead be assessed by a person with appropriate training to conduct the assessment.

Suggestions for Employers

In light of the above, employers are well advised to develop formal procedures for the use of AI and related technology in making hiring decisions. The procedures should contain a mechanism to assess the legal and ethical implications of AI, a process for requesting reasonable accommodations, notification and data-privacy protocols, and an ongoing analysis of potential bias.

  • Assessment: Employers should begin with an assessment of the legal risks associated with the use of AI technologies, and a determination of the ethical standards that will be applied in deciding whether any resulting impact on applicants is discriminatory. The assessment should also determine whether these automated processes reflect “implicit” or “unconscious” bias.
  • Reasonable accommodations: Applicants should be notified that reasonable accommodations (e.g., alternative testing tools) are available to individuals with disabilities and should be provided sufficient information with which to decide whether to seek an accommodation.
  • Notification and data privacy: Applicants should receive an explanation of the technology being used and how it will impact their assessment. To ensure reasonable privacy expectations, data collection should be limited to that which is necessary to make a hiring decision, and should only be shared with individuals who have a need for it.
  • Ongoing bias audit: Periodically assess automated systems to ensure accessibility for people with disabilities, and determine whether any AI tools have a disparate impact on applicants based on their protected status under the law.
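One common quantitative starting point for the disparate-impact analysis described above is the EEOC's "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any protected group that is less than 80% of the highest group's rate is generally regarded as evidence of adverse impact. The sketch below illustrates that arithmetic only; the group names and numbers are hypothetical, and a real bias audit would be broader and involve legal counsel.

```python
def selection_rates(outcomes):
    """Compute per-group selection rates.

    outcomes maps group name -> (number selected, total applicants).
    """
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the
    highest group's rate (the EEOC "four-fifths rule")."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) < threshold
            for group, rate in rates.items()}

# Hypothetical screening outcomes for two applicant groups.
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
flags = four_fifths_check(outcomes)
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is
# flagged for further review.
```

A flagged ratio is not by itself a legal conclusion; it signals that the tool's outcomes warrant closer scrutiny of the kind the EEOC and DOJ guidance describes.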
