
Law Technology Today

2022

Legal Entanglements When Using AI For Hiring, Firing, And More

Lance Eliot

Summary

  • Modern-day companies and law firms are increasingly adopting AI-based apps that entail all manner of employee-related facets.

Most firms are already somewhat aware of the legal boundaries associated with hiring and firing employees. U.S. laws and regulations make abundantly clear that employers are not to engage in unlawful discriminatory acts. In short, applicants and existing employees are to be protected from undue biases while seeking employment and while serving on the job.

Into this mix comes Artificial Intelligence (AI).

You see, modern-day companies are increasingly adopting AI-based apps that touch all manner of employee-related facets. For example, there are AI applications for interviewing that materially take on the role of screening job candidates. There are AI programs that rank and assess employees so that an employer can decide whom to promote and whom to offload. The list of ways AI is being infused into the conventional arena of managing human talent and aiding the HR (Human Resources) function is lengthy and rapidly expanding.

AI is often utilized as an aid or auxiliary tool by human managers who ultimately make the employment or employee-related decisions. Going beyond merely assisting, some AI-based packages are being devised and allowed to render employee and employment decisions without any human intervention at all. In this fast-paced world, some companies are willing to relinquish key employment decisions to an AI algorithm, doing so as part of the trend toward leveraging computational ADM (Algorithmic Decision Making) in the workplace.

AI Must Be Bounded By Legal Obligations

Savvy lawyers know that just because AI is being used, this doesn’t magically waive the legal obligations of the employer concerning unlawful employment discrimination acts.

Regrettably, many top executives and other leaders in companies seem to think that AI absolves them of such qualms. Or, there is a brazen and typically false assumption that the AI is entirely neutral and miraculously sweeps away any chance of discriminatory woes. Adding fuel to this fire, some vendors opt to tout that their AI systems will indeed “cure” any semblance of discriminatory leanings within a company. AI is pitched as a silver bullet for coping with the human-induced biases among managers that might otherwise arise when humans make employee and employment decisions.

A recent discussion with Keith Sonderling, Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC), conducted by the Center for AI and Digital Policy (CAIDP), revealed vital insights about being on the watch for unbridled uses of AI and ADM for employment decisions. The discussion was led by CAIDP Research Director Merve Hickok and in part by Marc Rotenberg, President and Founder of CAIDP. The web session occurred on September 29, 2022, and a recording of the discussion is available on the CAIDP website.

Commissioner Sonderling is known as a pioneer in the legal ramifications of using AI in the workplace. Having been confirmed to the EEOC in 2020 on a bipartisan vote, he has diligently undertaken his duties, including a particular priority on AI and adherence to civil rights laws. He is a seasoned lawyer who has extensively practiced Labor and Employment law. Prior to his EEOC role, he was the Acting and Deputy Administrator of the Wage and Hour Division at the federal Department of Labor.

One important point made during the invigorating discussion was that AI can end up encompassing the entirety of the job cycle. This is not merely a hiring or firing consideration. Per Sonderling’s comments, AI can be used to determine promotions, to choose who gets trained and who does not, to ascertain wages and benefits, and otherwise to be pervasive across all touchpoints with employees. The same can be said about the hiring process, namely that AI could be used throughout all points of contact during candidate recruitment and selection efforts.

Employers need to be carefully minding the store when it comes to such AI usage.

Remain Calm And Measured About AI Usage

AI is not necessarily bias-free. In fact, much of today’s AI is based on Machine Learning that discerns patterns in a company’s own data, and can therefore carry over whatever preexisting biases riddled the firm to begin with. Those biases in turn become part and parcel of the AI’s ongoing capabilities for that company.
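
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using purely synthetic data, not any vendor’s actual product) of how a model trained on a firm’s historical hiring decisions can quietly absorb and reproduce a preexisting disparity, even when the protected attribute itself is not an explicit input:

    # Hypothetical sketch: a screening model trained on biased hiring history.
    # All data is synthetic and illustrative; this is not any real system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic applicant pool: one protected attribute and one job-relevant score.
    group = rng.integers(0, 2, size=n)      # 0 or 1, a protected-class stand-in
    skill = rng.normal(0, 1, size=n)        # genuinely job-relevant signal

    # Historical decisions were biased: group 1 was hired less often at equal skill.
    hired_historically = (skill + rng.normal(0, 0.5, size=n) - 0.8 * group) > 0

    # Train on the biased history. Even with the protected attribute excluded,
    # a correlated proxy feature (e.g., zip code, school, hobbies) leaks the bias.
    proxy = group + rng.normal(0, 0.3, size=n)
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, hired_historically)

    # The trained model recommends group 0 applicants at a higher rate,
    # mirroring the historical disparity rather than removing it.
    recommend = model.predict(X)
    for g in (0, 1):
        print(f"recommendation rate, group {g}: {recommend[group == g].mean():.2f}")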

Ironically, instead of potentially expunging discriminatory acts, AI has the potential to accelerate and greatly scale up unlawfully biased practices. A firm that realizes it can use AI to henceforth screen many thousands of prospective employees is likely to cast a wider net than it had ever before attempted (having previously been unable to afford the human labor costs associated with a massive-scale applicant search).

Another possible downside of AI is that anyone believing they might have been subject to a discriminatory act might have a harder time detecting it and making the case that they were treated unlawfully. The AI and ADM might be programmed in a convoluted manner that does not lend itself to being scrutinized for internal biased coding. In addition, the AI’s interaction with an employee or candidate might appear to be completely neutral and unbiased (the program having been intentionally coded by AI developers to achieve that look and feel). If a human manager were conducting a comparable interaction, the odds are that an employee or applicant could detect subtle clues and mannerisms revealing that the manager harbored a discriminatory bias. This is not usually the case with shrewdly devised AI.
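
For lawyers probing an opaque screening tool from the outside, one rough, illustrative starting point is to compare selection rates across groups in the tool’s logged outcomes. The short Python sketch below applies the familiar “four-fifths” indicator drawn from the EEOC Uniform Guidelines to hypothetical outcome data; it is an outcomes-only illustration under assumed data, not legal advice or a complete audit:

    # Hypothetical sketch: an outside-in check of selection rates by group.
    from collections import Counter

    def selection_rates(records):
        """records: iterable of (group_label, was_selected) pairs."""
        selected, total = Counter(), Counter()
        for group, was_selected in records:
            total[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / total[g] for g in total}

    def four_fifths_flag(rates):
        """Flag any group whose rate falls below 80% of the highest group's rate."""
        best = max(rates.values())
        return {g: (r / best) < 0.8 for g, r in rates.items()}

    # Hypothetical outcomes logged from an AI screening tool.
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(outcomes)
    print(rates)                    # roughly {'A': 0.67, 'B': 0.33}
    print(four_fifths_flag(rates))  # {'A': False, 'B': True} -> worth a closer look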

A reaction by some to these AI drawbacks involves impulsive exhortations that firms should not adopt AI for any workplace-related activities. But this type of head-in-the-sand advice is sorely lacking and utterly unrealistic.

Per the remarks of Commissioner Sonderling, AI is here to stay. Contemporary firms relish and are becoming dependent upon the benefits and advantages of using AI. We are not going to suddenly turn back the clock. The time is assuredly now to tackle these thorny issues.

AI provides tremendous upsides, while simultaneously presenting unseemly perils.

Conclusion

Lawyers need to become cognizant of what AI can and cannot do.

Having an understanding of how AI systems are devised and customized to the workplace is a vital consideration for lawyers who are advising firms about employment practices. Do not get caught off-guard by the AI myths that are used to garner AI adoption. Know too how to represent individuals who believe they might have been subject to AI-based discriminatory actions. Even the vendors need AI-savvy lawyers who can advise them about the design, marketing, and fielding of their AI wares. Vendors are on the hook as much as employers for any violation of fundamental civil rights that their AI might stray into.

Old-fashioned discrimination in the workplace is in the midst of getting upgraded to AI-based discrimination, though this doesn’t have to be the case. With the right kind of legal guidance, employers and vendors can safeguard their AI systems to avoid veering into the discriminatory abyss. Attorneys who get up to speed on AI in the law are finding themselves sought after as the AI juggernaut keeps on growing.

Dr. Lance Eliot is a globally recognized expert in AI & Law and serves as a Stanford Fellow at the Stanford Law School and Stanford Computer Science Department in affiliation with the Stanford Center for Legal Informatics. Previously, he was a professor at the University of Southern California (USC), where he was the Executive Director of a pioneering AI research lab. He has also been a top executive at a major Venture Capital firm and a worldwide tech executive at several large companies. As a successful entrepreneur, he has started, run, and sold several high-tech startups. He serves on AI & Law committees for the World Economic Forum (WEF), ITU United Nations, IEEE, NIST, and other entities. Dr. Eliot writes a popular column on AI & Law+Ethics for Forbes that has amassed over 6.8 million views. For more about AI & Law, see the top-rated popular book by Dr. Eliot entitled “AI And Legal Reasoning Essentials,” which is available via Amazon and other online booksellers.
