February 01, 2017 Features

AI in HR: Civil Rights Implications of Employers’ Use of Artificial Intelligence and Big Data

By Matthew Scherer

Ever since the financial crisis and Great Recession shook the labor market, companies’ human resources (HR) departments have become a popular target for layoffs and budget cuts as businesses seek to control costs. Some companies have reduced HR staff by more than 50 percent or outsourced HR services altogether. It seems all but inevitable that the downward pressure on HR departments will soon collide with another unmistakable trend in corporate America—companies’ increasing use of artificial intelligence systems and “big data.”

“Big data” is a term commonly used to describe both the massive amounts of data available to companies in the information age and the various methods for sifting through that data. Many of these methods are grouped under the equally loose term “artificial intelligence” (AI), which is often used to describe, among other things, systems that use and construct algorithms to identify patterns and relationships within data. One major branch of artificial intelligence, called “machine learning,” describes a process by which AI systems can actually improve their analytical and predictive capabilities when exposed to new data without being explicitly programmed to do so.

The potential significance of these emerging fields to the world of HR is obvious. The Internet, social media platforms, and public databases contain a plethora of potential information about each employee and applicant. Big data thus gives companies an opportunity to combine “traditional information such as work experience and education with nontraditional data including consumer and financial data and internet browsing history” in order to “sketch an ideal candidate for a job and weigh how well an applicant would fit an opening.”1 The same techniques can be used to evaluate the past and potential future performance of current employees.

Likewise, the appeal of AI will be immense as HR budgets come under increasing downward pressure. Automating initial assessments of applicants’ qualifications and employees’ performance would both reduce costs and remove the human biases and prejudices that all too often affect personnel decisions. As a result, and as U.S. Equal Employment Opportunity Commission (EEOC) Chair Jenny Yang noted at a recent public meeting, these recent technological developments have “the potential to drive innovations that reduce bias in employment decisions and help employers make better decisions in hiring, performance evaluations, and promotions.”2

That, at least, is the promise. But delegating traditional HR tasks to automated systems might prove far more complex than the hype suggests. Even though AI offers the tantalizing hope of reducing discrimination in the labor market, it likely will prove exceptionally difficult to program AI systems to make personnel decisions that reliably conform with state and federal antidiscrimination laws.

Civil Rights Concerns with AI

Start with the basics. Antidiscrimination laws generally prohibit two different types of discrimination. The first, disparate treatment, is what those words suggest: treating people differently because they are members of a protected class.3 Examples of disparate treatment might include refusing to hire anyone who was not born in the United States (disparate treatment based on national origin) or promoting only men to high executive positions at a company (disparate treatment based on gender). Such disparate treatment is what people usually think about when they hear the term “discrimination.”

But Title VII also prohibits policies that have an adverse disparate impact on members of a protected class.4 A widely recognized example of disparate impact discrimination is adopting a policy against hiring people who have a criminal record. While that policy seems neutral on its face, if the employer fails to establish appropriate safeguards, the policy may disproportionately result in the exclusion of racial and ethnic minorities.5

Courts have long recognized that these two prohibitions can pull in different directions.6 Facially neutral and seemingly meritocratic criteria such as educational attainment, prior work experience, and salary history7 can mask the fact that those criteria are often themselves the products of previous discrimination and therefore heavily correlated with protected characteristics. White applicants are more likely to have graduated from college than black or Hispanic applicants, for instance. As a result, an unyielding reliance on such “objective” criteria can lead to personnel decisions that have an illegally disparate impact.

However, the most obvious ways to reduce the disparate impact of such hiring practices often are themselves illegal because they constitute disparate treatment. Perhaps the simplest way to ensure diversity would be to use a quota—e.g., by setting aside a certain proportion of the positions for applicants from underserved minority groups. But the prohibition against disparate treatment means that companies cannot use such quotas, even when the reason for implementing them is not to discriminate but instead to ensure a diverse workforce. Likewise, companies cannot use numerical assessments that explicitly assign “bonus points” to individuals from disadvantaged groups.8

Nothing prevents an employer from using race as a subjective “plus factor” to help ensure diversity. But it cannot assign a specific numerical value to the race or gender of an applicant. In other words, the law prefers to keep employment-related assessments subjective when they involve sensitive personal characteristics such as race and gender.

Challenges of Lawfully Using Algorithm-Driven Systems

Which brings us back to AI. It is difficult, if not impossible, to directly program a truly subjective set of criteria into AI systems that use algorithms to analyze data and make predictions—a description that encompasses virtually all AI systems in commercial use today. Algorithm-driven AI systems are invariably run through digital computers that operate using binary code. This makes them singularly ill-suited for making truly subjective assessments because a “subjective algorithm” is almost a contradiction in terms; using an algorithm necessarily requires that “subjective” criteria be formalized in some way so that they can be reduced to computer code and incorporated into the algorithm. As a result, while you can design an algorithm that closely approximates or simulates a subjective assessment, you still have to find a way to reduce the subjective assessment to an objective and concrete form.
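
To make the point concrete, consider a minimal Python sketch of a scoring algorithm. The criteria, weights, and "plus factor" value below are entirely hypothetical and drawn from no real system; the sketch simply shows that the moment a "plus factor" is written into code, it stops being subjective and must be expressed as an explicit, inspectable number.

```python
# A toy scoring algorithm; every criterion and weight here is hypothetical.
def score_candidate(years_experience: float, test_score: float,
                    in_protected_group: bool) -> float:
    score = 0.4 * years_experience + 0.6 * test_score
    if in_protected_group:
        # The "plus factor" can only enter the algorithm as an explicit,
        # concrete number -- precisely the kind of mechanical adjustment
        # the preceding paragraphs describe as impermissible.
        score += 5.0
    return score

print(score_candidate(6.0, 85.0, False))  # 53.4
print(score_candidate(6.0, 85.0, True))   # 58.4
```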

Therein lies the problem with delegating personnel decisions to an AI system. As noted above, when companies rely exclusively upon seemingly objective sets of desirable criteria to make personnel decisions, disparate impact discrimination may result because the desirable criteria are often themselves the result of the structural advantages enjoyed by people from dominant social and economic groups. But courts would likely rule that companies cannot counteract the disparate impact by directly programming an AI system to rate people from disadvantaged groups more favorably.

The rise and increasing sophistication of machine learning would seem to offer some hope of avoiding these pitfalls. Whereas earlier algorithm-based AI systems needed humans to explicitly tell them which criteria matter, learning AI systems are provided with “seed sets” or “training sets” of data that they use to identify the criteria that might have predictive value. For an AI system to learn how to identify cats in photographs, for example, it is given a set of images and told “these images contain cats” and “these images do not contain cats.” By comparing the “cat” and “not a cat” images, the system can discover features (e.g., size, coloring, shape) that are distinctive to cats. Armed with this knowledge, the system can then examine other photographs and predict whether they contain cats by looking for the cat-distinctive features it learned from the training set.
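
For readers curious what "learning from a seed set" looks like in practice, here is a minimal sketch using the scikit-learn library. The animals, features (weight in kilograms, ear pointiness, tail-to-body ratio), and numbers are invented purely for illustration.

```python
# A minimal sketch of supervised learning from a labeled seed set.
from sklearn.tree import DecisionTreeClassifier

# Seed set: each row describes one animal; each label says whether it is a cat.
X_train = [
    [4.0, 0.9, 0.9],    # housecat
    [5.0, 0.8, 1.0],    # housecat
    [30.0, 0.3, 0.8],   # dog
    [2.0, 0.2, 0.1],    # rabbit
]
y_train = ["cat", "cat", "not a cat", "not a cat"]

# The system infers which features separate "cat" from "not a cat" ...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ... and applies what it learned to a new, unlabeled example.
print(model.predict([[3.5, 0.85, 0.95]]))  # expected: ['cat']
```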

But as that description suggests, machine learning systems are only as good as the seed sets humans provide them. Unless the system is given representative and well-curated data to learn from, its output will not be reliable—“garbage in, garbage out,” as they say. In the example above, if the only “cat” images in the seed set are photographs of orange tabby cats, the system is likely to develop an algorithm under which it marks every cat that lacks an orange tabby fur pattern as “not a cat.”

In HR settings, AI systems will be vulnerable to the same sort of “seed set bias.” A company wishing to identify desirable candidates for a sales position might wish to have a seed set that consists of the current employees with the best sales performance. But if this seed set is dominated by white men, then the AI might conclude that being white or male is itself a positive and desirable trait.

Even if the system is explicitly programmed to ignore protected characteristics, such as race and gender, the system still might focus on criteria that nevertheless are heavily influenced by—and therefore the product of—previous discrimination. Consequently, if the seed set of high-performing incumbents is not sufficiently diverse, the characteristics of the employees who make up that seed set “may more reflect their demographics than the skills or abilities needed to perform the job.”9 These dangers are especially acute in the age of big data, which gives companies access to credit scores10 and other nontraditional data points that might be irrelevant as predictors of future performance but strongly correlated with protected characteristics, such as race and gender.
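
An entirely synthetic sketch can illustrate how this happens even when protected characteristics never appear in the data. The "ZIP A" indicator below is a made-up stand-in for any variable that correlates with a protected characteristic; the seed set, labels, and scores are invented for illustration only.

```python
# A synthetic sketch of seed set bias carried by a proxy feature.
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set of past employees: [sales_aptitude_test, lives_in_zip_A],
# labeled 1 for "high performer." In this invented history, the incumbents
# labeled as high performers overwhelmingly come from ZIP A.
X_train = [
    [72, 1], [68, 1], [75, 1], [70, 1],   # labeled high performers
    [74, 0], [69, 0], [60, 1], [58, 0],   # labeled not high performers
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two applicants with identical test scores who differ only in the proxy:
print(model.predict_proba([[71, 1], [71, 0]])[:, 1])
# The ZIP A applicant is rated more favorably even though race and gender
# never appear in the data -- the proxy carries the seed set's skew forward.
```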

The Path Forward

None of this is meant to suggest that it is impossible to incorporate AI into HR. Rather, it demonstrates that when AI systems are used in HR settings, they will need to be “trained” and supervised to ensure their decisions and recommendations comply with antidiscrimination laws—which, of course, is also true of human HR workers. With an AI system, this means that both the inputs (seed sets) and outputs (personnel recommendations and decisions) of the system must be reviewed regularly. Managers must also be trained to understand both the capabilities and limitations of each system, and taught how to interpret the outputs each system generates.

The task of training and supervising AI in HR might seem daunting, but many of the tools needed to monitor AI systems’ compliance with antidiscrimination laws already exist. Most notably, a number of commercially available software products allow companies to analyze their affirmative action plans (AAPs) and personnel decisions to check for potential disparate impacts.11 Companies could easily use such AAP software to audit an AI system’s outputs—or the AI system’s designer could build such auditing features into the AI system itself. In either case, if an audit suggests that the AI system’s algorithms are having a disparate and adverse impact on protected groups, the system’s seed sets should be analyzed to identify the cause of the potential disparate impact; the seed sets then could be modified accordingly.
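
As a rough illustration of what such an output audit might involve, the sketch below applies the familiar "four-fifths" (80 percent) rule of thumb from the EEOC's Uniform Guidelines on Employee Selection Procedures to hypothetical selection counts. The group names and numbers are invented, and a production audit would of course be more sophisticated.

```python
# A minimal sketch of a four-fifths-rule screen over an AI system's outputs.

def selection_rates(outcomes):
    """outcomes maps group name -> (number recommended, number of applicants)."""
    return {group: picked / total for group, (picked, total) in outcomes.items()}

def four_fifths_flags(outcomes):
    """Flag any group whose selection rate falls below 80% of the highest rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate) < 0.8 for group, rate in rates.items()}

# Hypothetical recommendations produced by an AI screening system:
print(four_fifths_flags({
    "group_a": (50, 100),   # 50% recommended
    "group_b": (18, 60),    # 30% recommended; 0.30 / 0.50 = 0.6, so flagged
}))
# -> {'group_a': False, 'group_b': True}
```

If a screen of this kind flags a group, the next step is the one described above: trace the disparity back to the seed sets and adjust them accordingly.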

Indeed, companies should not wait until a problem arises to conduct such reviews. The demographics of the labor market are constantly changing, as are companies’ performance expectations for their employees. Companies would be well advised to review, update, and curate the seed sets on a regular basis to ensure they remain suitably representative and reflect the company’s current goals, regardless of whether audits reveal a specific problem. As with human HR workers, some form of continuing education will be necessary for AI in HR.

Conclusion

The promise of AI and big data is real. Used properly, these technological developments can help identify the characteristics that are truly important in predicting the suitability of a particular candidate for a particular position. Over time, these advances should help create a labor market that is more efficient and less afflicted by human limitations and biases.

But despite this promise, it is critical to recognize that AI has not yet advanced to anything close to the point where companies can blindly rely on AI systems to make personnel decisions. In that sense, we are not yet ready to take humans out of human resources. ◆

Endnotes

1. Braden Campbell, Employers Must Use Analytics with Care, EEOC Panel Warns, Law360 (Oct. 13, 2016), https://www.law360.com/articles/851267/employers-must-use-analytics-with-care-eeoc-panel-warns.

2. Press Release, U.S. Equal Emp’t Opportunity Comm’n, Use of Big Data Has Implications for Equal Employment Opportunity, Panel Tells EEOC (Oct. 13, 2016), https://www.eeoc.gov/eeoc/newsroom/release/10-13-16.cfm.

3. The characteristics protected by antidiscrimination laws can vary. At the federal level, Title VII of the Civil Rights Act of 1964, 42 U.S.C. §§ 2000e et seq., protects against discrimination based on race, color, religion, sex, or national origin. Separate statutes—the Age Discrimination in Employment Act (ADEA), 29 U.S.C. §§ 621 et seq., and Americans with Disabilities Act (ADA), 42 U.S.C. §§ 12111 et seq.—prohibit discriminating against people due to age and disability.

4. Green v. Mo. Pac. R.R. Co., 523 F.2d 1290, 1296 (8th Cir. 1975).

5. See EEOC Enforcement Guidance, No. 915.002, Consideration of Arrest and Conviction Records in Employment Decisions under Title VII of the Civil Rights Act of 1964 (2012), available at https://www.eeoc.gov/laws/guidance/arrest_conviction.cfm.

6. See, e.g., Ricci v. DeStefano, 557 U.S. 557, 581 (2009) (prohibiting employers from engaging in disparate treatment in an effort to avoid disparate impact liability).

7. Recognizing that employers’ use of salary history in the hiring process may have an adverse impact, several jurisdictions have introduced equal pay legislation that bars such inquiries. A Massachusetts law that takes effect in July 2018, the Act to Establish Pay Equity (S. 2119), prohibits employers from requiring job candidates to divulge their prior salary. The District of Columbia introduced similar legislation, the Fair Wage Amendment Act of 2016 (B21-0878), in September 2016. California Governor Jerry Brown recently signed Assembly Bill 1676, which prohibits pay differentials based solely on prior salary history. Similar legislation has been introduced in New York (A. 5982) and New Jersey (S. 2536). Local governments have also taken action. In November 2016, New York City Mayor Bill de Blasio signed Executive Order 21 prohibiting city agencies from inquiring about job applicants’ salary history. Philadelphia’s recently enacted Wage Equity Ordinance goes even further, barring all employers from requesting salary history effective May 23, 2017.

8. See 42 U.S.C. § 2000e-2(l) (prohibiting employers from altering the results of “employment related tests on the basis of race, color, religion, sex, or national origin”). In the public sector, the Supreme Court has also held that such “bonus point” systems violate the Equal Protection Clause of the Fourteenth Amendment. See Gratz v. Bollinger, 539 U.S. 244 (2003).

9. EEOC Public Meeting on Big Data in the Workplace (Oct. 13, 2016) (written testimony of Kathleen K. Lundquist, President & CEO, APTMetrics, Inc.), available at https://www.eeoc.gov/eeoc/meetings/10-13-16/lundquist.cfm.

10. Numerous states, including California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maryland, Nevada, Oregon, Vermont, and Washington already bar the use of credit history for employment purposes, with narrow exceptions. Similar federal legislation was introduced in Congress in 2013. The EEOC has expressed the belief that relying on applicants’ credit histories may disproportionately exclude minority groups, and has sued employers on that basis. See, e.g., EEOC Informal Discussion Letter, Title VII: Employer Use of Credit Checks (Mar. 9, 2010), available at https://www.eeoc.gov/eeoc/foia/letters/2010/titlevii-employer-creditck.html.

11. See, e.g., AAP Development Software, myAAP Solution, http://www.affirmativeaction.com/aap-software.html (last visited Feb. 14, 2017); Affirmative Action Planning Software, PeopleFluent, http://www.peoplefluent.com/products/affirmative-action-planning-solutions/affirmative-action-planning-software (last visited Feb. 14, 2017); BALANCEaap Software, Berkshire Associates Inc., http://www.berkshireassociates.com/balanceaap (last visited Feb. 14, 2017); Software/Tools, EEO Made Simple Consulting, http://www.eeo-madesimple.com/index.php?main_page=index&cPath=2 (last visited Feb. 14, 2017).

Matthew Scherer ([email protected]) is an associate with Buchanan Angeli Altschul & Sullivan LLP and a blogger on artificial intelligence issues at www.lawandai.com.