June 15, 2023 Feature

The European Union’s Proposed Artificial Intelligence Regulation on Recruiting and Hiring Processes

Lillian Haley

Technology has become integrated into nearly every aspect of our daily lives, changing how we approach almost everything, and computers' ability to apply and utilize algorithms has dramatically accelerated the completion of tasks. For example, in customer service occupations, computing technology can streamline the call center stage by processing enormous amounts of data and searching records instantly. These technological advancements have also infiltrated the way we work. All of this is due to artificial intelligence (AI) and its function of "imitating human intelligence."

AI technology is now being utilized by many companies in areas such as human resources (HR) for employee recruitment. Recruiting qualified candidates for an organization is a daunting task for HR managers, and at a large company there can be hundreds of job applications that recruiters must review as quickly as possible. In this setting, AI can preliminarily screen candidates and assist in job matching, reducing the time it takes to review and hire new candidates. This process relies on metrics and algorithms and can make most of (if not the entire) recruiting and hiring process easier for HR managers. Unfortunately, using AI in this setting is not without complications. AI and algorithms can embed hiring biases, and their use can result in the disqualification of otherwise qualified candidates and in potential claims of discrimination on the basis of race, age, disability, and gender. This can happen due to technology's "reliance on unconsciously prejudiced selection patterns like language and demography." With no regulatory scheme currently in place, there is nothing to restrict private-sector AI use and protect against the issues described above. So, what is forthcoming to aid this?

Issues such as hiring and recruiting discrimination that stem from the use of AI are now starting to be recognized not only nationally but globally. In 2021, the European Union proposed its "AI Regulation" to categorize sectors that use AI by risk and to lay out guidelines and rules for each. In the "high risk" category lies the focus of this article: employment. Section 73 article 36 of the proposed Regulation illustrates how AI is used in the employment setting. A problematic example under this category would be various sorting software for recruitment procedures. If the United States were to adopt the European regulatory scheme, modifications would have to be made to achieve compliance, meaning a total restructuring of what the country currently has in place. Due to the potential biases and discrimination that might be baked into AI programming, adopting the EU's proposed Regulation addressing the use of AI in HR hiring and recruiting practices could also benefit the United States.

The Evolution of Artificial Intelligence

The term "AI" was first coined by John McCarthy, a Stanford professor of computer science, in 1955. In 1956, McCarthy invited other researchers in a variety of fields, such as language simulation and neural networks, to attend the Dartmouth Summer Research Project on AI to discuss what would later become the field of AI. In brief, "Artificial Intelligence is the science and engineering of making intelligent computing machines." Even in that decade, many members of academia were aware of the rapid pace at which computer technology was advancing.

There are several dictionary definitions for AI, which are similar but slightly different, depending on the computing goals. The Oxford English Dictionary defines it as "the capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this." Merriam-Webster defines it as "(1) a branch of computer science dealing with the simulation of intelligent behavior in computers[;] (2) the capability of a machine to imitate intelligent human behavior." Encyclopedia Britannica defines it as "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." The definitions generally coalesce around three objectives: (1) creating systems that think like humans; (2) producing working systems without figuring out how human reasoning works; and (3) using human reasoning as a model but not a strict adaptation.

"Deep learning" is the most recent and transformative aspect of AI. Utilizing neural networks, the algorithms are modeled after the human brain and adapt their learning from the large amounts of data they receive from the coders (many of the problems discussed below originate with those coders). This is comparable to how we as humans think, learning from previous experiences and adapting accordingly. Deep learning algorithms perform a task repeatedly and then adjust to improve the outcome each time. The computer trains itself to process and learn, establishing patterns and connections from otherwise unstructured data.

Sectors that were once human-based, such as employment, healthcare, and transportation, have now been automated with the help of AI, and specifically because of deep learning. Deep learning continues to excel in virtual assistants such as Alexa or Siri; these technologies interpret a user's speech and language when responding to human input. Translation is another application of deep learning: these algorithms can translate sentences between different languages instantly, without lag. Chatbots, image colorization, facial recognition, and personalized shopping and entertainment are other major areas that utilize deep learning algorithms.

Deep Learning in the Recruitment Process

Deep learning is now often utilized in the recruitment and hiring process. Recruitment is one of the most important functions within many organizations, and many companies now use AI to recruit new employees. For external applicants, the initial screening is the stage where AI-driven algorithms are most likely to appear. In most companies, there are typically four stages in the recruiting/hiring process: (1) sourcing (attracting and curating candidates), (2) screening, (3) interviewing, and (4) selection.

AI is utilized here to review resumes and cycle through them to assist in the initial screening process, which in turn adds efficiency to the complete hiring and onboarding process. A candidate submits a resume, and an algorithm evaluates it and produces a score indicating the candidate's quality or fit for the job. The algorithmic evaluation often looks for specific keywords in the person's resume. Deep learning is a factor in this stage because the rules that dictate the keyword merit score may not have been written by a human initially. This means the algorithm has the potential to develop over time without human input.
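To make the keyword-scoring idea concrete, the sketch below shows the simplest version of such an evaluation. The keywords, weights, and function names are hypothetical illustrations, not the logic of any actual vendor's product, and real screening systems are far more complex.

```python
# A minimal, hypothetical sketch of keyword-based resume scoring.
# The keywords and weights here are invented for illustration only.

KEYWORD_WEIGHTS = {
    "python": 3.0,
    "project management": 2.0,
    "sql": 1.5,
    "communication": 1.0,
}

def score_resume(resume_text: str) -> float:
    """Return a simple merit score: the sum of weights for each keyword found."""
    text = resume_text.lower()
    return sum(weight for keyword, weight in KEYWORD_WEIGHTS.items()
               if keyword in text)

resume = "Experienced analyst skilled in Python and SQL reporting."
print(score_resume(resume))  # 3.0 + 1.5 = 4.5
```

In a deep learning system, the weights would not be hand-written as above; they would be inferred from historical hiring data, which is precisely where unexamined bias can enter.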

Deep learning's role in this context is to gather large amounts of data in the applicant pool, sift through the data using complicated algorithms, and eventually, over time, develop the human-like ability to identify and recruit optimal talent. There are well-known hiring companies that incorporate AI in their products, which are used by outside companies' software systems for recruiting. For example, HireVue, an interview technology vendor, is unique in that it is itself an assessment and video interview tool that utilizes deep learning AI to make "better predictions, better decisions, better hires." HireVue also strives to eliminate the unconscious bias that exists when humans conduct the recruiting process by combining interviews and predictive analytics, as well as using games to assess applicant competencies. However, although the ultimate goal is to be inclusive and find the right candidate, "unconscious bias" can still make its way into the review process. It is important to note that because many companies use HireVue in their recruiting, HireVue's legal exposure could increase as a result.

Potential Legal Ramifications of AI Algorithm Malfunctions and Bias in Employment Recruitment

Discrimination based on race and gender can take place with the use of current AI systems in recruiting. There is a phenomenon called "algorithmic bias," which is the notion that the humans who design the algorithms used by the software companies mentioned above can build their own unconscious biases into the very algorithms that were created to eliminate them. Attorneys Michael Chichester and Jaclyn Giffen wrote about unconscious bias in a company's resume data, illustrated in the way that employees in a certain zip code were more successful and, therefore, the algorithm began to focus on candidates from that one zip code. At first glance, this does not seem very troubling. However, at a deeper level, this zip code is 98% Caucasian and creates a disparate impact on non-Caucasian applicants. On its face, it is neutral because it is just a zip code, but there can be a more harmful effect upon further evaluation in the process. What is happening here is something that HR managers and practitioners need to be cognizant of: the problem of facially neutral algorithms.
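The zip-code scenario can be demonstrated in a few lines of code. In this toy example (the applicants, zip codes, and demographic split are invented for illustration), the selection rule never mentions race at all, yet because residence correlates with race, the outcome is entirely skewed:

```python
# Toy illustration of a facially neutral rule acting as a proxy for race.
# All applicants, zip codes, and group labels are hypothetical.

applicants = [
    {"name": "A", "zip": "98101", "group": "white"},
    {"name": "B", "zip": "98101", "group": "white"},
    {"name": "C", "zip": "60644", "group": "black"},
    {"name": "D", "zip": "60644", "group": "black"},
    {"name": "E", "zip": "98101", "group": "white"},
]

def neutral_rule(applicant: dict) -> bool:
    # The rule references only zip code -- never race.
    return applicant["zip"] == "98101"

selected = [a for a in applicants if neutral_rule(a)]

# Because residence correlates with race, selection rates diverge sharply:
for group in ("white", "black"):
    in_group = [a for a in applicants if a["group"] == group]
    rate = sum(1 for a in selected if a["group"] == group) / len(in_group)
    print(group, rate)  # white 1.0, black 0.0
```

An algorithm that learned "zip 98101 predicts success" from historical data would behave exactly like `neutral_rule` while appearing objective.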

Antidiscrimination laws within the United States generally prohibit two kinds of discrimination: disparate treatment and disparate impact. Disparate treatment involves treating people differently because they are members of a protected class, whereas disparate impact occurs when a facially neutral policy inadvertently disadvantages members of a protected class, such as a blanket policy against hiring individuals with a criminal record. Note that criminal records typically include the accused's race. Returning to the notion that an algorithm can be facially neutral: if the algorithm disqualifies those with a criminal record without appropriate safeguards, it may exclude racial and ethnic minorities in a discriminatory manner due to factors such as disparities in criminal enforcement. Criteria such as educational attainment, prior work history, and even salary history can seem neutral, but these objective criteria are often themselves products of discrimination and correlate with protected characteristics, such as how "white applicants are more likely to have graduated from college than black or Hispanic applicants." These are prime examples of objective criteria being programmed into an algorithm in an unconsciously discriminatory manner.
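One common way U.S. practitioners screen for disparate impact is the EEOC's "four-fifths" rule of thumb: if a group's selection rate falls below 80% of the highest group's rate, the outcome warrants scrutiny. The sketch below applies that heuristic to hypothetical numbers; the function names and figures are illustrative, not drawn from any real audit.

```python
# Illustrative sketch of the EEOC "four-fifths" rule of thumb for
# disparate-impact screening. All numbers are hypothetical.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return hired / applied

def adverse_impact_flags(rates: dict) -> dict:
    """Flag each group whose selection rate is below 4/5 of the top rate."""
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(50, 100),  # 0.50
    "group_b": selection_rate(20, 100),  # 0.20
}
print(adverse_impact_flags(rates))  # {'group_a': False, 'group_b': True}
```

Here group_b's rate (0.20) is only 40% of group_a's (0.50), well under the four-fifths threshold, so a hiring algorithm producing these rates would invite exactly the disparate-impact scrutiny described above.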

Another discrimination consideration when using AI in employment recruiting is when algorithms malfunction. "Faulty" algorithms in facial-recognition software could lead to unintentional discrimination. Hiring software occasionally may not be able to pick up on skin tone or gender, causing it to misidentify people of color. Another issue arises if an applicant has a disability that affects how they present on video; in that situation, an algorithm could result in the person being excluded. Additionally, word-association elements in an algorithm can produce results that create gender and age bias. If hiring software weighs the number of years of work experience, which can correlate with age, then its results can block out an entire demographic from consideration, raising potential age discrimination claims. Referring back to facial recognition, enunciation and facial expressions are characteristics that can potentially become data collection points for software.

How do companies fix these issues, and what is being done to prevent potential discrimination with such technology? Currently, U.S. law lags well behind that of other countries, as there is no real and effective legal or regulatory framework for AI usage in this country. Further, Congress does not appear to be planning to adopt any AI regulations anytime soon. The problems discussed above will only grow as AI technology becomes more powerful and is deployed across more companies and areas of commerce.

The EU and AI Regulation

The United States could learn from what other countries and jurisdictions are doing to address the high-risk technological issues discussed above within their own governance frameworks. The European Union has become the leading jurisdiction in creating and proposing an AI regulatory framework and, therefore, could serve as a model for the United States to follow.

The EU's Proposed AI Regulation is the first-ever attempt by any major jurisdiction in the world to enact a horizontal regulation, meaning a regulation of all sectors and applications of AI. The purpose of this Regulation is to harmonize rules on AI across EU countries. It came about in response to numerous requests from members of the European Parliament and Council calling for "a well-functioning internal market for AI systems where both benefits and risks of AI are adequately addressed at Union level." AI is a fast-growing technology that can bring social and economic benefits, but as discussed, it also brings risks of negative consequences for society and individuals.

The initial push for this type of regulation began in 2017, when the European Council called for a "sense of urgency to address emerging trends including issues such as AI." The "2019 Conclusions on the Coordinated Plan on the development and use of AI Made in Europe" moved the initiative forward by highlighting the need to protect citizens' rights from the growing use of AI. In 2020, the most recent "Conclusions" further called for addressing the bias, complexity, and unpredictability that come with certain uses of autonomous behavior in AI systems, such as recruiting software.

The Commission has proposed the following objectives for the regulatory framework:

  1. ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
  2. ensure legal certainty to facilitate investment and innovation in AI;
  3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
  4. facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

To achieve these objectives, the proposal presents a horizontal regulatory approach to AI, with the goal of being minimally invasive in hindering technological development or disproportionately increasing the cost of placing AI solutions on the market. In other words, the Commission has set out core rules for the development, trade, and use of AI products within the EU, applying to all industries.

The Proposed EU AI Regulation's Product Safety Framework

The EU AI Regulation contains a product safety framework built around four categories, ranked in a pyramid by level of risk:

  1. “unacceptable risks,” which are banned;
  2. “high risks”;
  3. “limited risk,” systems with specific transparency obligations; and
  4. “minimal risk.”

Unacceptable risk AI systems are those that present clear threats to the safety and rights of citizens. High-risk AI systems include the technology used in critical infrastructures, educational or vocational training, safety components of products, employment-sorting software for recruitment procedures (emphasis added), essential private and public services, law enforcement that may interfere with people’s fundamental rights, migration and border control management, and administration of justice and democratic processes. Limited-risk AI systems refer to those with specific transparency obligations, meaning that when they are used, users should be aware that they are interacting with a machine; they can then make the informed decision to continue or not. Minimal or no-risk AI systems allow free use and typically include video games or spam filters utilizing said systems.

If a company has a categorized high-risk AI system, then what happens? These systems will be subject to strict obligations before being used in a marketable setting. This includes:

  • adequate risk assessment and mitigation systems;
  • high quality of the datasets feeding the system to minimize risk and discriminatory outcomes;
  • logging of activity to ensure traceability of results;
  • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • clear and adequate information to the user;
  • appropriate human oversight measures to minimize risk; and
  • a high level of robustness, security, and accuracy.

To break down the process even further for high-risk systems, the EU has provided a four-step procedure. Step one is that a high-risk system is developed. Step two is where the system undergoes a conformity assessment, conducted by the providers of the technology under the predictable and clear obligations described above, which ensures that the system complies with the AI requirements. Step three is the registration of stand-alone AI systems in an EU database, ensuring that follow-up and maintenance of data are in place. Step four is where a declaration of conformity must be signed and the system must bear the "CE" marking to signify that the product meets high safety, health, and environmental protection standards. If these steps are followed, the system can then be placed on the market. However, if there are substantial changes over the system's life cycle, the process must return to step two.

By returning to step two after significant changes, the system continues to be risk-assessed and the threat to the public stays minimal. This also provides what the EU calls "future-proof legislation," ensuring that AI applications remain trustworthy even after being placed on the market. Furthermore, this goes back to the horizontal regulatory approach in formulating such AI regulations: not hindering the inevitable development of technology. However, high-risk systems, like transport infrastructures or robotic surgery, even after obtaining market approval, must have authorities at the Union and Member State level responsible for market surveillance, end users who ensure monitoring and oversight, and providers with post-market monitoring systems in place.

Another factor to consider with high-risk systems is that the definition of this category is not completely concrete, and it will ultimately be up to the European Court of Justice to interpret whether a given system qualifies as high-risk. Although the AI proposal includes classification rules for companies to evaluate, those rules are also not concrete. This can be a hurdle for those wanting to adhere to the requirements once the proposed Regulation is out of its transition period and in full application.

Enforcement of the EU's Proposed AI Regulation

The proposed Regulation requires the installation of an enforcement body at the Union level, to be known as the European AI Board (EAIB), which will be chaired by the Commission and will have one representative from each national supervisory authority. National supervisors will assist the EAIB at the member state level. Note again that the proposed rules will be enforced by building on already-implemented structures through a cooperation mechanism. With the new rules in place, AI systems and services must still comply with sector-specific regulations as well. For example, medical devices and in vitro diagnostics will need to conform to the proposed AI regulations along with the sector rules set forth in the Machinery Directive, the Medical Device Regulation (MDR), and the EU General Data Protection Regulation (GDPR). "Fines for violation of the rules can be up to 6% Global turnover, or 30 million Euros for private entities." The fines add incentive for compliance. While the fines may help fund the Regulation, the Commission also plans to invest €1 billion per year and mobilize private sector investments.

Status of the EU’s Proposed AI Regulation

The Regulation was proposed in April 2021 by the European Commission. It entered its transitional period in the latter half of 2022, with the cooperative approval of the Council and Parliament. During the transitional stage, the structures in place became operational, meaning the act could begin to take effect. Parliament is scheduled to vote on the proposed act by the end of March 2023. However, the Regulation is not predicted to be fully applicable to operators, with standards in place and conformity assessments conducted, until the latter part of 2024.

Impact of EU AI Regulations in the United States

The proposed EU AI Regulation outlines four risk categories of AI and regulations for each. The key category on which this article is focused is the high-risk category and the steps a company must go through if it wants its AI system on the market. Some employment practices, in particular sorting software in the recruiting process, can pose serious discrimination risks and legal consequences. What is "facially" neutral in an algorithm can have detrimental effects on a qualified applicant, who, in this respect, is exactly whom the EU Regulation aims to protect. Even now, all HR managers should be aware of and act on any potential disparate treatment or impact. Also, companies that want their AI systems on the European market must bring themselves into accordance with the Regulation in time, because once it is in full force it will impact all EU citizens, and compliance will be required for outside companies that plan to utilize the EU workforce.

The United States is already following some types of legislation that originated in the EU. One major piece of legislation is the GDPR. The GDPR protects the privacy data of all EU citizens, and states in the United States are now passing their own forms of similar legislation. Furthermore, there is a strong commerce partnership between the United States and Europe, and legislation in one jurisdiction can directly impact the other. Based on this, the United States likely needs to enact its own legislation or adopt legislation similar to the EU AI Regulation. The AI Regulation, once fully in force, will likely have an effect mirroring that of the GDPR. However, given the potential problems of using AI in many sectors, even absent the European legislation, such a framework would benefit the United States by eliminating technological risks that can be detrimental to its citizens and, therefore, should be adopted.

    The material in all ABA publications is copyrighted and may be reprinted by permission only.

    Lillian Haley, JD, MSHRM, is a 2023 graduate of Arizona State University Sandra Day O'Connor College of Law and is an incoming associate claims counsel at Markel.