Public Contract Law Journal Vol. 50, No. 1

Bias in, Bias out: Why Legislation Placing Requirements on the Procurement of Commercialized Facial Recognition Technology Must Be Passed to Protect People of Color

Rachel Sonia Fleischer

Summary

  • Discusses discriminatory biases in commercial facial recognition technology procured by law enforcement agencies
  • Argues that Congress must amend the Federal Acquisition Regulation (FAR) to place requirements on the procurement of commercial facial recognition technology to protect People of Color from discriminatory biases
  • Proposes specific language for such legislation

Abstract

Facial recognition technology is increasingly present in today’s society, shaping and redefining integral aspects of human life. While this ubiquitous technology was created to be objective and neutral in its application, it is not immune to discriminatory biases. These biases have produced a disturbing reality: facial recognition technology is used disproportionately on People of Color and, at the same time, disproportionately misidentifies these individuals as criminals. Meanwhile, commercial facial recognition technology continues to be procured by law enforcement agencies for policing and intelligence purposes.

This Note argues that Congress must pass legislation amending the Federal Acquisition Regulation to place requirements on the procurement of commercial facial recognition technology in order to protect People of Color. This Note also proposes language for that legislation. Ultimately, the solution proposed by this Note is vital to help mitigate the disparate impact that the use of biased facial recognition technology will have on People of Color.

I. Introduction

On the morning of April 25, 2019, Brown University student Amara K. Majeed awoke to death threats. Because of an error in the facial recognition software used to investigate an attack in Sri Lanka that killed more than 250 people, Majeed’s photo had been connected to the name of a suspected terrorist, ultimately putting both Majeed and her family in danger for a crime she never committed.

Yet Majeed is not alone. Nearly half of Americans’ faces are included in commercial facial recognition technology databases, which is worrisome considering that the algorithms used to program this technology are discriminatory on the basis of race and sex. In fact, across a majority of commercial facial recognition algorithms, the faces of Asian, Black, and Indigenous Peoples are falsely matched at a higher rate than those of white individuals. Researchers have also found that commercial facial recognition algorithms have higher error rates for women, particularly darker-skinned women, than for lighter-skinned men.

Although one would assume that the existence of flawed algorithms would at the very least slow the speed at which facial recognition technology is integrated into society, quite the opposite has occurred. Facial recognition technology has permeated almost all aspects of life in both the private and public sectors. In the private sector, facial recognition technology is used to unlock iPhones, allow tenants to access their buildings, and even identify patients in hospitals. At the state level, facial recognition technology is used by police departments to identify protestors, conduct investigations, and surveil communities. Likewise, at the federal level, agencies such as the Drug Enforcement Administration (DEA) and the Federal Bureau of Investigation (FBI) have purchased commercial facial recognition technology for their own investigative, identification, and surveillance purposes. And this is only the beginning. Recent reports indicate that, in response to the COVID-19 pandemic, the Centers for Disease Control and Prevention and the White House are considering using commercial facial recognition technology to identify individuals who have come into contact with those who have tested positive for COVID-19.

People of Color will be disproportionately affected by the ubiquity of facial recognition technology and its reliance on biased algorithms. Biased algorithms make People of Color more vulnerable than white individuals to being misidentified as criminals. In addition to algorithmic biases, the implicit biases of those operating facial recognition systems will affect how, and to what extent, facial recognition technology is used on People of Color.

In its current form, commercial facial recognition technology should not be used. The continued use of commercial facial recognition technology that is entrenched with algorithmic biases and affected by implicit biases will have a disparate impact on People of Color. Legislation placing requirements on the procurement of commercial facial recognition technology is necessary to protect individuals’ civil rights and liberties.

This Note will explain why any proposed legislation regulating facial recognition technology must contain provisions amending the Federal Acquisition Regulation (FAR) and will propose specific language for those amendments. Part II will provide an overview of how facial recognition technology operates, how algorithmic and implicit biases are created, and how the federal government currently procures commercial facial recognition technology. Part III will discuss how People of Color are disparately impacted by facial recognition technology entrenched with algorithmic biases and affected by implicit biases. Part IV will propose language that legislation amending the FAR to place requirements on the procurement of commercial facial recognition technology should include.

II. An Overview of Facial Recognition Technology

From its inception, facial recognition technology has suffered from algorithmic bias. For example, in 1963 scientist Woody Bledsoe and his research partner began creating the first “facial recognition machine” with support from the Central Intelligence Agency; they did so by programming the machine using a dataset of 400 images of exclusively white men varying in age.

Before discussing the impact that algorithmic and implicit biases in commercial facial recognition technology will have on People of Color if the federal government continues to procure it, it is critical to understand how this technology works, how algorithmic and implicit biases play a role, and how this technology is procured by the federal government. This Section will begin with an overview of how facial recognition technology works. It will then explain what algorithmic and implicit biases are and how they both affect facial recognition technology. Next, this Section will explore how the federal government has procured commercial facial recognition technology to date. Finally, this Section will provide an overview of proposed legislation introduced in both the House of Representatives and the Senate regulating the use of facial recognition technology.

A. The Inner Workings of Facial Recognition Technology

As a form of narrow artificial intelligence and biometric technology, facial recognition technology uses machine learning algorithms. In general, these algorithms are designed to match an image of a face with an image of a face in a dataset to identify and/or verify a person’s identity. An algorithm analyzes an image of a face and extracts facial measurements from the image, including the distance between the eyes, the width of the nose, and the length of the jawline. Once these measurements are calculated, the algorithm translates them into a unique code. The code is then compared to codes already generated for images in a dataset to find a match. Once a match is generated, the algorithm calculates a match score, which quantifies the similarity between the two images: the higher the match score, the higher the probability that the original image and the image in the dataset depict the same individual. At this stage, a specific threshold is chosen: a match score above the threshold is considered a match, while a score below the threshold is not. To evaluate the performance of facial recognition technology, error rates are calculated by comparing match rates to non-match rates; accuracy therefore depends on achieving a high percentage of correct matches and a low percentage of non-matches. False positives occur when the match score is above the threshold, but the two images are not of the same individual.
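
To make this matching pipeline concrete, the following is a minimal, illustrative sketch in Python. It is not any vendor’s actual implementation; the cosine-similarity scoring, the 0.80 threshold, and all names are assumptions chosen only to show how a match score, a threshold, and a false positive relate to one another.

```python
import numpy as np

MATCH_THRESHOLD = 0.80  # illustrative value; real systems tune this threshold


def match_score(probe_code, gallery_code):
    """Return a similarity score between two face "codes" (feature vectors).

    Cosine similarity stands in here for a vendor's proprietary scoring:
    higher scores mean the two images more likely depict the same person.
    """
    return float(
        np.dot(probe_code, gallery_code)
        / (np.linalg.norm(probe_code) * np.linalg.norm(gallery_code))
    )


def identify(probe_code, gallery):
    """Compare a probe image's code against every code in a gallery dataset.

    `gallery` maps an identity (e.g., a name) to that person's stored code.
    Returns the best-scoring identity and score if the score clears the
    threshold; otherwise returns None (a non-match). A false positive is a
    score that clears the threshold even though the images show different
    people.
    """
    best_identity, best_score = None, -1.0
    for identity, code in gallery.items():
        score = match_score(probe_code, code)
        if score > best_score:
            best_identity, best_score = identity, score
    if best_score >= MATCH_THRESHOLD:
        return best_identity, best_score
    return None
```

Nothing about this sketch is specific to any commercial product; it simply mirrors the steps described above, where the choice of threshold determines how the system trades false positives against false negatives.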

B. Perpetuating Discrimination Through Algorithmic and Implicit Biases

Although these algorithms were created to match individuals equally across the board, the biases within the algorithms prevent that goal from being realized. Further, the implicit biases of the individuals using facial recognition technology influence whom the technology is used on. The following subsections will examine what algorithmic and implicit biases are and address how each affects facial recognition technology.

1. Algorithmic Biases

Algorithms allow machine learning to occur. Just like humans, the algorithms used for facial recognition technology can only match images based on the information available to them. Algorithmic biases therefore arise when the datasets used to program facial recognition algorithms do not contain a sufficiently diverse set of images. If the images in the dataset are not diverse, the algorithms will not be able to match all individuals equally. As a result, facial recognition technology can disproportionately misidentify individuals who are not properly represented in the dataset.

Unfortunately, this lack of representation in datasets has created deeply rooted biases in facial recognition algorithms developed by companies and the federal government. For example, in December 2019, the National Institute of Standards and Technology (NIST) conducted a study on the accuracy of commercial facial recognition algorithms in identifying individuals of different sexes, ages, and races. Among the algorithms created in the U.S., NIST found higher false positive rates for individuals who are Asian, Black, and Indigenous than for those who are white. In addition to the companies tested by NIST, other researchers have found similar biases in algorithms created by IBM, Microsoft, and Face++. IBM and Microsoft had the lowest error rates for lighter-skinned males, and all three companies had the highest error rates for darker-skinned females. For Microsoft, almost ninety-four percent of the faces misgendered were those of darker-skinned individuals. Additionally, Amazon’s facial recognition technology program, known as “Rekognition,” tends to falsely identify People of Color as criminals. In 2019, Amazon’s Rekognition falsely identified twenty-eight NFL players as criminals, thirteen of whom were Persons of Color. The American Civil Liberties Union also found that when it ran photos of members of Congress through Amazon’s Rekognition program, the program falsely matched twenty-eight members of Congress, and approximately forty percent of those falsely matched were Persons of Color.

2. Implicit Biases

Facial recognition technology is affected not only by algorithmic biases but also by the implicit biases of its users. Implicit biases are subconscious beliefs and stereotypes that can affect a person’s actions and shape their understanding of the world. Everyone has implicit biases, and these biases do not always align with the explicit beliefs or opinions one may outwardly support. For example, in a study of implicit bias conducted with more than 20 million participants, more than eighty percent of individuals had a negative implicit bias against older individuals, and about seventy-five percent of those identifying as white or Asian had a positive implicit bias toward white individuals and a negative implicit bias against Black individuals. Generally, an individual’s implicit biases tend to favor their ingroup, but biases against one’s ingroup can also exist. Implicit biases can be changed, and individuals can unlearn biases by engaging in debiasing techniques. Awareness of one’s own implicit biases can ultimately help reduce the effects of racial prejudices.

Because implicit biases affect everyone, they influence how the agencies procuring this technology use it in communities. As such, implicit biases may fuel the disproportionate use of inaccurate facial recognition technology on People of Color, putting these individuals at greater risk of the repercussions that can follow from a false match.

C. Procurement of Facial Recognition Technology by the Federal Government

The purchase of commercial facial recognition technology is likely the result of agencies complying with the Federal Acquisition Streamlining Act of 1994, which codified the federal government’s preference for procuring commercial items and engaging in commercial practices. Because of this statute, the government tends to rely on goods and services already available in the marketplace rather than having goods and services created exclusively for the government. This statutory commitment to procuring commercial items has likely been a factor in the purchase of commercially developed facial recognition systems by federal law enforcement agencies. For example, in 2018 the FBI began piloting Amazon’s Rekognition software to assist the agency in sorting through video footage collected during investigations. Currently, federal law enforcement agencies, like the DEA, use facial recognition technology to help identify criminals captured on surveillance cameras, detect fugitives in a crowd, and even uncover terrorists coming into the country. In October 2017, the Department of Homeland Security issued a solicitation calling for companies to develop a way to use facial recognition technology to create a biometric entry and exit system that could be implemented by U.S. Customs and Border Protection. U.S. Customs and Border Protection has also specified that it seeks to run facial recognition scans on ninety-seven percent of all individuals entering and exiting the U.S. through air travel. Additionally, in early 2020, reports indicated that U.S. Immigration and Customs Enforcement was piloting Clearview AI’s facial recognition technology. Further, multiple components within the Department of Justice, such as the criminal intelligence branch of the U.S. Marshals, the U.S. Attorney’s Office for the Southern District of New York, and the Bureau of Alcohol, Tobacco, Firearms, and Explosives, were also reported to have procured facial recognition technology from Clearview AI. Finally, in response to the COVID-19 pandemic, some reports suggest that the Centers for Disease Control and Prevention is considering procuring facial recognition technology to help track the spread of the virus through public health surveillance.

D. Recently Proposed Legislation Limiting Facial Recognition Technology

Even though commercial facial recognition technology is actively used by federal agencies, no federal laws currently regulate facial recognition technology. However, Members of Congress have become concerned with algorithmic bias and the privacy implications of facial recognition technology, and many have been pushing for legislation to regulate it. For example, the House Committee on Oversight and Reform has held three hearings focusing on different aspects of facial recognition technology. The first hearing focused on how facial recognition technology can severely harm an individual’s civil rights and liberties. The second hearing concentrated on the use of facial recognition technology by federal and state governments and the existence of inaccurate algorithms. Finally, the third hearing addressed transparency and the necessity of accurate algorithms in facial recognition technology sold by private companies.

In addition to hearings, Members of Congress have also introduced bills aimed at regulating the use of facial recognition technology. Proposed bills include provisions such as (1) prohibiting private companies from using facial recognition technology to track individuals without the individuals’ affirmative consent; (2) limiting the use of facial recognition technology by federal agencies so they do not engage in ongoing surveillance; (3) prohibiting the use of facial recognition technology without a federal court order; and (4) prohibiting federal agencies from using public funds to purchase facial recognition technology.

III. Algorithmic and Implicit Biases Will Cause the Use of Facial Recognition Technology to Have a Disparate Impact on People of Color

In its current form, the procurement of facial recognition technology with algorithmic biases and unaddressed implicit biases by law enforcement agencies will have a disparate impact on People of Color because this software disproportionately misidentifies People of Color as criminals. Requirements defining how agencies, in general, must procure facial recognition technology are necessary to mitigate the damage that algorithmic and implicit biases will have on these individuals. This Section will focus on how the procurement of facial recognition technology without requirements will have a disparate impact on People of Color. First, this Section will discuss how similar types of technology used by law enforcement have already disproportionately affected People of Color. Second, this Section will examine why the trend is apt to continue if law enforcement agencies continue to procure facial recognition technology without requirements addressing algorithmic and implicit biases.

Over time, researchers have found that seemingly neutral technologies used by law enforcement, such as hair follicle drug tests, automatic license plate readers, and roadside drug tests, are not neutral in their creation or their application. As such, People of Color are being disproportionately affected by these forms and applications of technology. There are generally three pathways of bias that can lead to a disparate impact. First, there can be bias in the technology itself: if the technology is flawed in its creation through algorithmic biases, it will have a disparate impact on People of Color. Second, there can be bias in how one’s implicit biases affect the use and implementation of otherwise unbiased technology; recognizing this effect makes clear how a disparate impact on People of Color arises when these biases lead to the technology being applied disproportionately to these individuals. Third, there can be bias in how the use of flawed technology in conjunction with implicit biases creates a disparate impact on People of Color. The following subsections provide examples of disparate impacts on People of Color resulting from each of the three pathways of bias: (1) flawed technology, (2) implicit biases affecting the use of unbiased technology, and (3) implicit biases affecting the use of flawed technology.

A. Flawed Technology: Hair Follicle Drug Test

Hair follicle drug tests have been used as a workplace drug test since the 1980s. A lab washes strands of hair and dissolves them with a solvent into a solution before conducting various tests to identify any traces of drug metabolites. Although these tests have been used for over thirty years, they are not accurate because they tend to elicit false positives. This occurs because the test cannot distinguish between environmental exposure to drugs and ingested drugs. Further, studies have found that hair with higher concentrations of melanin tends to bind to drugs differently than hair with lower concentrations. Therefore, these tests disproportionately affect Persons of Color with a higher concentration of melanin in their hair, because those individuals may test positive for drug metabolites even if they had not ingested drugs. For example, in 2005, ten police officers brought a case against the City of Boston alleging that the city’s use of the hair follicle test was disproportionately affecting Black officers because their hair was more likely to be susceptible to false positives, preventing them from entering the police force. In fact, from 1999 to 2006, four times as many Black officers as white officers falsely tested positive.

Although the hair follicle drug test was not created with discriminatory intent, the technology itself had a disparate impact on People of Color in Boston simply because hair with more melanin reacts differently. Facial recognition technology presents a similar risk. Steps must be taken to eliminate algorithmic bias in facial recognition technology so that its use does not produce the same kind of disparate impact on People of Color that resulted from the hair follicle test.

B. Implicit Biases Affecting the Use of Unbiased Technology: Automatic License Plate Readers

Like facial recognition technology, automated license plate readers are an increasingly prevalent technology used by law enforcement to track suspects. These readers are mounted on streetlights, highway overpasses, and police car dashboards, and installed at train stations and even malls. The technology functions by automatically photographing the license plate numbers and recording the location, date, and time of all vehicles that drive past the automated license plate reader. Automated license plate readers are employed in cities across the country; like facial recognition technology, they allow for the mass tracking of individuals and can lead to citations or arrests.

Unlike facial recognition technology, the use of automated license plate readers suffers not from algorithmic bias but from the implicit biases of the law enforcement personnel engaging with the technology. These implicit biases are causing a disparate impact on People of Color. For example, in a study conducted in Oakland, California, researchers found that license plates were scanned at a higher frequency in Black and Latinx communities than in areas with predominately white populations. Additionally, in Port Arthur, Texas, the Black community has been disproportionately affected by the use of automated license plate readers, leading to higher citation and arrest rates compared to the white community. Even though the Black community accounts for only forty percent of the total population of Port Arthur, Black individuals accounted for seventy percent of the arrests stemming from citations generated by the automated license plate readers. Black individuals also stayed in jail longer as a result of these citations: of the 1,300 individuals who spent three or more days in jail, approximately seventy-five percent were Black.

In these cases, the disparate impact on People of Color is not occurring because the technology is itself flawed, but rather because the implicit biases of law enforcement are influencing the use of the technology. While policies cannot entirely change or prevent fundamental implicit biases of the individuals using this technology, implicit bias training can bring these issues to the forefront of the user’s mind. Therefore, Congress must mandate that the federal employees or contractors using the facial recognition technology must participate in implicit bias training.

C. Implicit Biases Affecting the Use of Flawed Technology: Roadside Drug Tests

Since 1973, roadside drug tests have been used by law enforcement agencies around the country to identify whether a driver is under the influence. These tests are supposedly able to detect more than two dozen drugs, including cocaine, heroin, and marijuana. To use them, officers drop the suspected illicit substance into a vial of pink liquid. If the liquid turns blue, then the substance is considered to be one of the identifiable illicit drugs. However, the liquid also turns blue when exposed to a host of other compounds, such as household cleaners, leading to an alarmingly high number of false positives. For example, from 2010 to 2013, thirty-three percent of the tests used in Las Vegas resulted in false positives, while Florida reported a twenty-one percent error rate. While these tests are unreliable and are not even admissible as evidence at trial, many individuals tested have pleaded guilty to drug possession, even though they did not have drugs in their possession, because the plea deals were better than the risk of going to trial.

The combination of inaccurate tests and law enforcement officers disproportionately stopping People of Color has had a disparate impact on these communities. For example, in Houston, sixty percent of individuals who were wrongfully convicted of illegal drug possession as a result of these roadside drug tests were Black.

Roadside drug tests are a prime example of how People of Color are disproportionately affected when implicit biases influence the use of flawed technology. Although neither roadside drug tests nor facial recognition technology was designed to be flawed, law enforcement agencies continue to use both despite high false positive rates. As such, the high rate of wrongful drug possession convictions of Black individuals caused by false positives from roadside drug tests may be a predictor of the future of facial recognition technology. And even if facial recognition evidence is found inadmissible at trial, it may still be used by law enforcement agencies to obtain plea deals, even when the individual, likely a Person of Color, was misidentified. Requirements placed on the procurement of facial recognition technology would help mitigate the damage that implicit biases affecting the use of flawed technology will have on People of Color.

IV. Legislation Passed by Congress Can Result in the Procurement of More Equitable Facial Recognition Technology

The federal government has an obligation to ensure that public funds are not used to perpetuate racial discrimination. In this case, to prevent the procurement of facial recognition technology entrenched with algorithmic biases and affected by implicit biases from having a disparate impact on People of Color, Congress must fulfill its obligation and pass legislation placing requirements on the procurement of facial recognition technology. Such requirements should include the following: limiting the procurement of facial recognition technology to systems that can accurately identify all individuals equally regardless of race, color, sex, age, and national origin; mandating that the company providing the facial recognition technology have an internal oversight system in place to check the accuracy of the algorithms; and requiring that the agency procuring the technology provide implicit bias training to any federal employee or contractor using the facial recognition technology.

This Section will first introduce language for proposed legislation placing requirements on the procurement of facial recognition technology and explain why each section of the proposed legislation is important to the overall goal of preventing the use of facial recognition technology from having a disparate impact on People of Color. This Section will also address why legislation placing requirements on the procurement of facial recognition technology should come from Congress rather than from agencies or at the behest of an executive order.

A. Proposed Language for Legislation Regulating the Procurement of Facial Recognition Technology

Because the overall goal for placing requirements on the procurement of facial recognition technology is to prevent the use of this technology from having a disparate impact on People of Color, the policy section of the legislation should promote accuracy across all categories of individuals. This Note proposes the following language:

(b) Policy

The head of each Federal agency shall ensure that the decisions made by the Federal agency regarding the procurement of facial recognition technology are made with the view that the technology can accurately identify all individuals equally, regardless of race, color, sex, age, and national origin.

By specifying that the “technology can accurately identify all individuals equally,” a mandate is created to mitigate algorithmic bias because federal agencies will have to assess and compare the accuracy of the algorithms of each company bidding for the contract. Further, because the language emphasizes the word “equally,” an algorithm that is accurate only ninety percent of the time would still comply with this requirement so long as it is accurate ninety percent of the time across each of the specified categories. Even though one would want the algorithms to be one hundred percent accurate, this language ensures that algorithmic bias is mitigated. For example, under this statute, an algorithm cannot be accurate for white men one hundred percent of the time but only ninety-seven percent accurate for Black men. Ultimately, this provision will force companies interested in competing for a government contract to reduce any algorithmic bias in their software.
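
As a rough illustration only, the sketch below shows how an agency might operationalize this equal-accuracy comparison when evaluating offerors: it computes per-group accuracy on a labeled evaluation set and checks whether the spread across groups stays within a tolerance. The function names, the tolerance parameter, and the data format are hypothetical; the proposed statute does not prescribe any particular test.

```python
from collections import defaultdict


def accuracy_by_group(results):
    """results: iterable of (group, correct) pairs from an evaluation set,
    e.g. ("darker-skinned women", True). Returns accuracy for each group."""
    totals, correct_counts = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        correct_counts[group] += int(bool(correct))
    return {group: correct_counts[group] / totals[group] for group in totals}


def meets_equal_accuracy(results, tolerance=0.0):
    """Check whether accuracy is equal across groups, within a tolerance.

    Under the Note's proposed policy language, an algorithm that is ninety
    percent accurate for every group could comply, while one that is one
    hundred percent accurate for white men but ninety-seven percent accurate
    for Black men would not.
    """
    accuracies = accuracy_by_group(results)
    return max(accuracies.values()) - min(accuracies.values()) <= tolerance
```

The zero tolerance default mirrors the strict reading of “equally”; an agency could relax it only to the extent its interpretation of the statute allows.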

In addition to the overall policy objective that should be met when procuring facial recognition technology, limitations on the procurement of the software and exceptions to those limitations should be included to offer more guidance for agencies. This Note proposes the following limitations and exceptions:

(c) Limitations on the procurement of facial recognition technology

(1) In general

The head of a Federal agency may not procure facial recognition technology if it is clear there are significant algorithmic inaccuracies and the offeror does not have a sufficient internal oversight system in place to consistently check the accuracy of the algorithms, unless the senior procurement executive or Chief Acquisition Officer for the Federal agency, before carrying out the procurement—

(A) conducts market research;

(B) makes a written determination that the procurement of the technology with significant accuracy issues with the algorithms is necessary and justified; and

(C) ensures that steps will be taken to mitigate the accuracy issues and/or internal oversight issues once the contract for the technology is awarded.

(2) Determination that procurement despite a significant accuracy issue is necessary and justified

(A) In general

A senior procurement executive or Chief Acquisition Officer may determine the procurement of facial recognition technology with significant algorithmic inaccuracies is necessary and justified for the purposes of paragraph (1)(B) if the benefits of the procurement substantially exceed the benefits of each of the possible alternative contracting approaches identified through the market research conducted under paragraph (1)(A).

(B) Saving of costs

For purposes of subparagraph (A), saving of costs, regardless of the types of costs, does not constitute a sufficient justification for determining that a procurement despite a significant accuracy issue is necessary and justified.

(C) Notice

Not later than 7 days after making a determination that a procurement of facial recognition technology with significant algorithmic inaccuracies is necessary and justified under subparagraph (A), the senior procurement executive or Chief Acquisition Officer shall publish a notice on a public website that such determination has been made. Along with the publication of the solicitation, the senior procurement executive or Chief Acquisition Officer shall publish a justification for the determination, which shall include the information in subparagraphs (A) through (C) of paragraph (1).

(3) Benefits to be considered

The benefits considered for the purposes of paragraphs (1) and (2) may include—

(A) the usage of the technology;

(B) whether the offeror has the ability to eliminate the significant inaccuracies and/or to create an internal oversight system to check for inaccuracies;

(C) terms and conditions; and

(D) any other benefit.

The first limitation in subsection (c)(1), prohibiting the procurement of facial recognition technology with “significant inaccuracies,” is important because significant inaccuracies include both false positives and false negatives. The phrase therefore accounts for any reason, such as algorithmic bias, why the software may be generating false positives and false negatives. While the term “significant” is arguably vague, the flexibility of the term is its strength: “significant” provides agencies with a standard while purposefully leaving room for discretion.

The second limitation specified in subsection (c)(1)—prohibiting the procurement of facial recognition technology if the offeror does not have an internal oversight system—is also essential. Internal oversight systems ensure that the algorithms and software as a whole are functioning properly over time. The statute does not specify what kind of internal oversight system must be in place because each offeror may use a different system. Furthermore, by mandating an internal oversight system, the agency will be able to compare the various oversight systems, and weigh factors such as effectiveness, evaluation, and monitoring capabilities when awarding a contract.

The exceptions to the limitations account for situations where an offeror may have significant inaccuracies in its algorithms or lack an internal oversight system but has the capability to remedy either deficiency. Because this legislation is intended to prevent the use of facial recognition technology from having a disparate impact on People of Color, the proposed language explicitly forces the agency to meet a difficult standard, namely a determination that procuring the offeror’s technology despite these deficiencies is “necessary and justified.” The agency must do this by engaging in a documented balancing test and meeting a notice requirement, which allows for accountability and transparency in the agency’s decision making as to why it would procure facial recognition technology that could perpetuate discrimination. Additionally, when engaging in the balancing test, the proposed legislation specifies that cost savings are not a sufficient justification for selecting the faulty technology, because preventing a disparate impact on People of Color should never be sacrificed for monetary purposes. Moreover, the proposed legislation includes an exception if the offeror has the ability to eliminate the significant inaccuracies and/or create an internal oversight system; this would allow the federal government to contract with an offeror that has the clear capability to fix its technology and/or operations if need be.

Critics might argue that the proposed policy objective and requirements on the procurement of facial recognition technology violate the Competition in Contracting Act (CICA) because they would limit competition to only those contractors who have the capabilities to meet these requirements rather than opening the competition to all capable contractors who want to do business with the government. Under this proposal, however, every responsible and interested contractor is afforded the opportunity to submit a proposal for the agency to consider. Rather than prohibiting certain contractors from submitting proposals, the requirements of the proposed statute provide limitations and guidance for how the agency should evaluate the proposals once it receives them.

Finally, to mitigate the implicit biases that affect how facial recognition technology is used on People of Color, this legislation mandates implicit bias training for federal employees or contractors using the technology. The following language can serve as a model:

(d) Implicit Bias Training

(1) In general

The head of each Federal agency shall ensure that any Federal agency which procures or develops facial recognition technology shall conduct implicit bias training for any employee engaged or to be engaged in the use of the procured facial recognition technology, including contractors, for the purposes of—

(A) ending discrimination caused by implicit bias in the administration of the facial recognition technology to reduce the effects implicit bias has on Persons of Color and to ensure that all people are treated with dignity and respect.

(2) Contents

An implicit bias program implemented pursuant to paragraph (1) shall include all of the following:

(A) Identification of previous or current unconscious biases and misinformation.

(B) Identification of personal, interpersonal, institutional, structural, and cultural barriers to inclusion.

(C) Corrective measures to decrease implicit bias at the interpersonal and institutional levels, including ongoing policies and practices for that purpose.

(D) Information on the effects, including, but not limited to, ongoing personal effects, of historical and contemporary exclusion and oppression of minority communities.

(E) Information about cultural identity across racial or ethnic groups.

(F) Information about communicating more effectively across identities, including racial, ethnic, religious, and gender identities.

(G) Discussion on power dynamics and organizational decision making.

(H) Discussion on inequities within biometric technology, including information on how implicit bias impacts the usage of facial recognition technology causing it to affect communities differently.

(I) Perspectives of diverse, local constituency groups and experts on particular racial, identity, cultural, and agency-community relations issues in the community.

(3) Completion

Upon completion of the initial implicit bias training, any employee engaged or to be engaged in the use of the procured facial recognition technology, including contractors, shall complete a refresher course under the implicit bias program every year thereafter, or on a more frequent basis if deemed necessary by the head of the Federal agency, in order to keep current with changing racial, identity, and cultural trends and best practices in decreasing interpersonal and institutional implicit bias.

(4) Report

The head of each Federal agency shall conduct a periodic review of, and prepare a written report on, the effectiveness of the implicit bias program established by this Act.

Mandatory implicit bias training is necessary because the procurement of facial recognition technology can also have a disparate impact on People of Color if the technology is used disproportionately on these communities due to operators’ implicit biases. While the proposed legislation does not specify what program must be used, it does specify the content of the training, allowing agencies to decide whether to create an in-house training program or contract the training out. The training will also give those operating the procured facial recognition technology time to reflect on how their implicit biases can affect the lives of others, and operators will be required to take refresher courses. Finally, the proposed legislation promotes accountability and transparency by requiring a periodic review and written report on the effectiveness of the program, which can begin to quantify how the training affects those using the facial recognition technology over time. Ultimately, while implicit biases are difficult to change, implicit bias training will bring to the attention of those using the facial recognition technology how their biases and actions may have a disparate impact on People of Color.

B. Legislation Amending the Federal Acquisition Regulation (FAR) Is More Likely to Be Passed, Will Have a Broader Reach Across All Agencies, and Will Likely Stand the Test of Time Compared to Agency Regulations or Change at the Behest of an Executive Order

Legislation placing requirements on the procurement of facial recognition technology is more likely to pass through Congress at this point in time because Members of Congress have already expressed concern about the impact that biases in this technology can have on People of Color and are already working on legislation regulating facial recognition technology. Further, Congress can direct amendments to the FAR when the legislation directly addresses the same topic as the procurement amendments. In this case, because legislation passed by Congress would specifically relate to the use of facial recognition technology, Congress would be able to include a subsection placing requirements on federal procurement of commercial facial recognition technology.

Because this technology is being procured by various agencies, it would be more prudent for Congress to amend the FAR through legislation so that the requirements apply to all agencies, rather than leaving the mitigation of disparate impacts to occur on an agency-by-agency basis. Further, these requirements are more likely to stand the test of time if they go through the legislative process rather than being created at the behest of an executive order, because executive orders can be repealed by future administrations, whereas laws passed by Congress tend to be more durable.

Opponents may argue that placing requirements on the procurement of facial recognition technology may not be realistic because the interests and lobbying power of the companies who own this technology may prevent this legislation from ever passing. However, legislation placing requirements on facial recognition technology has already been supported by major companies. In 2018, the president of Microsoft, Brad Smith, published a letter calling for the federal government to regulate facial recognition technology. In the letter, Smith acknowledged that industries building and using facial recognition technology also have an ethical responsibility to reduce bias in facial recognition technology by partnering with various groups to create more diverse datasets, becoming more transparent about their use of facial recognition technology, slowing down the use of this technology in society, and participating in public policy discussions related to facial recognition technology. In 2020, Microsoft stopped selling facial recognition technology to police departments, announcing that “[w]e will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.” Yet Smith is not the only representative from a major company calling for government regulation. In 2019, the founder, chief executive officer, and president of Amazon, Jeff Bezos, also announced support for federal oversight regulating facial recognition technology. In fact, Amazon’s public policy team drafted provisions that the company would like to see in any law regulating facial recognition technology. While Amazon’s proposal may not offer protections as strong as the ones Congress is considering, its announcement of a one-year moratorium on police use of Rekognition implies that Amazon is not completely opposed to federal regulation.

Ultimately, legislation amending the FAR and regulating the procurement of facial recognition technology will help protect People of Color from being disproportionately affected in the long term because this policy will have a broader reach across all agencies and will likely stand the test of time, as compared to protections put in place by agency regulations or executive orders. Further, this is a realistic approach because this kind of legislation is supported by members of Congress and major actors in the industry.

V. Conclusion

As Congress considers legislation regulating facial recognition technology, requirements and limitations for how agencies should procure commercial facial recognition technology are vital to mitigate the disparate impact algorithmic and implicit biases can have on People of Color. The solution proposed by this Note would help eliminate algorithmic bias in commercial facial recognition technology because companies wanting to do business with the federal government will have to show that the software can accurately identify all individuals equally regardless of race, color, sex, age, and national origin. In most cases, the federal government would not be able to contract with a company that has significant algorithmic inaccuracies and has no internal oversight system in place to consistently check the accuracy of the algorithms. The solution proposed by this Note would also help to lessen the impact that implicit biases of those using the facial recognition technology could have on People of Color by mandating that implicit bias training must be provided for any federal employee or contractor engaged in the use of commercial facial recognition technology. While these proposals may not eliminate bias altogether, especially implicit bias, the requirements and limitations on the procurement of commercial facial recognition technology are an important first step. Congress must be proactive in protecting People of Color from being disproportionately affected yet again by another government-enforced policy.
