

GPSolo eReport May 2024

Police Use of AI-Powered Facial Recognition Technology and the Risk of Racial Bias

Kenneth Chike Odiwe

Summary

  • When used by law enforcement in identifying crime suspects, facial recognition technology (FRT) powered by artificial intelligence can reinforce unconscious bias and lead to the arrest of innocent Black people.
  • FRT programs are more prone to error on facial images depicting Black males than facial images depicting white males.
  • Testing, monitoring, and the implementation of enforceable safety measures are imperative to prevent constitutional violations in the use of FRT and AI.


Artificial intelligence (AI) and computer algorithms play an ever-greater role in modern life. AI now informs decisions in fields as diverse as health care, employment, education, and the judicial system, including law enforcement.

Law enforcement has begun to use facial recognition technology (FRT) to help identify crime suspects. FRT is an AI-powered technology that uses machine learning algorithms to identify facial features and match a face against images of other faces in a database. In essence, it is the automated process of comparing two facial images to determine whether they depict the same person.

Because public and private cameras are now ubiquitous, police officers have abundant sources from which to obtain facial images of suspects before, during, or after a criminal act. Once an image is obtained, officers can enter it into an FRT program, which returns candidate faces from a large database for investigators to consider. Officers then decide which of those candidates should be questioned or arrested.
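The matching step at the heart of an FRT program can be pictured with a short sketch. The Python example below is a minimal, hypothetical illustration: the identifiers, embeddings, and gallery are invented stand-ins, and a real system would derive its embeddings from a deep-learning face model and search a far larger database, such as mugshot or driver’s-license records.

```python
# Hypothetical sketch of FRT candidate matching: compare a "probe" face
# embedding against a gallery of enrolled identities and rank the closest ones.
# The embeddings here are random stand-ins for the vectors a face model produces.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1]; higher means the two faces look more alike to the model."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe: np.ndarray, gallery: dict, top_k: int = 5):
    """Score every enrolled identity against the probe and return the top matches."""
    scores = {person_id: cosine_similarity(probe, emb) for person_id, emb in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Hypothetical gallery: 128-dimensional embeddings for 1,000 enrolled identities.
gallery = {f"license_{i:04d}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # embedding extracted from a surveillance still

for person_id, score in rank_candidates(probe, gallery):
    print(f"{person_id}: similarity {score:.3f}")
```

The program returns a ranked list, not a definitive answer; everything that follows in the investigation depends on how officers interpret that list.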

Facial Recognition Technology and Unconscious Racial Bias

FRT algorithms and datasets have demonstrated the potential for unconscious racial bias. AI-powered FRT programs used by law enforcement to identify crime suspects are significantly more prone to error on facial images depicting Black males than on facial images depicting white males. The bias results from the lack of diversity, in particular the lack of Black faces, in the datasets used to train the algorithms. This bias can lead to citizens being wrongfully investigated by police along racial lines.

The potential for AI-generated bias in suspect identification makes it clear that the technology must be implemented with caution. AI detects patterns and can reproduce any biased behavior of its programmers; it has the capacity to reflect, and even exacerbate, the worst aspects of biased decision-making.

In June 2020, NPR and other news outlets reported on the arrest of Robert Williams, a Black man. Williams is the first person in the United States known to have been arrested by mistake because of a racially biased FRT program. Law enforcement wrongfully arrested Williams after an FRT program misidentified him in an image captured by a security camera during a theft at a retail store in Michigan. The program mistakenly matched the image to Williams’s driver’s license photo. After his arrest and release on bond, the Wayne County prosecutor’s office dropped the charges against Williams for insufficient evidence. Williams’s case offers a real-life look at the potential consequences of law enforcement’s use of racially biased FRT programs.

Legislative Remedies

Robert Williams’s case has encouraged some state and local governments to take legislative action, including policy debates, regulations, and moratoriums. As of 2024, however, no federal law in the United States directly regulates the use of FRT.

More than one-quarter of local and state police forces and half of federal law enforcement agencies use FRT programs. The extensive use of FRT poses a significant threat to the Fourth Amendment right to be free from unreasonable searches and seizures. Because of that constitutional threat, cities such as San Francisco and Boston have banned or restricted government use of FRT.

In 2022, President Biden’s administration released the Blueprint for an AI Bill of Rights, which outlines practices intended to protect constitutional rights. The blueprint, however, is not enforceable. Congressional Democrats have also introduced the Facial Recognition and Biometric Technology Moratorium Act, which would suspend the use of FRT until lawmakers can implement regulations that balance constitutional concerns and public safety. The AI Bill of Rights and the Moratorium Act are both attempts to protect citizens from the potential constitutional overreach of FRT. However, the blueprint does not cover law enforcement’s use of AI, and the moratorium would limit the use of automated facial recognition only by federal authorities, not local governments.

Changes to Policy and Regulation

FRT will contribute to discriminatory law enforcement practices unless safeguards are put in place at the local level. In the United States, Black people are stopped, searched, arrested, charged, convicted, and wrongfully convicted at disproportionate rates. Black communities are therefore more vulnerable to enforcement disparities and constitutional violations when their police departments continue the unregulated use of FRT.

Law enforcement uses FRT to identify stopped or arrested persons in the field, to identify people in video footage, and to identify people in real time from surveillance footage. Although officers make the final investigative decisions, there is a danger in the mistaken belief that FRT does not misidentify suspects. Officers may base enforcement decisions on unwarranted faith in the system’s accuracy, or rely on the technology so heavily that they stop scrutinizing their own conduct. That reliance can reach a point where officers favor outcomes that match unconscious stereotypes about Black criminality.

Law enforcement should steer clear of overreliance on FRT. Tech companies and law enforcement must work together to reduce the risk that reliance on FRT will automate and perpetuate unconscious racial bias.

Tech companies must also consider diversity among their programmers if they are to create reliable facial recognition programs. In the United States, most programmers are white men, and facial recognition programs are considerably better at identifying members of the programmers’ own race. Programmers unconsciously transmit their biases into the algorithms: they focus on facial features familiar to them, and the algorithm is then tested mainly on people of their own race. The resulting imbalance in the types of faces in the training dataset diminishes the algorithm’s capacity to accurately recognize people of color.

Ironically, because Black people are overrepresented in mugshot databases and other image sources commonly used by law enforcement, FRT is more likely to mark Black faces as criminal. Once again, disproportionate representation carries the risk that the algorithm can lead to the arrest of innocent Black people.

Law enforcement has a duty to scrutinize its methods to prevent the use of FRT from widening racial disparities and producing more civil rights violations. After a facial recognition program generates images of potential suspects, it ranks the candidates by how similar the program judges the images to be. Officers then apply their own criteria for how high a similarity score must be to count as a match, and that discretion raises the risk of wrongful arrests.
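The effect of that discretion can be shown with a small hypothetical example: the same ranked candidate list yields one possible match or several, depending on the similarity cutoff an investigator chooses. The identifiers and scores below are invented for illustration only.

```python
# Hypothetical ranked output from an FRT program (identifier, similarity score).
candidates = [
    ("license_0412", 0.81),
    ("license_0087", 0.74),
    ("license_0903", 0.69),
    ("license_0215", 0.55),
]

def matches_above(candidates, threshold):
    """Return only the candidates an investigator would treat as possible matches."""
    return [(pid, score) for pid, score in candidates if score >= threshold]

print(matches_above(candidates, threshold=0.80))  # strict cutoff: one candidate
print(matches_above(candidates, threshold=0.60))  # lenient cutoff: three candidates
```

Nothing in the technology itself dictates where that cutoff sits; the choice is a human one, which is precisely why it needs oversight.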

Using FRT Responsibly

Police departments will continue to implement FRT, but they cannot be opaque about when and how they use the technology. It is imperative that state and local governments minimize racial bias in AI-powered tools through testing, monitoring, and enforceable safety measures, both to prevent constitutional violations and to ensure that algorithmic bias and unconscious systemic racism are not built into the digital future.
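One concrete form that testing could take is a routine audit of a system’s false-match rate, broken out by demographic group, before and during field use. The sketch below is a minimal illustration under assumed data: the groups, scores, and threshold are invented, and a real audit would use thousands of labeled image pairs per group.

```python
# Hypothetical bias audit: measure the false-match rate separately for each
# demographic group. A persistent gap between groups is a warning sign that the
# system will misidentify people in one group more often than in another.
from collections import defaultdict

# Each record: (demographic group, similarity score, whether the pair truly shows the same person).
evaluation_pairs = [
    ("group_a", 0.82, False), ("group_a", 0.41, False), ("group_a", 0.91, True),
    ("group_b", 0.77, False), ("group_b", 0.85, False), ("group_b", 0.93, True),
    # ...in practice, thousands of labeled pairs per group
]

MATCH_THRESHOLD = 0.75  # assumed operating point of the system under test

def false_match_rates(pairs, threshold):
    """Share of non-matching pairs the system would nonetheless call a match, per group."""
    non_matches = defaultdict(int)
    false_matches = defaultdict(int)
    for group, score, same_person in pairs:
        if not same_person:
            non_matches[group] += 1
            if score >= threshold:
                false_matches[group] += 1
    return {group: false_matches[group] / non_matches[group] for group in non_matches}

for group, rate in false_match_rates(evaluation_pairs, MATCH_THRESHOLD).items():
    print(f"{group}: false-match rate {rate:.0%}")
```

An enforceable safeguard might require that such audits be published and that a system showing a significant disparity between groups be taken out of service until the gap is closed.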
