Facial recognition technology (FRT) is a form of artificial intelligence (AI) that has the potential to enhance public safety and to aid in identifying people who have committed crimes. AI is a branch of computer science concerned with creating intelligent systems that can reason, learn, and act with little human intervention.
At its core, FRT captures, analyzes, and compares facial characteristics against vast databases of images. Both FRT and AI have evolved exponentially over the past 10 years, leading to a sharp increase in the adoption of facial recognition tools. This technology has transformed the landscape of private and public law enforcement investigations. However, recent instances of misuse and overreliance on FRT by investigators highlight concerns about privacy violations, racial bias, and the potential for abuse. Recent case law suggests that the application of FRT without proper human oversight and regulation has led to instances of wrongful accusations, arrests, and detentions.
The increasing adoption of FRT and AI necessitates careful examination of their legal and ethical ramifications. This article examines the complex issues surrounding the use of facial recognition technology in law enforcement, analyzes its impact on individual rights, and explores the ongoing efforts to strike a balance between security and privacy.
Understanding Facial Recognition Technology
Facial recognition technology is software that uses a person’s facial features to verify that person’s identity. Thorin Klosowski, Facial Recognition Is Everywhere. Here’s What We Can Do About It, N.Y. Times: Wirecutter (July 15, 2020). FRT normally relies on proprietary algorithms to detect, analyze, recognize, and identify an individual from an image.
Detection, the first component, involves the FRT software locating a face within a video or image. Detection has been aided by advances in computing power and AI-assisted facial detection methods. It begins with an algorithm that is programmed to learn what a face is; to fine-tune detection, the programmer trains the algorithm on images of faces.
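To make the detection step concrete, the following is a minimal Python sketch using OpenCV’s freely available Haar-cascade face detector. It is offered as an illustration only; commercial FRT systems rely on proprietary, typically deep-learning-based detectors, and the input file name here is a hypothetical placeholder.

```python
# Minimal sketch of face detection using OpenCV's bundled Haar-cascade
# model -- an open-source stand-in for the proprietary detectors most
# FRT systems use. "scene.jpg" is a hypothetical input image.
import cv2

# Load the pretrained frontal-face cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, width, height) bounding box for one face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"Face located at x={x}, y={y}, size {w}x{h}")
```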
Analysis is the second component of FRT. During analysis, the system creates a geometric map of a face. The mapping typically measures such facial features as the distance between the mouth and nose, the distance between the eyes, and the curvature of the chin. These measurements are then converted into a string of points or numbers, referred to as a “faceprint.” Id.
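The faceprint concept can likewise be illustrated in a few lines of code. The sketch below converts a handful of hypothetical landmark coordinates into a small numeric vector; real systems measure far more features (or learn them automatically), so the landmark names and values here are assumptions for demonstration only.

```python
# Illustrative only: turning facial landmarks into a "faceprint" vector.
# The landmark coordinates below are hypothetical stand-ins for output
# from a real landmark detector.
import numpy as np

landmarks = {
    "left_eye":  np.array([102.0, 118.0]),
    "right_eye": np.array([158.0, 117.0]),
    "nose_tip":  np.array([130.0, 150.0]),
    "mouth":     np.array([131.0, 182.0]),
    "chin":      np.array([132.0, 214.0]),
}

def faceprint(pts: dict) -> np.ndarray:
    """Encode a few geometric measurements as a numeric vector."""
    eye_distance = np.linalg.norm(pts["right_eye"] - pts["left_eye"])
    nose_to_mouth = np.linalg.norm(pts["mouth"] - pts["nose_tip"])
    mouth_to_chin = np.linalg.norm(pts["chin"] - pts["mouth"])
    # Normalizing by eye distance makes the print scale-invariant.
    return np.array([nose_to_mouth, mouth_to_chin]) / eye_distance

print(faceprint(landmarks))  # a tiny two-number faceprint
```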
Recognition involves attempts by the program to confirm a person’s identity from an image. A common use of facial recognition is smartphone verification. Microsoft, Google, and Apple have implemented facial recognition as a security feature on devices through Windows Hello, Android’s Trusted Face, and Face ID. Nick Statt, Microsoft’s Windows Hello Will Make Your Face, Finger or Iris the New Sign-in, CNET (Mar. 17, 2015); Cameron Summerson, Why Face ID Is Much More Secure Than Android’s Face Unlock, How-To Geek (Jan. 30, 2019); Sean Hollister, iPhone X: How Face ID Works, CNET (Sept. 20, 2017). At its most simplistic, through recognition an FRT system seeks to answer the question “who is this person?”
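At the code level, a device-unlock style verification reduces to a threshold comparison between the faceprint enrolled on the device and the one just captured. The vectors and the threshold in this sketch are illustrative assumptions, not any vendor’s actual parameters.

```python
# Sketch of 1:1 verification (e.g., device unlock): compare the faceprint
# captured now against the one enrolled on the device. The vectors and
# threshold are illustrative assumptions, not a vendor's real values.
import numpy as np

enrolled = np.array([0.57, 0.57, 1.02])   # stored at enrollment
captured = np.array([0.58, 0.55, 1.01])   # computed from the live camera

distance = np.linalg.norm(enrolled - captured)
THRESHOLD = 0.10  # tuned to trade off false accepts vs. false rejects

verdict = "unlocked" if distance < THRESHOLD else "denied"
print(verdict, f"(distance={distance:.3f})")
```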
Identification, the last component, occurs when the FRT system accesses a database of images and attempts to identify a person by cross-referencing the captured face against that database. The databases consist of myriad sources, including social media, mug shots, and other accessible image repositories.
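Identification extends that comparison from one-to-one to one-to-many. The sketch below ranks a small, invented database of faceprints by distance to a probe image’s faceprint, producing the kind of ranked candidate list discussed later in this article; every record and vector is fabricated for illustration.

```python
# Sketch of 1:N identification: rank every database entry by its distance
# to the probe faceprint. All database entries are invented examples.
import numpy as np

database = {
    "record_0001": np.array([0.57, 0.57, 1.02]),
    "record_0002": np.array([0.70, 0.49, 0.95]),
    "record_0003": np.array([0.59, 0.56, 1.00]),
}
probe = np.array([0.58, 0.55, 1.01])

# Smaller distance = stronger candidate. Real systems typically report
# similarity or confidence scores rather than raw distances.
candidates = sorted(
    database.items(), key=lambda item: np.linalg.norm(item[1] - probe)
)
for record_id, vector in candidates:
    print(record_id, f"distance={np.linalg.norm(vector - probe):.3f}")
```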
A History of Facial Recognition Technology
Developing FRT has been a long and evolving process, with significant advancements made over the past 60 years. FRT began in 1964, when a team of American researchers led by Woodrow Bledsoe first studied facial recognition computer programming. Facial Recognition History, Thales. The researchers classified photographs of faces digitized by hand, and the resulting facial measurements were stored in a database. Kevin Bonsor et al., How Facial Recognition Technology Works, HowStuffWorks (Apr. 17, 2024). Their work focused on creating a semi-automatic method that would allow operators to enter 20 specific measurements, such as the size of the mouth or eyes.
Facial recognition continued to improve throughout the 1970s, 1980s, and 1990s. A breakthrough came in 1991, when Matthew Turk and Alex Pentland of MIT presented the first successful example of facial recognition technology, known as Eigenfaces. Matthew A. Turk & Alex P. Pentland, Face Recognition Using Eigenfaces, Proc. 1991 IEEE Comput. Soc’y Conf. on Comput. Vision & Pattern Recognition 586. This method used statistical principal component analysis (PCA) to identify patterns in faces, paving the way for future advancements. In 1997, the Fisherfaces method was introduced and shown to have lower error rates than the Eigenfaces technique. P.N. Belhumeur et al., Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, 19 IEEE Transactions on Pattern Analysis & Mach. Intel. 711 (1997).
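For the technically curious, the essence of the Eigenfaces method can be reproduced in a few lines: PCA (computed here via singular value decomposition) extracts the principal components of a set of face images, and each face is then summarized by a short vector of component weights. The random pixel data below stands in for a real training set of photographs.

```python
# Toy Eigenfaces sketch: PCA over flattened face images via SVD.
# Random pixels stand in for a real training set of face photographs.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))        # 100 images, 64x64 pixels each

mean_face = faces.mean(axis=0)
centered = faces - mean_face              # PCA requires centered data

# Rows of vt are the principal components -- the "eigenfaces."
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:20]                      # keep the top 20 components

# Each face is now summarized by 20 coefficients instead of 4,096 pixels.
weights = centered @ eigenfaces.T
print(weights.shape)                      # (100, 20)
```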
Meanwhile, beginning in 1993, the Defense Advanced Research Projects Agency sponsored the Face Recognition Technology (FERET) program, which was designed to create a large database of facial images collected independently from algorithm developers. Face Recognition Technology (FERET), NIST (updated July 13, 2017). The FERET database consisted of 14,126 images covering 1,199 individuals and 365 duplicate sets of images. Ultimately, this effort helped encourage industry and academia to focus on developing more accurate facial recognition technology.
Fast forward to 2005, when the Face Recognition Grand Challenge (FRGC) competition was launched, aimed at developing face recognition technology that could support existing initiatives. The FRGC encouraged researchers to push the boundaries of what was possible with facial recognition technology.
The next major breakthrough came in 2006 with the advent of deep learning, a machine learning method based on artificial neural networks. Deep learning allowed computers to select and compare facial data points automatically and to improve as more images were provided. Advancements in FRT then accelerated sharply.
In 2014, Facebook revealed its internal algorithm, DeepFace, which was reported to recognize faces with an accuracy rate of nearly 97 percent, rivaling the human eye. Will Oremus, Facebook’s New Face-Recognition Software Is Scary Good, Slate (Mar. 18, 2014). This marked a significant milestone in the development of facial recognition technology and highlighted the incredible progress made over the years.
FRT: Widespread Use and Growing Concerns
Based in large part on these advancements, FRT began to be deployed widely. By 2016, images of an estimated half of all American adults were stored in law enforcement facial recognition databases; similar systems were also in use in at least 10 European Union countries. Ctr. on Priv. & Tech., Half of All American Adults Are in a Police Face Recognition Database, New Report Finds, Geo. Law (Oct. 18, 2016).
In 2017, President Donald Trump signed an executive order speeding up the use of FRT at US borders. Davey Alba, The US Government Will Be Scanning Your Face at 20 Top Airports, Documents Show, BuzzFeed News (Mar. 11, 2019). In 2018, the Department of Homeland Security reported that US Customs and Border Protection planned to scan 97 percent of departing passengers by 2024 under its biometric exit program. US Dep’t of Homeland Sec., Fiscal Year 2018 Entry/Exit Overstay Report (2018). In a June 2021 report to Congress, the US Government Accountability Office (GAO) found that 20 of the 42 federal agencies that employ law enforcement officers own systems “with facial recognition technology or us[e] systems owned by other entities such as other federal, state, local, and non-governmental entities.” US Gov’t Accountability Off., GAO-21-518, Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks (June 2021).
In 2019, the International Association of Chiefs of Police (IACP) published “Guiding Principles for Law Enforcement’s Use of Facial Recognition Technology.” The IACP encourages a cautious approach to FRT results, warning that results from FRT are “NOT a positive identification of an individual. In the law enforcement investigations context, facial recognition is a tool that potentially develops an investigative lead.” The IACP recommends five principles. First, agencies should create FRT usage policies consistent with applicable laws. Second, the user agency must ensure that its policies protect the constitutional rights of individuals and prohibit any use of FRT that “would violate the individual’s rights under the First and Fourth Amendments.” Third, FRT results for an individual should be “ranked based on computational analysis of the similarity of features.” Fourth, any information and images appearing in a list of FRT candidates are to be used only for investigative lead generation; they are not to be treated as a positive identification or used as the sole basis for any law enforcement action. Fifth, law enforcement officers should be required to complete training on FRT before being authorized to use it.
Entertainment venues also began to utilize FRT as a security measure. Kevin Draper, Madison Square Garden Has Used Face-Scanning Technology on Customers, N.Y. Times (Mar. 13, 2018). In 2018, pop star Taylor Swift held a concert at the Rose Bowl in Pasadena, California, where the venue was monitored by an FRT system. Stefan Etienne, Taylor Swift Tracked Stalkers with Facial Recognition Tech at Her Concert, The Verge (Dec. 12, 2018). The FRT system was built into a kiosk that displayed portions of her rehearsals. The kiosk not only displayed this content but also contained a facial recognition camera that secretly recorded viewers’ faces. The images were transferred to a “command post” in Nashville, Tennessee, where they were “cross-referenced with a database of hundreds of the pop star’s known stalkers.” Steve Knopper, Get Ready for Your Close-Up, Rolling Stone (Dec. 10, 2018).
Facial recognition also is being implemented in health care. In one use case, FRT is being used to identify patients and diagnose genetic conditions. Nahid Widaatalla, AI and Facial Recognition Dive into Global Health Care, Think Global Health (May 6, 2024). Medical facilities also have utilized FRT to help identify patients, match medical records, and secure access to certain locations within a health care facility. Research has shown that FRT can help detect certain forms of rare genetic diseases by analyzing facial features and patterns quickly. Additionally, FRT has been used to screen patients for pain.
Additionally, some public schools in the US are using FRT to track people banned from campus or to record class attendance. Arianna Prothero, Does Facial Recognition Technology Make Schools Safer? What Educators Need to Know, Education Week (Oct. 13, 2023). Many schools across the United States have spent COVID recovery funds to purchase security equipment and hardware, including various FRT solutions. Schools also are considering implementing AI-powered FRT and weapons recognition software on campuses to promote safety and prevent school shootings.
Forms of facial recognition now reside everywhere. FRT is active on our computers and phones and in airports, hospitals, schools, and concert venues. Despite this expansive reach, bias has been shown to be a shortcoming of many FRT models.
The Problems of Bias in FRT Systems
The National Institute of Standards and Technology (NIST) conducted an evaluation of 189 FRT software algorithms from 99 developers. NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software, NIST (Dec. 19, 2019). The testing showed that FRT systems produced higher rates of false positives for certain demographic groups. For the purposes of the study, a false positive means that the FRT incorrectly considered images of two different people to be the same person. For US-developed FRT algorithms, the research showed higher rates of false positives for Asian, African American, and Native American faces compared to Caucasian faces. In fact, the American Indian demographic had the highest rate of false positives.
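The disparity NIST measured can be expressed as a simple per-group calculation: among comparisons of images of two different people, what fraction does the algorithm wrongly declare a match? The sketch below shows the arithmetic on fabricated data; it does not reproduce NIST’s actual figures.

```python
# Sketch of measuring demographic false-positive rates. Each trial compares
# images of two DIFFERENT people; a "match" verdict is a false positive.
# All entries here are made up solely to illustrate the arithmetic.
from collections import defaultdict

# (demographic group, algorithm said "match") for impostor pairs
trials = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

matches = defaultdict(int)
totals = defaultdict(int)
for group, matched in trials:
    totals[group] += 1
    matches[group] += matched

for group in totals:
    fpr = matches[group] / totals[group]
    print(f"{group}: false-positive rate = {fpr:.2f}")
```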
Although FRT is being used in some schools, the New York State Education Department became the first state education agency to permanently ban the use of FRT in schools. Press Release, N.Y. St. Educ. Dep’t, State Education Department Issues Determination on Biometric Identifying Technology in Schools (Sept. 27, 2023). The State of New York now prohibits schools from purchasing or using FRT. In its 2023 report on biometric technology, one of the concerns identified with the use of FRT in schools was the potential for higher rates of “false positives for people of color,” as well as for other demographic groups defined by age, gender, and sexual identity. N.Y. State Off. of Info. Tech. Serv., Use of Biometric Identifying Technology in Schools (2023).
State Statutes Governing FRT
In 2008, the State of Illinois enacted the Biometric Information Privacy Act (BIPA), which limits private firms’ ability to collect biometric data without consent. Scans of facial geometry are included in the Act’s definition of biometric data. Under the Act, notice and written opt-in consent are required before gathering any form of biometric information. Further, the law gives a person a private right of action against a party that violates it: $1,000 per violation if the violation was negligent and $5,000 per violation if the violation was intentional or reckless. In 2021, Facebook paid $650 million to settle a BIPA lawsuit. Patrick McKnight, Historic Biometric Privacy Suit Settles for $650 Million, Bus. L. Today (Jan. 28, 2021).
Massachusetts General Laws chapter 6, section 220 establishes the legal framework for conducting facial recognition searches in Massachusetts, including the circumstances under which law enforcement may use FRT. The law requires law enforcement agencies to document their use of FRT by maintaining a copy of any written request for a facial recognition search, the date and time of the request, the database searched, who conducted the search, the reason for the request, and details pertaining to the specific characteristics of the facial recognition request. This documentation must be provided to the executive office of public safety and security, but the specific details of each search are not public records. The law is silent on whether FRT documentation is discoverable in a criminal case.
Virginia Code sections 52-4.5, 15.2-1723.2, and 23.1-815.1 set forth the authorized uses of FRT by law enforcement in Virginia and the limitations on its use. Under section 52-4.5, law enforcement may use FRT when there is a reasonable suspicion that an individual has committed a crime or when an individual is a victim of or witness to a crime. Law enforcement is prohibited from using FRT to track the movements of an identified person in real time, to create a database of images for the purpose of using FRT, or to submit a comparison photo to a commercial image repository, unless done pursuant to an authorized use. The section also requires the Virginia Department of State Police to establish a State Police Model Facial Recognition Technology Policy.
Washington Revised Code section 43.386.080 also codifies specific restrictions on FRT use by law enforcement. Under this law, law enforcement may only use FRT if it obtains a warrant, exigent circumstances exist, or a court order is obtained to locate or identify a missing or deceased individual. Law enforcement may not target individuals based on protected characteristics (such as race, ethnicity, religion, or political views) and may not use FRT results as the only basis for establishing probable cause in a criminal investigation. Additionally, section 43.386.020 requires both state and local law enforcement agencies to file a notice of intent to use FRT and to produce an accountability report before using the technology.
California cities such as San Francisco, Oakland, and Berkeley have banned facial recognition use by government agencies. Kate Conger et al., San Francisco Bans Facial Recognition Technology, N.Y. Times (May 14, 2019); Sarah Ravani, Oakland Bans Use of Facial Recognition Technology, Citing Bias Concerns, S.F. Chron. (July 17, 2019); Tom McKay, Berkeley Becomes Fourth U.S. City to Ban Face Recognition in Unanimous Vote, Gizmodo (Oct. 16, 2019). In 2020, Portland, Oregon, went a step further, becoming the first American city to ban the use of FRT not only by government and law enforcement but also by businesses. Rachel Metz, Portland Passes Broadest Facial Recognition Ban in the US, CNN (Sept. 9, 2020).
New Jersey Appellate Court Upholds Defendant’s Right to FRT Discovery
The Appellate Division of the New Jersey Superior Court recently held that a defendant was entitled to discovery related to FRT that was used to identify him as a suspect. State v. Arteaga, 476 N.J. Super. 36 (App. Div. 2023).
On November 29, 2019, an armed robbery took place at a retail shop. A store employee was held at gunpoint, pistol-whipped, and robbed of $8,950.00. The employee described the perpetrator as a “Hispanic male wearing a black skully hat” and holding a black handgun.
The store’s manager, although not present during the robbery, believed she recognized the assailant. She recalled a man who had been in the store earlier: he asked about a cell phone case, waited briefly in line, and then left without making a purchase. She saw him again outside the store, adjusting his gloves as he walked back toward the store.
The store’s surveillance camera captured the individual’s previous visit and the robbery. Additionally, detectives obtained footage from a nearby property, showing the man walking around the store for about 10 minutes.
A still image of the man was generated from the footage and submitted to the New Jersey Regional Operations Intelligence Center (NJROIC) for facial recognition analysis. Initially, an NJROIC investigator reported no matches but stated he could run the query again with a better image. Instead, the New Jersey investigators sent the raw surveillance footage to the Facial Identification Section of the New York Police Department’s Real Time Crime Center (NYPD RTCC). An investigator there generated an image from the footage, compared it to the center’s databases, and suggested Mr. Arteaga as a “possible match.” Id. at 43.
Using this possible match, detectives in New Jersey created two different image arrays to show the witnesses. These arrays had five filler images of different people and the image generated by the NYPD RTCC. Upon reviewing these arrays, both the victim and store manager independently identified Mr. Arteaga as the perpetrator.
A grand jury indicted Mr. Arteaga on charges of first-degree robbery, third-degree aggravated assault, fourth-degree aggravated assault, second-degree possession of a weapon for an unlawful purpose, first-degree unlawful possession of a weapon, and second-degree certain persons not to have a weapon.
Mr. Arteaga’s counsel sent the prosecution a discovery demand for information about the FRT software used to conduct the search and the items involved in the search, pursuant to N.J. Ct. R. 3:13-3 and Brady v. Maryland, 373 U.S. 83 (1963). The defense requested the name of the FRT manufacturer, the source code for the FRT algorithm, a list of the identifying marks used by the system, error rates, the original copy of the image submitted to the FRT for analysis (the probe image), a copy of the database image that matched the probe image, a description of the confidence scores generated by the FRT system, a copy of the candidate list of images, a list of the parameters used by the FRT database, the report produced by the technician who ran the FRT software, and the qualifications of the technician who ran the search query.
The prosecution responded to the discovery demand by providing the defense with a copy of the probe photo and a copy of the candidate list. The defense subsequently filed a motion to compel production of all items listed in its discovery demand. The trial court denied the motion, concluding that the prosecution had no obligation to produce the discovery because the FRT was not within its care, custody, or control. The trial court further held that the requested materials were not Brady material. Mr. Arteaga appealed.
The appellate court reversed the denial of discovery to Mr. Arteaga. It found that a defendant had a due process right to test the reliability of the FRT. “The evidence sought here is directly tied to the defense’s ability to test the reliability of the FRT. As such, it is vital to impeach the witnesses’ identification, challenge the State’s investigation, create reasonable doubt, and demonstrate third-party guilt.” Arteaga, 476 N.J. Super. at 57. In effect, the court recognized the need for transparency of information about the FRT system. It held that the state’s failure to provide information to Mr. Arteaga about the FRT and how it can misidentify suspects deprived him of his due process right to challenge an investigative tool used by law enforcement.
Robert Williams: A Victim of a Flawed Use of Facial Recognition
Robert Williams was arrested at his home in front of his wife and two young daughters for allegedly shoplifting watches from a Detroit store. This accusation was based on a year-old surveillance video of a shoplifter whose face was obscured and poorly lit, leading the facial recognition system to incorrectly identify Mr. Williams.
The detective responsible for the investigation failed to perform a thorough investigation, relying almost entirely on the facial recognition match. This match was based on an outdated driver’s license photo of Mr. Williams. Despite knowing the limitations and potential for error with facial recognition, particularly with poor-quality images and its higher rate of misidentification for Black individuals, the Detroit Police Department (DPD) proceeded with the arrest without seeking corroborating evidence.
Mr. Williams was detained for approximately 30 hours in a filthy, overcrowded cell without any clear information about the charges against him. The arrest and detention caused significant emotional and psychological harm to Mr. Williams and his family, particularly his young daughters who witnessed the arrest.
Mr. Williams filed a federal lawsuit against the City of Detroit, the then–police chief of the Detroit Police Department, and the detective responsible for investigating Mr. Williams. The matter settled in June 2024. As part of the settlement agreement, the DPD will enforce a new directive that sets strict guidelines on the use of facial recognition technology. This technology can now only be used in investigations of serious crimes, such as violent offenses or first-degree home invasions. Moreover, any leads generated by facial recognition must be corroborated by additional, independent evidence before the police can make an arrest or request a warrant. The directive also explicitly prohibits the use of facial recognition for surveillance, live streaming, or analyzing recorded videos, ensuring that the technology is not misused.
In addition to changes in how facial recognition is used, the DPD has committed to revise its procedures for eyewitness identifications and lineups. These revised procedures are designed to minimize the risk of misidentification, including preventing witnesses from being informed that a suspect was identified through facial recognition. The DPD also will ensure that all lineups are conducted in a way that reduces suggestiveness, further protecting against wrongful identifications.
To address past cases where facial recognition was used, the DPD will conduct an audit of all relevant cases dating back to February 2017. This audit will examine whether there was sufficient independent evidence to justify any arrests or warrants that followed the use of facial recognition technology. Any cases found to lack proper evidence will be reported to the appropriate prosecutor for review.
The settlement also mandates that the DPD provide comprehensive training to its detectives, investigators, and supervisors on the correct use of facial recognition technology and the new eyewitness identification procedures. This training is intended to ensure that all officers are fully aware of the limitations of facial recognition and the importance of corroborating evidence before taking any legal action.
For the next four years, the DPD is restricted from making any substantive changes to these new policies that would reduce the protections they afford. The ACLU Fund of Michigan must review any proposed changes, ensuring continued oversight of the DPD’s use of this technology.
In total, these steps ensure that FRT is provided human-powered guardrails to maintain its transparency and appropriate use in criminal investigations in the City of Detroit.
The FTC Steps In: Rite Aid’s FRT Practices Prohibited for Five Years
The Federal Trade Commission prohibited Rite Aid, a retail drugstore chain, from using facial recognition technology for surveillance purposes for a period of five years. Press Release, Fed. Trade Comm’n, Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology Without Reasonable Safeguards (Dec. 19, 2023).
On December 19, 2023, the FTC filed a complaint against Rite Aid. The FTC alleged that from approximately October 2012 until July 2020, Rite Aid utilized FRT in hundreds of its retail locations, predominantly in low-income, nonwhite neighborhoods, to identify customers it had “previously deemed likely to engage in shoplifting or other criminal behavior” and to prevent those individuals from entering its stores. Complaint for Permanent Injunction, Fed. Trade Comm’n v. Rite Aid Corp., Case No. 2:23-cv-5023 (E.D. Pa. Dec. 19, 2023).
Based on the FRT match alerts, Rite Aid employees increased surveillance of suspected individuals, banned people from entering or making purchases at Rite Aid stores, publicly accused people of past criminal activity, detained and searched individuals, and called the police on customers. Id. In numerous instances, the FRT’s match alerts were false positives, i.e., incidents where the FRT incorrectly matched a customer to a person in Rite Aid’s database. The FTC alleged that Black, Asian, Latino, and female customers were especially likely to be harmed by Rite Aid’s use of FRT matches.
Similar to the New Jersey and Detroit matters discussed above, the bulk of the issues the FTC lodged against Rite Aid stem from the technology’s use against people of color and the failure to maintain proper human oversight of FRT. The FTC cited human-related failures, such as Rite Aid’s failure to train and oversee employees using the FRT, as increasing the likelihood of harm to consumers. Further, the FTC criticized Rite Aid for not properly tracking false positives or recording the outcomes of FRT usage.
Future Considerations for the Use of Facial Recognition Technology
There is currently a patchwork of local and state laws concerning the use of FRT. Although there have been isolated enforcement actions by federal agencies concerning the use of FRT, no federal facial recognition law currently exists. Federal legislation establishing appropriate FRT guidelines should be structured with transparency in mind. Any such legislation must address both the appropriate use of FRT and the technology underpinning the facial recognition system itself.
The court in Arteaga recognized the broader public trust benefits of making FRT discoverable in a criminal matter. This transparency, covering both the individuals using an FRT system and the FRT system itself, may benefit not only an accused person but also the criminal justice system as a whole. Transparency and rigorous challenges encourage continuous testing of how an FRT system is used and of the data it relies upon. Part of ensuring transparency is for attorneys to challenge FRT.
As this technology continues to be rapidly adopted, attorneys must quickly come to understand how it works so that it can be tested in court. Defense counsel in Arteaga challenged not only the FRT’s results but also the underlying technology. Counsel should consider updating discovery demands to probe whether facial recognition technology, or other forms of artificial intelligence, was employed. Further, if discovery demands appear to have been only partially complied with, counsel should consider motion practice to compel production of information about the underlying FRT technology, how the FRT was implemented, which entities utilized it, and whether its use was appropriate.
As the NIST testing has illustrated, FRT still suffers from problems of bias. Making the procedures and data on which FRT relies available in discovery could expose its deficiencies and biases. Such challenges would encourage FRT providers to continuously fine-tune their models to increase accuracy across a wider range of human characteristics. Any system, including a facial recognition platform, is only as accurate and useful as the user operating it and the data it is provided.
To be sure, the algorithms used in FRT are becoming exponentially more sophisticated, and the future of these emerging artificial intelligence technologies remains unclear. What is clear is that AI technologies, such as FRT, are not infallible. To effectively represent clients in this new era, attorneys must continuously educate themselves about how forms of AI work and develop strategies to contest the use of AI when it is used improperly.
Conclusion
The integration of FRT into modern life has ushered in a new era of possibilities, but also profound challenges. The potential of FRT to enhance security and expedite investigations is undeniable. However, the blanket application of FRT without proper human oversight raises serious concerns regarding privacy, racial bias, and the potential for misuse. Striking a balance between these competing interests is an ongoing struggle, as evidenced by the varying legislative approaches and legal cases discussed above.
As AI and FRT continue to evolve, it is imperative that society grapple with these complex issues to ensure that the use of emerging technologies aligns with constitutional principles and safeguards individual rights. The path forward requires a nuanced approach that harnesses the benefits of FRT while mitigating its risks. As the past six decades have shown, FRT is only going to continue to evolve, and that evolution is accelerating with advancements in artificial intelligence. The law must evolve as well, to shape a future in which technology and civil liberties are balanced.