February 20, 2024 Feature

How Cognitive Bias Can Impair Forensic Facial Identification

Devon E. LaBat and Jeff Kukucka

Have you ever been told that you resemble a relative who actually looks quite different from you? You’re not alone. In a 2002 study, participants were asked to judge the similarity of numerous pairs of faces, and they perceived the faces as more similar if told that the two people were genetically related—even if they weren’t. In fact, participants’ beliefs about whether the two people were related had a stronger effect on their perceptions than whether the people were actually related.

Psychologists have long understood that our beliefs, desires, and expectations influence how we see the world around us, such that people with different mindsets often interpret the very same information in markedly different ways. Imagine, for example, watching a sporting event with a friend and disagreeing over a referee’s decision, even though you both saw the same play. Or remember when society lost its collective mind over whether “the dress” was black and blue or white and gold? These disagreements reflect a phenomenon known as cognitive bias.

In a nutshell, cognitive bias is our brain’s way of simplifying our environment by using our personal beliefs and experiences to select, organize, and interpret new information more efficiently. Cognitive bias is often beneficial because it subconsciously distorts our perception in ways that encourage adaptive behavior. In one study, for example, people who were wearing a heavy backpack estimated the same hill to be steeper than other people who weren’t wearing a backpack, which is our brain’s subtle way of discouraging us from climbing that hill.

However, because cognitive bias produces differences in perception, it can also impede efforts to determine the truth—such as in forensic science analyses. That is to say, if cognitive bias leads two forensic examiners to form different opinions of the same evidence, then at least one of them must be wrong, which can trigger a costly miscarriage of justice if left unchecked.

Importantly, cognitive bias has a stronger effect on judgments that are challenging rather than clear-cut. In forensic facial comparisons, an examiner compares an image of an unknown person (often the perpetrator of a crime) against images of one or more known individuals and decides which, if any, of those faces matches the unknown person. This is a difficult task; even professionals who have specialized training and ample experience in face matching tend to exhibit high error rates (7–31%) on tasks designed to mimic real-world facial comparisons.

Context Distorts Forensic Examiners’ Judgments

Cognitive bias can further impede facial identification accuracy, especially when examiners receive information that is irrelevant to the facial comparison. It is well known that the same image can be interpreted differently depending on the context in which it appears; for example, when researchers showed people an ambiguous drawing that could be perceived as either a squirrel or a swan, they tended to see it as a squirrel when it appeared on a tree branch but as a swan when it appeared in a pond. As for “the dress,” one study found that people who self-identified as “night owls” more often saw it as blue and black, whereas “morning people” tended to see it as white and gold.

Similarly, a 2009 study showed how extraneous case information can bias judgments of facial similarity in forensic settings. Participants compared a computer-generated facial composite of a perpetrator against a lineup of four suspects, and they were told that one of the suspects (chosen at random) had previously been identified by two eyewitnesses. When asked to rate the similarity of these faces, participants rated whichever suspect was ostensibly identified by the eyewitnesses as looking most similar to the composite, and, accordingly, they believed most strongly in that suspect’s guilt.

These findings are not a fluke; there is now a wealth of research across many disciplines showing that knowledge of extraneous information can distort forensic examiners’ judgments. As a result, the National Commission on Forensic Science has urged practitioners to “rely solely on task-relevant information when performing forensic analyses,” but, unfortunately, it appears that many forensic examiners still receive potentially biasing information—such as information about the suspect’s criminal history—as a matter of standard procedure.

Relatedly, many forensic labs require examiners’ opinions to be verified by a colleague to ensure that they agree. In theory, verification is a useful quality control measure—but, in practice, the verifying examiner is often aware of the original examiner’s opinion, which predisposes them to agree with it. For example, one firearms laboratory found that verifiers were much less likely to disagree with the original examiner’s opinion if they were aware (12.5%) rather than unaware (42.3%) of it. Thus, the standard method of verification is treacherous in that it provides only the illusion of independent corroboration.

Sequence of Images Influences Forensic Examiners

Even absent extraneous information, the process by which examiners compare unknown and known faces may prompt bias. In psychology, an order effect occurs when the same information presented in a different order elicits a different judgment. In one classic study, participants evaluated a person who was described as “intelligent, impulsive, critical, stubborn, and envious” more favorably than other participants who read the same traits in reverse order. They were also more likely to interpret “critical” as a positive (i.e., inquisitive) rather than negative (i.e., disparaging) trait.

The sequencing of information can likewise affect the rigor of forensic analyses. In a 2011 study, fingerprint experts were asked to count the number of minutiae (i.e., distinctive features) in several latent prints, each of which was shown either alone or with a potential match. The results showed that experts actually counted more minutiae in the same latent prints when they were shown alone. According to the researchers, the presence of a potential match led experts to selectively look for similarities between the two prints and thus overlook minutiae that they would have otherwise noticed.

Informed by these findings, psychologists now urge forensic examiners to analyze the unknown sample on its own before making a side-by-side comparison against a known sample—and, more broadly, to make (and document) thoughtful advance decisions about what information they will review and in what order. In forensic facial comparison, where focusing on diagnostic facial features (similar to fingerprint minutiae) has been shown to enhance identification accuracy, examiners should therefore analyze the unknown image for such features before comparing it against any known images.

The Cognitive Bias Blind Spot for Forensic Scientists

Can we prevent bias? And, if so, how? One influential theory explains that in order to overcome bias, we must be (a) aware of the bias, (b) motivated to correct it, (c) aware of its direction and magnitude, and (d) capable of correcting it. As for awareness, a global survey found that 71% of forensic examiners recognized cognitive bias as a problem for the forensic sciences as a whole, but only 52% saw it as a problem for their own discipline, and only 26% believed that it could affect their own judgments. This pattern reflects a phenomenon called the bias blind spot, wherein people recognize others’ biases but not their own, and thus see no need to change their behavior.

In that same survey, 71% of forensic examiners believed that bias can simply be willed away and 32% felt that experience protects against bias—but research suggests otherwise. With respect to the former, examiners must understand that cognitive bias is a natural byproduct of how the brain works; it is not a sign of incompetence, carelessness, or dishonesty. However, one new study suggests that examiners may be able to self-regulate bias to some degree by explicitly considering alternative hypotheses. In that study, doctors who were given two possible explanations for the same injury (e.g., assault with a weapon versus dog bite) were less certain in their interpretations and less influenced by extraneous contextual information than doctors who considered only one possibility.

With respect to experience, some have argued that experts are actually more susceptible to bias insofar as they rely more on “cognitive shortcuts.” These shortcuts develop over time and are generally helpful because they allow experts to process information more efficiently, but they can also limit experts’ cognitive flexibility and cause them to miss crucial information. For example, chess experts are much better than novices at remembering realistic arrangements of pieces on a chessboard, but they perform no better—or even worse—than novices if the pieces are arranged at random.

Inherent Bias in Forensic Science Technology

Forensic science’s growing reliance on technology is surely a positive trend as it encourages standardization and objectivity. But does it eliminate bias? Take fingerprint identification, for example. Many fingerprint examiners can now enter an unknown latent print into a digital database (e.g., AFIS) and quickly receive a rank-ordered list of potential matches, which is extremely useful. However, the final decision is still made by a human examiner whose brain is imperfect and susceptible to bias.

These technologies may even create new biases. In a 2012 study, researchers entered latent prints into AFIS to generate rank-ordered lists of potential matches. Then, they asked fingerprint experts to decide which (if any) of those prints matched the latent print—but without the experts’ knowledge, they had randomized the rank order of those prints. This revealed a bias; experts spent more time analyzing whichever print happened to appear at the top of the list and were more likely to select that print as a match to the latent print, regardless of whether it actually matched.

Forensic facial examiners also use database searches to identify potential matches to an unknown face, but the final judgment is made by a human who may likewise favor faces that the software ranks more highly. Moreover, the limitations of the algorithms that underlie these searches are well-known, especially their relative inaccuracy for women and people of color, which may affect who appears as a potential match or is included in the database to begin with. Finally, these databases may also disclose biasing information (e.g., criminal history) that taints the examiner’s judgment.

Lessons for Forensic Facial Examiners from Eyewitness Identification

There are several parallels between forensic facial identification and eyewitness identification that suggest avenues for reform. For instance, eyewitness identification procedures customarily embed the suspect’s photo in an array of known-innocent “filler” photos because showing only the suspect’s photo is known to increase the risk of misidentification. Similarly, if a suspect has been identified for comparison to an unknown face, the suspect’s face could be embedded among several similar but known-innocent faces generated by a database search. This procedure—called an “evidence lineup”—also creates the opportunity for the examiner to commit a known error (i.e., a misidentification of a known-innocent face) and thus provides a means of proficiency testing as well.

In cases where no suspect has been identified, forensic facial examiners should be very cautious about executing unconstrained database searches. In such situations, every person who the algorithm identifies as a potential match to the unknown face effectively becomes a suspect. This practice in essence produces an “all-suspect lineup,” which is known to dramatically increase mistaken eyewitness identifications and is strongly discouraged by researchers in that area.

Preventing Cognitive Bias in Forensic Decisions with Procedural Intervention

To sum up, cognitive bias is an inherent feature of human cognition that can impede the search for truth by leading people to different interpretations of the same information. Because it cannot be eliminated by willpower alone, preventing cognitive bias in forensic decisions requires procedural intervention. Research suggests that examiners, including those who search large digital databases for potential matches to unknown samples (e.g., forensic facial examiners), should:

  • Have a case manager or supervisor identify and redact any extraneous and potentially biasing information (e.g., criminal history) from the case materials prior to analysis.
  • Analyze and document the distinctive features of the unknown sample (e.g., face) on its own before making any side-by-side comparison(s) to known samples.
  • If no suspect has been identified, have a colleague randomize the order of database search results prior to analysis and retain the full list of search results for purposes of discovery.
  • If a suspect has been identified, embed their sample in a lineup of similar but known-innocent filler samples from the database.
  • Document which materials they reviewed and in what order, when and how (if at all) their opinion changed over time, and any measures they took to protect against bias.
  • Have their opinion independently corroborated by a qualified colleague who examines the same case materials but is kept unaware of their opinion (i.e., blind verification).

Implementing such protections against cognitive bias in the forensic sciences will reap extensive benefits. For defendants, it will reduce the risk of wrongful conviction—as well as the concomitant risk to public safety when the actual perpetrators are left free to reoffend. For practitioners, it will increase judicial trust in their decisions by inspiring confidence that those decisions stem from their unique skill and not from outside influences. And for fact-finders who tend to trust those decisions indiscriminately, these simple and research-based measures will elevate the “science” in forensic science.


Devon E. LaBat

Florida International University

Devon E. LaBat is a doctoral candidate in legal psychology at Florida International University in Miami, Florida.

Jeff Kukucka

Towson University

Jeff Kukucka, PhD, is an associate professor of psychology at Towson University in Towson, Maryland, and co-vice chair of the OSAC Human Factors Task Group.