July 15, 2019 Feature

Arcing Toward Justice: Can Understanding Implicit Bias Help Change the Path of Criminal Justice?

The Honorable Bernice B. Donald and Sarah Redfield

Authors’ Note: This article is rich in citations, many of them referencing statistics or science journals. To keep it more readable, we have deviated from our usual citation approach and used a + symbol to indicate that further citation detail is available. Anyone wanting full citations or further background should reach out to Professor Redfield.

Writing last year in Scientific American, UNC psychologist and neuroscientist Keith Payne and his colleagues captured years of scientific and legal research and analysis this way: “Amidst a controversy, it’s important to remember that implicit bias is real—and it matters.” (Keith Payne, Laura Niemi & John M. Doris, How to Think About “Implicit Bias”, Sci. Am. (Mar. 27, 2018).) We agree.

As we explore these important and real issues in the context of criminal justice, we first acknowledge the work of all our own colleagues—particularly those who joined us in writing the ABA book on implicit bias, Enhancing Justice: Reducing Bias, and all the others in the Criminal Justice Section—for bringing and keeping this concept in the forefront of our efforts to achieve justice and equity. We set out here to share our knowledge and experience in understanding implicit bias, its manifestations, and its potential disrupters. In asking you to engage with us, we also acknowledge that while we ourselves know some things about these topics, we do not know everything, or even close. We invite our readers to enrich our shared knowledge base by sharing your own experience and expertise with us.

This is the first of two Criminal Justice articles on this topic. When we present and train about implicit bias, we focus on three areas: becoming aware, becoming bias literate, and strategies for disrupting implicit bias/responses. Here we write on the first two. Becoming aware covers definitions and basic science concepts, as well as manifestations of implicit bias. Becoming bias literate expands on the definitions and discusses two related concepts, group preferences and unintended micromessaging. A future article will discuss implicit bias training and research-based strategies for disrupting implicit bias/responses.

Become Aware

How We Ourselves Became Aware and Why It Has Value

From our personal and professional lives, we both know what explicit, direct bias is, conceptually and as experienced. These only-too-real stories are for another day. Here we note that in our personal and professional lives, we had become frustrated and discouraged by what we saw as a lack of meaningful change in the disproportionalities and inequities in American society. In its 1999 report on perceptions of justice, the ABA found that although most respondents thought the American justice system was the best in the world, just about half thought men and women were treated equally and “even fewer [39 percent] believe that among racial or ethnic groups or between wealthy and poor people the treatment is equal.” (Am. Bar Ass’n, Symposium II: Public Understanding and Perceptions of the American Justice System, 62 Alb. L. Rev. 1307, 1317 (1999).) More recent surveys from Pew Research show that similar perceptions of the courts, police, and other American institutions remain today. (See Pew Research Ctr., On Views of Race and Inequality, Blacks and Whites Are Worlds Apart, Pew Soc. Trends (June 27, 2016).+) Criminal law expert Dr. Phillip Atiba Goff has described this as a riddle: “How does one explain persistent inequality in the face of declining racial prejudice? . . . A related and equally provocative question, however, is this: Why have we not answered this question yet?” (See Phillip Atiba Goff, A Measure of Justice: What Policing Racial Bias Research Reveals, in Beyond Discrimination: Racial Inequality in a Post-Racist Era 157 (Fredrick C. Harris & Robert C. Lieberman eds., 2013).)

We know, as do our readers, that solving this riddle is critical. Finding a way for justice to be, and to be perceived to be, fair and equitable is basic to the survival of our system of justice. Certainly we, together with so many others in the profession, have been part of effort after effort to these ends: studies, reports, blue ribbon commissions, task forces, calls to action, signatories to pledges. Surely, we have intended change. Thinking of the riddle Professor Goff posed, we had to ask ourselves why all this work and good intention had produced relatively little change. We were looking for an answer that didn’t necessitate our seeing ourselves as motivated by racism, sexism, or some other *ism or phobia. We were looking for an answer where we could still see those who were making the key decisions (decisions that cumulated in such disturbing results) as acting in good faith. Innovative methods of research from the scientific community offered an explanation. (Mahzarin R. Banaji & Anthony G. Greenwald, The Implicit Revolution: Reconceiving the Relation Between Conscious and Unconscious, 72 Am. Psychol. 861, 869 (2017).+)

We were introduced to this new scientific perspective through early work of the Section of Litigation and the Criminal Justice Section. Here we found an answer—not a complete answer, but a partial, new answer—to the question of why progress on these issues was so limited. The innovative research that measured implicit response offered an answer that did not require considering ourselves or others as *ists of some kind. Instead, the science offered an answer that allowed us to see that our implicit responses might be contributing to inequities in ways we explicitly neither knew nor intended. Decision makers could well be acting in all conscious good faith, but they may also be responding implicitly, influenced by not-conscious bias that interferes with legitimately and honestly held unbiased views.

For us, this discovery—that our brains can legitimately hold two views simultaneously, one direct and explicit, the other indirect and implicit—was a game changer. We looked at our experiences and our colleagues differently. We saw things through a new research-based lens. We saw that if we could build awareness of this new perspective, if there were ways to disrupt implicit bias and bring our decision making more in line with our consciously articulated views on equality, change might ensue. MIT biologist Dr. Nancy Hopkins described it this way: “If you asked me to name the greatest discoveries of the past 50 years, alongside things like the internet and the Higgs particle, I would include the discovery of unconscious biases and the extent to which stereotypes about gender, race, sexual orientation, socioeconomic status, and age deprive people of equal opportunity in the workplace and equal justice in society.” (Nancy Hopkins, Amgen, Inc., Professor of Biology at Mass. Inst. Tech., Boston University Baccalaureate Speech: Invisible Barriers and Social Change, Marsh Chapel (May 18, 2014).) As we discuss under the science and manifestation headings below, we agree.

Defining Implicit Bias

In Thinking, Fast and Slow, Nobel Prize–winning economist Daniel Kahneman describes the way our brains function in two systems: System 1 is fast, automatic, and reflexive; System 2 is slow, deliberative, reflective, and logical. (Daniel Kahneman, Thinking, Fast and Slow (2011).) Other scientists have described us as cognitive misers, meaning we tend to default to the path that takes the least energy, System 1.+

If we understand bias as a tendency for or against something, the definitions of implicit and explicit bias track these two systems. Implicit biases, thinking fast, are learned associations absorbed from all around us, outside our direct awareness or control, that affect our understanding, actions, and decisions. Explicit biases, thinking slow, are our verbally endorsed evaluations—understandings, actions, or decisions deliberately generated and directly experienced as our own.+ Recognizing ourselves as cognitive misers suggests why our quick implicit biases may need to be disrupted with more conscious thought.

The Science

We began with Dr. Keith Payne’s summary: “implicit bias is real—and it matters.” (Payne, Niemi & Doris, supra.) Many other researchers, including social scientists, neuroscientists, and legal analysts, have concluded likewise. Howard Ross, a popular trainer in the field, puts it this way: “There have been more than 1,000 studies in the past ten years alone on the impact of unconscious bias, conducted in the best academic institutions by some of the smartest social scientists and neuroscientists in the world. This data unquestionably establishes how bias occurs and why.” (Howard Ross, Does Unconscious Bias Training Work? (2017).) These researchers have made clear that there is indeed now too much research to ignore, that discrimination is likely driven as much by implicit bias as by explicit prejudice, that implicit associations affect legal decision making, and that this is indeed part of the answer to the riddle of why change has been so slow. (See, e.g., B. Keith Payne & Heidi A. Vuletich, Policy Insights from Advances in Implicit Bias Research, 5 Behav. & Brain Sci. 49, 50 (2018).+) In this section, we consider the science that led to these conclusions, in particular the development of the Implicit Association Test and some of the underlying science building blocks.

It used to be that if researchers wanted to know if you were biased, they asked. While this yielded some information, it was likely to be shaded by responders’ not wanting to admit bias or perhaps not even recognizing their own biases.+ Starting in the mid-1990s, scientists, led by Drs. Tony Greenwald and Mahzarin Banaji, identified a new approach. They no longer asked about bias; they tested and they measured. The most popular of these measures is the Implicit Association Test (IAT). The IAT is typically taken online, where you are presented with stimuli (prompting words or pictures) and two response options (typically two keys, E & I). In the early iterations of the IAT, researchers were surprised to find a strong difference in response time when the same key was used for flower and pleasant compared with prompts where the same key was used for insect and pleasant. This difference in response time is an indirect measure of the closeness of the taker’s associations and has come to measure what we now describe as implicit bias. Many IATs and related tests of implicit response have become available. Readers can find and take the IAT at https://implicit.harvard.edu/ in subjects such as race, religion, gender, age, sexuality, disability, weapons—even food. Since its launch, Project Implicit has recorded millions of IAT results, concluding in the aggregate that implicit biases are pervasive and that a substantial majority of takers demonstrate automatic preferences—for example, for European American over African American, for pairing women with family rather than with career, for abled over disabled.+

We recognize that in some ways, the IAT and all this talk of implicit is the antithesis of what we all learn in law school about thinking like a lawyer. In this context, we have found it helpful to take a step back and try to relate some of the underlying theory to what we know about ourselves. We ask that you be a bit patient here. . . . Let’s start with this. We say car. At this point you probably have a mental picture of a four-wheeled motor vehicle. These are characteristics cars have in common, and they fit a schema (shortcut) in our brains. The idea is that one thing follows quickly along from another; one schema activates others, e.g., car, race car, speeding car, broken car, police car, toy car. These schemas help us quickly approach or avoid the car in question. We might also address our direct conscious thinking to a car in question: Why is that car parked by my house, why is that car weaving on the road, do I want to buy that car, and so on. These would be explicit questions, but here we are discussing our implicit responses. There is much research on how these implicit, indirect connections work. For example, in one experiment, participants primed with either reckless or exciting later perceived otherwise ambiguous behavior (e.g., skydiving) in a way consistent with how they had been primed. (E. Tory Higgins, William S. Rholes & Carl R. Jones, Category Accessibility and Impression Formation, 13 J. Experimental Soc. Psychol. 141, 141 (1977).) In another experiment, participants primed with hostile words perceived otherwise ambiguous behavior as hostile. (John A. Bargh & Paula Pietromonaco, Automatic Information Processing and Social Perception: The Influence of Trait Information Presented Outside of Conscious Awareness on Impression Formation, 43 J. Personality & Soc. Psychol. 437, 437 (1982).) The research also shows that the more ambiguous the information, the more likely we are to fall into implicit bias and stereotypes. Consider this last point again, please, in the context of the work we do every day, surrounded by ambiguity.

Our brains work from such schemas all the time. We have to. Scientists estimate that we receive some 11 million (yes, million) bits of information each second, but we consciously process only 40. (David DiSalvo, Your Brain Sees Even When You Don’t, Forbes, June 22, 2013.) Given this, we need these schemas or shortcuts to survive. We have schemas for words, sounds, images, and activities. One schema connects to others. They serve us when we respond quickly to car; they are at work when we tie our shoes or order in a restaurant. We also have schemas for people. We may have a schema for a particular group, and that schema will likely activate others. Just as flower might activate pleasant, Emily might activate woman, caring, family. As these examples suggest, schemas can call up our attitudes (likes and dislikes) and also our stereotypes. And one last point here: once primed, once our schemas and stereotypes are activated, like the anchoring bias discussed a bit later, they are hard to let go, even if contradictory information is offered, even if we are told to forget it. (Jonathan M. Golding & Jerry Hauselt, When Instructions to Forget Become Instructions to Remember, 20 Personality & Soc. Psychol. Bull. 178 (1994).)

These quick, indirect connections in the way our brains work explain the IAT. For example, in the age IAT, we are prompted by a picture of an old or young face and asked to pair it with an evaluative measure (good/bad) over several iterations. The test measures how strong our associations are by measuring the difference in speed with which we respond. When we are shown the prompt, our mental schema is activated; the faster we respond to a given pairing, the closer the association it reflects. Most of us (77 percent) evaluate young more positively. That is, if we don’t have time to think consciously (thinking slow, System 2) or to know something individual about this old-looking person, we default to our schemas and stereotypes.
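For readers who want to see the mechanics, here is a minimal sketch, in Python, of how an IAT-style score can be computed from response times. It is an illustration only, not Project Implicit’s actual scoring algorithm (which adds steps such as filtering extreme latencies and penalizing errors), and the latencies below are invented for the example.

from statistics import mean, stdev

# Invented response latencies in milliseconds (hypothetical data).
# "Compatible" block: e.g., young faces and good words share a key.
compatible = [612, 655, 589, 701, 640]
# "Incompatible" block: e.g., old faces and good words share a key.
incompatible = [803, 765, 842, 790, 818]

# A simplified D-style score: the difference in mean response time
# between the two blocks, scaled by the standard deviation of all trials.
pooled_sd = stdev(compatible + incompatible)
d_score = (mean(incompatible) - mean(compatible)) / pooled_sd

# A positive score means faster responses when young pairs with good,
# that is, a closer automatic association between young and good.
print(f"simplified D score: {d_score:.2f}")

The point of the computation is the one made above: the score captures nothing about what a taker says; it captures only how quickly the taker’s associations fire under each pairing.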

The race IAT works similarly and shows that 71 percent of takers prefer European Americans to African Americans. This preference is consistent with other implicit bias research documenting the relationship of race to criminal perception. For just one example, Stanford Professor Jennifer Eberhardt and her colleagues have observed that the “stereotype of Black Americans as violent and criminal has been documented by social psychologists for almost 60 years.” (Jennifer L. Eberhardt, Phillip Atiba Goff, Valerie Purdie & Paul G. Davies, Seeing Black: Race, Crime, and Visual Processing, 87 J. Personality & Soc. Psychol. 876 (2004).) Their more recent experiments (which are many, and which offer new and deeper understanding than we can touch on here) expand on this concerning stereotype and have shown the complexity of implicit associations:

Visual processing patterns may provide ample opportunities for perceivers to access race–crime associations, as well as to rehearse, strengthen, and supplement those associations. . . . Activation of the crime concept not only led police officers to attend to a Black face but also led them to misremember the Black face as more stereotypical (i.e., representative) of the Black racial category than it actually was. Thus, the association between blackness and criminality was not only triggered, it was magnified.

(Id. at 891.)

Another study, by law professor and psychologist Justin Levinson and his colleagues, measured memory of facts in a fictional dispute and found a similar influence of race. Participants read a story describing a fist fight between two men after an altercation in a bar. The character’s name (Tyronne, Kawika, or William) linked him to an African American, Hawaiian, or white identity, respectively. When asked to recall the facts, participants remembered them in a racially biased way. Those who read the Tyronne story accurately recalled 80.2 percent of the aggressive actions; Kawika, 71.8 percent; and William, 67.8 percent. The study also looked at false memories of mitigating facts about the fight. Here, readers were more likely to falsely remember that Kawika was provoked than either William or Tyronne. (Justin D. Levinson, Danielle M. Young & Laurie Rudman, Forgotten Racial Equality: Implicit Bias, Decisionmaking, and Misremembering, 57 Duke L.J. 345, 348–49 (2007).) In another experiment, psychologist Birt Duncan asked participants to interpret a video showing a fictional and ambiguous shove, “not clearly aggressive, but not clearly in jest.” Where the shover was black, the action was interpreted as more aggressive and violent; where white, as kidding around. (Birt L. Duncan, Differential Social Perception and Attribution of Intergroup Violence: Testing the Lower Limits of Stereotyping of Blacks, 34 J. Personality & Soc. Psychol. 590 (1976).)

Experiments like these and the IAT results show a preference for the dominant group. This preference turns out to hold, albeit to a lesser extent, among those in the nondominant group. For one example, on the race IAT, Professors Lai and Banaji explain that among African Americans taking the test, 40 percent show a pro-black implicit preference, 35 percent show pro-white, and 25 percent show no overall preference. If the black data were the mirror image of the white data, with most black takers aligning with their black ingroup, one would expect 73 percent to test as pro-black. The researchers explain the difference (for this and other groups as well) as arising from the mixed roles an individual brings to the test—group preferences for one’s own and cultural preferences reflecting lower standing in society. (Calvin K. Lai & Mahzarin R. Banaji, The Psychology of Implicit Intergroup Bias and the Prospect of Change 6 (2018).)

The significant points to take away from this science interlude are these: Our biases can be measured with instruments like the IAT (and we urge all of you to try a test). Our biases, as revealed by such measures, are implicit. Our biases may well not correspond with our explicitly, consciously, and honestly held beliefs. We share one of our own IAT results as a case in point: When Sarah takes the Women/Career IAT, she fails to show a preference associating women with careers, notwithstanding that she has worked her entire life and has unbounded respect for her professional women colleagues. Her biases are part of being human and do not mean she is a sexist. But does implicit bias influence her? Yes. Can we interrupt that influence? Yes, if we are motivated to do so.

Evidence of the Manifestation of Implicit Bias

We hope the examples in the science section have enticed you to begin to think a bit differently about the way we think—not always like a lawyer?!

In training, we often offer the participants a look at an illusion or two. Pictured here is an illusion called The Coffer, which we share courtesy of Stanford Professor Anthony Norcia. Most of us don’t see any circles at first, but we can see they are there when our attention is focused on them. We suggest that illusions like this are a way of thinking about implicit and explicit.+ The circles are there, but they are less accessible than what almost all of us first see as a door. The circles become obvious when our conscious brain starts to deliberately pay attention.

This kind of illusion is fun and shows us how the science plays out, but the real significance of implicit bias is in the real world. A key piece of bias literacy is becoming aware of the manifestations of implicit bias in the world around us. Here we illustrate in two ways: through life experience (often what we already knew) and through manifestations in intransigent, otherwise hard-to-explain disproportionality data.

Manifestation, Life Experience, What We Already Knew: We Can Be Responding Implicitly While Holding a Different Explicit Position

Lawyers have long known about implicit bias, without the modern-day label. Our colleague Judge Mark Bennett brought this 1928 case to our attention. Earl Hayward, a young African American man charged with the rape of an older white woman, was represented by Lena Olive Smith, a civil rights lawyer and the first African American woman in the Minnesota bar. After Mr. Hayward’s conviction, Smith moved for a new trial on several grounds, including prosecutorial misconduct based on the prosecutor’s appeal to the racial prejudice of the jury. Smith pointed to the prosecutor’s having asked the all-male white jury, “Are you gentlemen going to turn this Negro loose to attack our women?” (Defendant’s Motion for New Trial, State v. Hayward, No. 26241 (4th D. Ct. Minn. June 18, 1928).) Smith wrote (in the language of her times):

The court fully realizes I am sure, that the very fact that the defendant was a colored boy and the prosecutrix a white woman, and the entire panel composed of white men—there was a delicate situation to begin with, and counsel for the State took advantage of this delicate situation.

She went on to observe that

perhaps [the jurors] were, with a few exceptions, conscientious in their expressions [of no race prejudice]; yet it is common knowledge a feeling can be so dormant and subjected to one’s sub-consciousness, that one is wholly ignorant of its existence. But if the proper stimulus is applied, it comes to the front, and more often than not one is deceived in believing that it is justice speaking to him; when in fact it is prejudice, blinding him to all justice and fairness.

Ms. Smith won a new trial for Mr. Hayward, who was eventually freed, though not on these grounds. (Ann Juergens, Lena Olive Smith: A Minnesota Civil Rights Pioneer, 28 Wm. Mitchell L. Rev. 397 (2001).) Looking back, we might well rephrase Smith’s argument today: The jurors may well have conscientiously held to their express avowal of being able to decide the case without racial prejudice, but when race was primed, that prejudice came into the decision implicitly.

Manifestation, Intransigent, Otherwise Hard to Explain Disproportionality

The research and data about criminal justice disproportionalities and inequities are stunning in their breadth and depth. (Sarah E. Redfield & Jason Nance, Reversing the School-to-Prison Pipeline Preliminary Report, 47 U. Memphis L. Rev. 1 (2016).+) We have a prison system where prisoners are disproportionately people of color as well as persons with disabilities, and where guards are disproportionately white. (See, e.g., Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010); Ashley Nellis, The Color of Justice: Racial and Ethnic Disparity in State Prisons 10–11 (2016).+)

But it starts far earlier than this, as early as preschool discipline. From preschool, disproportionate treatment continues along the so-called school-to-prison pipeline: suspension and exclusion, referral to law enforcement, and disparities in juvenile justice (including differences in diversion, retention, placement in locked facilities, terms of probation, and trial as adults). (U.S. Dep’t of Educ. Office for Civil Rights, Civil Rights Data Collection [CRDC], Data Snapshot: School Discipline 6, 7 (2014).+) Disparities are then evident in presentence reports, sentencing (including the recognition of remorse and the death penalty), prison discipline (including appeals/complaints), and parole. This list is illustrative, not complete.+ It is daunting. In addition to the points where implicit bias is documented in the statistical data, differences also are documented among the participating decision makers, from teachers to police officers, prosecutors, defenders, witnesses (including memory), judges, juries, and probation officers. (See, e.g., Robert J. Smith & Justin D. Levinson, The Impact of Implicit Racial Bias on the Exercise of Prosecutorial Discretion, 35 Seattle U. L. Rev. 795 (2012).+)

With so many areas where disproportionality is large and long-standing, it is a bit difficult to choose examples. For our purposes here, we have chosen to expand on the issue of race, particularly black boys and men. It’s an example that offers clear statistical data, and one where a look behind the data shows implicit and discretionary behaviors that can be identified as leading to these disparate results. We start in preschool, where we see the roots of later concerns in education and juvenile justice. In 2014, the Washington Post ran a story by Tunette Powell with the headline “My Son Has Been Suspended Five Times. He’s 3.” We mention the Powells here to personalize the numbers; the Powell children were hardly unusual.

Nationally, we know from the Civil Rights Data Collection that black children are suspended far beyond their proportion in the population: they are 18 percent of the preschool population but account for 42 percent of single suspensions and 48 percent of multiple suspensions. The data are similar for other grades and are repeated in the District of Columbia and in states across the nation. At its simplest, it would seem clear that being out of school is not a good choice for educating our children. More specifically, the data are clear that once suspended, a child is more likely to be increasingly absent and more likely to be suspended again.+ And the more often a child is suspended, the more likely that child is to become involved in the juvenile justice system. (Tony Fabelo et al., Breaking Schools’ Rules: A Statewide Study of How School Discipline Relates to Students’ Success and Juvenile Justice Involvement 37 (2011).) Before leaving this statistical profile, we point out that many of the decisions that make this a reality and populate the school-to-prison pipeline are discretionary. Just as research shows us that we are more likely to fall back on our stereotypes when presented with ambiguity, it shows us that we are more likely to do the same thing when exercising our discretion. Consider this last point again, please, in the context of the work we do every day, making decisions, exercising our discretion.

In some ways, this is old news, disturbing as it still is. But we chose this example to add a second layer of research offered by the work of the Yale Child Study Center. In the Yale study, which involved teachers and classroom aides (94 percent female and 66 percent white), researchers asked participants to watch videos of four preschoolers: a black boy, a black girl, a white boy, and a white girl. Participants were told that the experimenters were “interested in learning about how teachers detect challenging behavior in the classroom. Sometimes this involves seeing behavior before it becomes problematic.” The participants were instructed to watch the video of the children for problem behaviors and were told that the behaviors may or may not be present. As participants watched for problems, the researchers tracked their eye movements. Participants watched the black boy 42 percent of the time (followed by the white boy, 34 percent; the white girl, 13 percent; and the black girl, 10 percent). There were in fact no problem behaviors. (See Walter S. Gilliam et al., Do Early Educators’ Implicit Biases Regarding Sex and Race Relate to Behavior Expectations and Recommendations of Preschool Expulsions and Suspensions?, Yale Research Study Brief (2016).) We know that if you were to watch us that closely, well, you might find some problem behaviors that, of course, wouldn’t really be there. At the very least, you certainly wouldn’t notice any that might be there in the others you weren’t watching.

The Yale results connect to other research showing the power and implications of race primes at early ages. Professor Andrew Todd and his colleagues in the Department of Psychological and Brain Sciences at the University of Iowa studied responses to primes showing the faces of children as young as five. In the Todd research, participants were primed with a child’s face and then asked to categorize a second image; their categorizations differed by the race of the face: black-child primes led participants to misidentify toys as guns more often than white-child primes did, while with white-child primes, guns were identified as toys more often. (Andrew R. Todd, Kelsey C. Thiem & Rebecca Neel, Does Seeing Faces of Young Black Boys Facilitate the Identification of Threatening Stimuli?, 27 Psychol. Sci. 384 (2016).) Other researchers also have documented the relationship of race to school discipline of young children. For example, Jason A. Okonofua and Jennifer L. Eberhardt argue that “race not only can influence how perceivers interpret a specific behavior, but also can enhance perceivers’ detection of behavioral patterns across time.” (Jason A. Okonofua & Jennifer L. Eberhardt, Two Strikes: Race and the Disciplining of Young Students, 26 Psychol. Sci. 617 (2015).+)

We think the research on very young children highlights how embedded implicit response is in our system of education and justice. We think it translates easily to other aspects of that system. We know that teachers are not explicitly biased against brown and black children to the extent that would explain these numbers. We also can see here how quickly impressions form and how hard they are to break, suggesting strongly the need for increasing our bias literacy and then learning techniques for disruption.

Becoming Bias Literate

Other Biases

The definitions of implicit and explicit are a foundation. We have found that knowing the names for other biases can be helpful as we consider the relevance of this work to our everyday lives. Here we offer just a short primer on three that are likely to be particularly useful for the legal community. There are many more, and they often occur together.

Confirmation bias is our tendency to see what we expect to see. What we expect is more accessible than what we don’t expect. With this bias we seek, value, and remember information that supports our beliefs while ignoring or devaluing other information. (Andrew M. Colman, Confirmation Bias, A Dictionary of Psychology (4th ed. 2015).) This bias helps maintain our stereotypes.

In what has become a classic experiment, our colleague Dr. Arin Reeves showed the confirmation bias that may be present at law firms. We are pretty clear that the partners participating in this study, if asked, would have told us honestly that they were not biased. Reeves asked 53 law firm partners to review a legal memo, ostensibly to help study the writing abilities of young associates. There were two memos: one from Thomas Meyer, identified as a third-year associate, an NYU graduate, and Caucasian, and the second from Thomas Meyer, identified as a third-year associate, an NYU graduate, and African American. Aside from these identifiers, the memos were identical. But the reviews were not. Overall scores were 4.1 out of 5 for the Caucasian Tom Meyer’s memo and 3.2 out of 5 for the African American Tom Meyer’s (identical) memo. Far more spelling and technical writing errors were identified in the memo labeled African American. And the comments were equally telling: “good analytic skills” and “generally good writer with potential,” compared to “needs lots of work,” “average at best,” and “can’t believe went to NYU.” Although Dr. Reeves was studying implicit bias, we believe that if asked, these partners would have told us that they held no bias preferring Caucasian associates over African American associates. Still, implicitly, their responses showed otherwise. (Arin N. Reeves, Written in Black & White: Exploring Confirmation Bias in Racialized Perceptions of Writing Skills 2–3 (2014).)

Reactive devaluation bias is a tendency to discount the value of information depending on who is offering it. (Lee Ross & Constance Stillinger, Barriers to Conflict Resolution, 7 Negotiation J. 389 (1991).) It’s easy to see how this might play out in many legal settings. In a negotiation, for example, we are perhaps too quick to dismiss an offer from opposing counsel, even if it is a reasonable offer, because it comes from the other side. It’s easy to see how this might play out in reviewing evidence as well.

Anchoring bias is a tendency to rely too heavily on an initial value, the anchor. What should a sentence be for someone? If the prosecutor offers 10 years and the defense counters with 1, the sentence will likely differ from a situation where the prosecutor began at 30 years. Our colleagues Judge Andrew Wistrich and Professors Chris Guthrie and Jeff Rachlinski tested this in an experiment with groups of judges to see how they would be influenced by knowledge of amounts raised in settlement talks. Their fact pattern involved a high school teacher who had lost his right arm when hit by a truck. The control group of judges learned nothing about dollar values in the settlement information; a high-anchor group was told the settlement figure mentioned was $10 million; and a low-anchor group was told that figure was $175,000. Even though the settlement information would be inadmissible once the case went to trial, both low and high anchors “had a significant impact on the pain and suffering damages” the judges said they would award. For example, judges who had been in the low-anchor condition awarded an average of $612,000, while the matched control group awarded an average of $1,396,000. (Andrew Wistrich, Chris Guthrie & Jeffrey Rachlinski, Can Judges Ignore Inadmissible Information? The Difficulty of Deliberately Disregarding, 153 U. Pa. L. Rev. 1251, 1288–91, quote at 1289 (2005).)

Like primes, once an anchor is activated, it’s hard to let go. Once we have the information, it’s hard to forget even if we know we should.

Other Aspects of Bias Literacy

Defining implicit bias and recognizing its manifestations are part of bias literacy. Two other concepts are part of this literacy: groups and micromessaging.

Ingroups and Outgroups

Implicit bias relates directly to groups: ingroups and outgroups. We are all part of our cultural groups, influenced by cultural values. (Criminal Justice Sec., Am. Bar Ass’n, Building Community Trust: Improving Cross-Cultural Communication in the Criminal Justice System, Model Curriculum and Instruction Manual (2010).) Generally speaking, we prefer our own. This is true from a very early age. Many of the characteristics we considered earlier that trigger implicit bias align with group loyalty. We may form a group based on very little, but once part of a group, we are all in; we show our social bias.+ We are prone to individualize members of our ingroup, to associate with them, to do good for them, and to attribute things positively to them. (Neha Mahajan & Karen Wynn, Origins of “Us” versus “Them”: Prelinguistic Infants Prefer Similar Others, 124 Cognition 227 (2012).+) Conversely, we see (stereotype) members of the outgroup as more homogeneous, we do less good for them, we evaluate them more negatively, and we tend to misattribute their behavior. The latter is what psychologist Dr. Tom Pettigrew has called “the ultimate attribution error.” (Thomas Pettigrew, The Ultimate Attribution Error, 5 Personality & Soc. Psychol. Bull. 461, 464–67 (1979).+) If Jamie (from our ingroup) succeeds, we say she is smart, talented; if Jesse (from the outgroup) succeeds, well, he was an exception and just lucky.

Dr. Greenwald, mentioned above as one of the foundational researchers of the IAT, writing with his colleague Dr. Pettigrew, has said, “Our strong conclusion is that, in present-day America, discrimination results more from helping ingroup members than from harming outgroup members.” (Anthony G. Greenwald & Thomas F. Pettigrew, With Malice Toward None and Charity for Some: Ingroup Favoritism Enables Discrimination, 69 Am. Psychol. 669, 680 (2014).) This is easy to see. I may not be discriminating against someone when I choose to hire someone from my alma mater—I’m helping her—but it’s also the case that I am not hiring or helping the others from elsewhere.

To put this in a bit of a criminal justice context, we consider the series the New York Times published in 2016 about New York prisons. One part reported parole statistics and procedures. (Michael Schwirtz, Michael Winerip & Robert Gebeloff, For Blacks Facing Parole in New York State, Signs of a Broken System, N.Y. Times, Dec. 4, 2016.) At the time, most of the commissioners were white and from upstate New York. As described, the parole process typically involved very little, if any, interaction between the decision makers and the prisoners. But in one case the Times reported, Commissioner W. William Smith, white and from upstate New York, engaged in a conversation with inmate Matthew Conley, a 27-year-old white man from Mr. Smith’s home area. They shared knowledge of the town and golf course; they even found they’d both worked at Tickner’s Kayak. Mr. Conley’s parole was granted. We would guess that Commissioner Smith did not set out to give white guys a better chance for parole, but implicitly this shared group identity likely contributed to the result. What’s significant here is to recall that for every ingroup, there is an outgroup; for every mainstream, a margin.

Another significant fact in implicit bias and group dynamics is the power of labels. For children in school, think about what is triggered by the prime and schemas for troublemaker or from a good family. Think about what is triggered by the label thug or by an appearance in an orange jumpsuit, compared to star student athlete. For that matter, think about Emily and Greg compared to Lakisha and Jamal. We know from the economists’ research that Greg will get roughly 50 percent more interview callbacks than Jamal (a gap the researchers found equivalent to an additional eight years of experience), likely because reviewers implicitly respond to the black-sounding name (and the gender data are similar). (Marianne Bertrand & Sendhil Mullainathan, Are Emily and Greg More Employable Than Lakisha and Jamal?, 94 Am. Econ. Rev. 991, 991 (2004).)

Micromessaging

Together with implicit bias and group dynamics, becoming bias literate includes understanding a third piece of implicit behavior: micromessaging. As the word suggests, these are small messages. We discuss them here because they are most often not consciously sent, not necessarily intended by the sender. They may well start from the same prompts that trigger other implicit biases. Common examples are addressing the men as Mr. Smith but the women by their first names, calling someone by the wrong name, or not bothering to ask or learn the names of some participants while asking others.+ These are not the types of remarks that are likely to rise to the level of a lawsuit, but they are present, and they are sending messages. More importantly, they are cumulative.+

Strategies for Disrupting Implicit Bias/Response

Strategies for disrupting implicit responses are the subject of the next piece in this series. But before leaving this one, we want to reset the context. In their foundational work on implicit response, Drs. Banaji and Greenwald observed, “The identifying feature of implicit cognition is that past experience influences judgment in a fashion not introspectively known by the actor.” (Anthony G. Greenwald & Mahzarin R. Banaji, Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes, 102 Psychol. Rev. 4, 4 (1995).) Since they wrote this in 1995, their research and that of their colleagues has exploded. Much has been learned. More recently, Dr. Banaji, writing with Dr. Calvin Lai, observed:

The human mind is limited in its ability to grant equally the virtues of freedom, opportunity, and fairness. This limitation must be recognized in order to enter into any discussion of creating a better society. Evidence of implicit bias has raised the bar on the challenges faced by modern democracies consisting of a plurality of social groups with differing histories, power, and potential futures. Understanding that discrimination is possible without an intention to harm is difficult to grasp and even harder to solve given the presence of legal systems founded on the idea of intent as pivotal in determining justice.

However, recent discoveries on the possibility of addressing the pernicious consequences of implicit bias show that what may seem to be inevitable effects of implicit bias need not be so. The research we have reviewed shows individual minds to be sensitive to change given the right inputs. We hope that this approach to securing positive social change can aid in the project of safeguarding diverse societies.

(Calvin K. Lai & Mahzarin R. Banaji, The Psychology of Implicit Intergroup Bias and the Prospect of Change 20 (2018).) We hope so as well.

Implicit Bias

Although much research on implicit bias has focused on Black/White, implicit bias is relevant to all groups. We ask that you read with that relevance in mind: implicit bias touches any group that may find itself stigmatized in a particular situation, including groups defined by socioeconomics, religion, LGBTQ status, gender, disability, age, or other social characteristics. Please also keep in mind the intensified aspects and burdens of intersectionality, where social groups exist in combination. The scope of this article limits separate discussion of some of these points, but for more specifics on any point or group, be in touch with the authors.


Bernice B. Donald is a judge on the US Court of Appeals for the Sixth Circuit and a former chair of the Criminal Justice Section.

Sarah Redfield is professor emerita at the University of New Hampshire and co-chair of the CJS Implicit Bias Initiative with Judge Donald.