August 01, 2018 Feature

Pretrial Risk Assessments: A Practical Guide for Judges

By David G. Robinson, Hannah Jane Sassaman, and Megan Stevenson

From Alaska to Utah, Missouri to New Hampshire, jurisdictions are introducing risk assessment algorithms into pretrial decision making. As stakeholders try to balance public concern about safety with an effort to reduce or end the use of cash bail—widely understood to disadvantage poor people who can’t pay to secure their liberty pretrial—jurisdictions have turned to algorithmic risk assessment to help determine who can “safely” be released.

While there are robust debates about whether jurisdictions should use such tools at all—and, if so, what procedures should be implemented to ensure that they are being used in ways that demonstrably reduce the inequities of the systems they replace—this article will not address such issues.1 Instead, given that pretrial risk assessments are already widely used, we offer some thoughts on how to use them carefully. Our goal is to provide useful, practical information to the many working judges who are tasked with using risk assessment algorithms.

We are a community organizer working on mass incarceration, an economist and law professor, and an analyst and scholar who counsels jurisdictions on bail policy. Each of us has deeply studied algorithmic risk assessment, and our views have converged on the advice we offer here. We hope to help you understand the opportunities and limits of these tools as you work to deliver justice and nurture the public’s trust.

With Any Tool, Begin by Asking, “Risk of What?”

Interpreting a risk assessment instrument requires careful attention to detail. Each risk assessment tool estimates the likelihood of a precisely defined event (or a precisely defined range of events) happening within a specific time. It sometimes takes a little research to understand exactly what these parameters are—which is to say, what the predictions of a risk assessment tool actually mean.

Some tools make a combined prediction of any type of pretrial failure: rearrest for a new offense, failure to appear in court, or violation of pretrial conditions. Others are more specific: They predict only failure to appear or rearrest for a new alleged offense. Some may predict the risk of rearrest for a “violent” offense—subject to a specific definition of violence. These outcomes are all very different in nature and greatly change the interpretation of the “risk.” In short, risks don’t just have magnitudes—they also have flavors.

In many instances, a tool’s judgment that a particular defendant is “high risk” does not mean that the person is a flight risk or a serious danger to the community. The majority of rearrests are for misdemeanor offenses, and those who fail to appear are usually easily located; most didn’t abscond from the jurisdiction. Furthermore, many low-level misdemeanor arrests are the product of discretionary law-enforcement decisions. A person’s likelihood of future arrest is a product of not only the person’s actual behavior but also a variety of circumstantial factors, including the level of law-enforcement presence in a given location and police attitudes toward that person as compared with others in the same community.

Be sure to understand what your risk assessment tool is predicting because different types of risk merit different types of response.

“High Risk” May Be Safer Than You Think

Most of today’s pretrial risk assessment instruments do not directly show their probability estimates to judges. Instead, the raw numbers are translated into labels. For example, an accused person might be labeled as “low,” “moderate,” “moderate-high,” or “high” risk, or be assigned to one of the groups along a six-point scale. The tool often is integrated with a “decision-making framework” that proposes a course of action depending on the label a person is given, such as suggesting detention for those deemed high risk.

What statistical probability does the “high risk” label correspond to in your jurisdiction? It may be lower than you think. In a recent study, researchers found that people grossly overestimated the recidivism rate for defendants who were rated “moderate-high” or “high” risk.2 In fact, the true recidivism rate for those in the moderate-high risk category was less than half of what the study respondents thought that it was.

For both the COMPAS and the Arnold Foundation’s PSA (two common risk assessment tools), those with the highest-risk label have only about an 8 percent chance of being arrested for a new violent crime within roughly six months.3 Across a variety of risk assessment tools, the statistical likelihood of being arrested pretrial or within six months for any new offense (a category that can include traffic offenses or failures to appear) ranges from 10 to 42 percent.4 In other words, the majority—or even the large majority—of those with the high-risk label will not be arrested for new offenses while on pretrial release, let alone any serious ones.

People have varying opinions about what level of statistical risk merits detention, monitoring, or other restrictions on liberty. These are moral choices, and they are sometimes incorporated into risk assessment instruments without careful attention. The appropriate response to someone with a given risk level depends, of course, on what type of risk is being measured. A 20 percent chance of any type of violation is different from a 20 percent chance of being arrested for a new serious violent offense, which is different from a 20 percent chance of failing to appear in court. If your jurisdiction uses a tool that conflates multiple types of risk, then the resulting scores are difficult to interpret and therefore more difficult to use.

Statistical risk also depends on the time horizon of predictions. Imagine two defendants in neighboring jurisdictions who face the same charge and who are equally likely to appear at any future court date. If the two jurisdictions are alike except that the latter jurisdiction has a backlogged docket—so that cases take longer to dispose of and involve more court appearances over a longer period—then the latter defendant may be “higher risk” because he or she is more likely to miss at least one appearance.
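The arithmetic behind this point can be made concrete. The sketch below is purely illustrative and is not drawn from any actual risk assessment tool: it assumes a hypothetical defendant with a 95 percent chance of appearing at any single court date, and treats appearances as independent. The only thing that differs between the two “jurisdictions” is the number of required appearances.

```python
# Illustrative only: how the number of required court appearances changes
# the chance of missing at least one, even when the per-appearance
# likelihood of showing up is identical. The 95% figure is a hypothetical
# assumption, not a number from any validated instrument.
p_appear = 0.95  # assumed chance of appearing at any single court date

for n_appearances in (2, 6):
    # P(miss at least one) = 1 - P(appear every time)
    p_miss_at_least_one = 1 - p_appear ** n_appearances
    print(f"{n_appearances} appearances: "
          f"{p_miss_at_least_one:.0%} chance of missing at least one")
```

Under these assumptions, the defendant facing two appearances misses at least one about 10 percent of the time, while the identically situated defendant facing six appearances misses at least one about 26 percent of the time—enough, in many frameworks, to move the same person into a higher risk category.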

You can find out what statistical risk corresponds to each risk classification level in your jurisdiction by looking at the validation report. A validation report is a study that was conducted to verify that a risk assessment tool is predictive in a particular place and time. If there is no recent validation report conducted in your jurisdiction, you should ask for one: If crime rates or other key facts have changed, then the numbers from an outdated validation study may no longer hold true in your courtroom. It’s important to know how “risky” a “high-risk” person truly is in order to make informed decisions.

What Goes into a Risk Assessment Tool?

Imperfect Measures of Past Criminal Behavior

Some of the most common inputs to risk assessment tools are criminal record variables: prior arrests, prior convictions, prior incarceration, and so forth. The criminal record, however, is an imperfect measure of actual crimes committed. The data that train risk assessment algorithms come from a system of criminal justice that is not equitable in its practice across lines of race and poverty. The likelihood of arrest, conviction, and incarceration is influenced by what neighborhood you grow up in, what your skin color is, and what kind of lawyer you can afford. Nonwhite people are more likely to face hostility from the police,5 and more likely to be arrested,6 than similarly situated whites. Those who can’t afford bail are more likely to plead guilty (resulting, at times, in lengthy sentences) than those who can afford to post bond.7 Black people are less likely to have their charges dropped, dismissed, or reduced,8 and when sentenced to incarceration, they receive harsher sentences than similarly situated whites.9

As human beings, we know that having two prior convictions means something very different if you’re a banker who lives on the Upper West Side than if you’re a supermarket cashier from the Bronx. However, the risk assessment algorithm cannot distinguish between the two. Because the inputs to the algorithm reflect a race-and-class-biased world, the resulting risk scores reflect that bias.

There is no easy fix to this issue. We’ll never know the actual criminal history; all we have are imperfect measures. However, it’s important to be aware that the risk score reproduces race and class disparities and to make decisions with these ongoing inequities in mind.

Social and Economic Factors

Risk assessment tools also can include socioeconomic measures like employment status, unstable housing, or education level. The criminal justice system generally approaches such factors with caution; increasing restrictions on liberty for people because of economic disadvantage raises both ethical and legal concerns.10 For this reason, some risk assessment tools, such as the PSA, do not include socioeconomic markers at all. However, if your jurisdiction uses a tool that includes socioeconomic markers as inputs, it’s important to understand why they are included and how to interpret the resulting scores.

The risk of arrest for a new low-level offense, or of missing a court date, can be the direct product of social and economic disadvantage. For example, those who fall behind on paying parking tickets or other financial obligations can lose their driver’s license. Driving on a suspended license is one of the most common misdemeanor arrest categories in certain jurisdictions.11 Furthermore, those who live in poverty may be at the mercy of public transportation each time they attempt to return to court. Those with limited educational credentials, if employed, are more likely to be in an entry-level service-sector job and may struggle to obtain permission to make weekday court appearances. It is not surprising, then, that social and economic markers can be used to anticipate these risks.

Here again, understanding the character or flavor of a risk is at least as important as understanding its magnitude. When a risk assessment uses markers of social disadvantage to predict the vagaries of social disadvantage, the nature of the risk may imply that the person should not be jailed or given burdensome conditions because the risk he or she poses is neither a threat to public safety nor a risk of flight from the jurisdiction. It is especially important to identify the impact that poverty has on the risk score if your jurisdiction uses money bail as a condition of release. Increasing the bail amount because someone is poor (and thus scored as higher risk) is likely to result in de facto detention because the defendant can’t pay bail. This is a waste of taxpayer resources and can result in constitutional challenges and civil rights litigation.12

A Heavy Dose of Age: Defendants Can Be “High Risk” Because They Are Young

It’s long been known that the crime rate is, on average, higher among young people than older people. For this reason, age is one of the most common factors in a risk assessment instrument and is often very heavily weighted. Age alone explains almost 60 percent of the variation in COMPAS’s Violent Recidivism Risk Score.13 Being under 23 adds as many points to the PSA’s New Criminal Activity risk score as having three or more prior violent felony convictions.14

Imagine you have two defendants in front of you, and both are labeled “high risk.” One has this label because she has an extensive record of convictions for serious crimes. The other has this label because she is 19 years old and has a prior marijuana arrest. Statistically speaking, the two defendants actually may have the same level of risk—but their circumstances and histories are very different. Teenagers and young adults are in a different developmental stage than older adults; they may be more susceptible to peer influence, have limited ability to appreciate long-run consequences, and may have higher capacity to rehabilitate.15 Thus, the appropriate pretrial decision for the teenager may be different from the appropriate decision for the serial offender, even where their risk numbers are the same.

Ask your pretrial officer (or whoever calculates the risk assessment) to let you know what the most important factors are in a particular defendant’s risk score. In other words, instead of just learning that a defendant is “high risk,” ask the pretrial officer to tell you, “this defendant is high risk, and the three most important factors in her score are X, Y, and Z.” If this is not possible, try to learn the factors and the weights in your risk assessment score so you can evaluate this issue yourself. “High risk” can mean different things depending on the defendant, and the more you understand the label, the better decisions you can make.


Jurisdictions choosing to implement risk assessments are doing so for urgent reasons: to protect the rights of accused people at a sensitive and consequential point in their adjudication, while also trying to keep communities safe and the justice system functioning. No algorithm can tell the entire story about an individual or explain why some people will go on to commit crime in the future and others will not. No risk assessment tool is perfect—yet, as a judge directed to use one of them, you can use them in ways that maximize fairness and justice. Doing so requires a careful understanding of what the tool is measuring and how that might differ across race, class, and age. We hope that this primer will help you in that process.

Despite America’s long history of trying to make criminal justice more “scientific,”16 risk assessment tools continue to embody the ambiguities and complexities of real life—as well as racial and economic disparities. Some advocates argue that the disparities embedded in risk assessment tools mean that such tools have no rightful place in pretrial justice, and we hope that this article has empowered you to understand that view. Other participants in the debate imagine that tools can be helpful if used conscientiously, or in a manner that enables more efficient use of limited court resources. For instance, risk assessment tools can provide a first screening by which to identify a large group of defendants for immediate and automatic release, as is done in Kentucky.17 By enabling the release of a large swath of defendants without a bail hearing, risk assessments restore room for the important work of judges to conduct a substantial, in-depth hearing to identify those defendants who might truly pose an identifiable risk to an individual person or to the community.

Judges wrestle with extraordinary challenges in pretrial decision making. As a judge who will use these tools with living people before you—and with the safety of the community in mind—you have both the opportunity and the need to reflect on how these tools inform your decision making. As your experience with pretrial tools builds up over months and years, we urge you to communicate this wisdom to your jurisdiction, to other criminal justice partners, to your community’s elected decision makers, to independent researchers, and to civil society. Understanding your experiences will be particularly valuable for those who have rarely been present in these conversations before. Your experiences with risk assessment tools will define and determine their impact.

Judicial leadership is an essential part of a growing national conversation about the real-world impact of algorithmic decision making in criminal justice. As practitioners using pretrial risk assessment tools, you have a unique voice. Far from being replaced by machines, your expertise, judgment, and careful attention are needed now, more urgently than ever before.


1. Each of us has written extensively about what role, if any, algorithmic risk assessment should play in pretrial decision making. See, e.g., Hannah Sassaman, Artificial Intelligence Is Racist Yet Computer Algorithms Are Deciding Who Goes to Prison, Newsweek (Jan. 24, 2018); John Logan Koepke & David G. Robinson, Danger Ahead: Risk Assessment and the Future of Bail Reform, 93 Wash. L. Rev. (forthcoming, 2018); Megan Stevenson, Assessing Risk Assessment in Action, 103 Minn. L. Rev. (forthcoming, 2018).

2. Daniel A. Krauss, Gabriel I. Cook & Lukas Klapatch, Risk Assessment Communication Difficulties: An Empirical Examination of the Effects of Categorical Versus Probabilistic Risk Communication in Sexually Violent Predator Decisions, Behavioral Sciences & the Law (forthcoming, 2018).

3. Sandra Mayson, Dangerous Defendants, 127 Yale L.J. 490, 514 (2018).

4. Id.

5. Rob Voigt et al., Language from Police Body Camera Footage Shows Racial Disparities in Officer Respect, 114 Proc. Nat’l Acad. Sci. 6521, 6521 (2017) (“We find that officers speak with consistently less respect toward black versus white community members, even after controlling for the race of the officer, the severity of the infraction, the location of the stop, and the outcome of the stop.”).

6. See, e.g., Brad Heath, Racial Gap in U.S. Arrest Rates: ‘Staggering Disparity,’ USA Today (Nov. 18, 2014) (“Blacks are more likely than others to be arrested in almost every city for almost every type of crime.”); Drug Policy Alliance, From Prohibition to Progress: A Status Report on Marijuana Legalization (Jan. 2018) (showing that after Colorado legalized marijuana, arrests for white people decreased by 51 percent; arrests for Latino people, however, decreased by only 33 percent; and arrests for black people decreased by only 25 percent).

7. Kristian Lum & Mike Baiocchi, The Causal Impact of Bail on Case Outcomes for Indigent Defendants, Proc. of 4th Workshop on Fairness, Accountability & Transparency in Mach. Learning 1, 4 (Aug. 2017) (“We find a strong causal relationship between setting bail and the outcome of a case. . . . [F]or cases for which different judges could come to different decisions regarding whether bail should be set, setting bail results in a 34 percent increase in the chances that they will be found guilty.”). See also Emily Leslie & Nolan G. Pope, The Unintended Impact of Pretrial Detention on Case Outcomes: Evidence from NYC Arraignments, 60 J.L. & Econ. 529 (2017); Megan Stevenson, Distortion of Justice: How Inability to Pay Bail Affects Case Outcomes, SSRN Elec. J., Jan. 12, 2017.

8. Carlos Berdejó, Criminalizing Race: Racial Disparities in Plea Bargaining, 59 B.C. L. Rev. 1187 (2018) (finding in Wisconsin state courts that “[w]hite defendants are twenty-five percent more likely than black defendants to have their principal initial charge dropped or reduced to a lesser crime,” making whites who face felony charges less likely to be convicted of felonies, and that “white defendants initially charged with misdemeanors are more likely than black defendants either to be convicted for crimes carrying no possible incarceration, or not to be convicted at all,” while noting that plea bargaining patterns are similar across races for the most serious crimes).

9. U.S. Sentencing Comm’n, Demographic Differences in Sentencing: An Update to the 2012 Booker Report 2 (Nov. 2017) (finding that from 2012 to 2016, “Black male offenders received sentences on average 19.1 percent longer than similarly situated White male offenders”); Jill K. Doerner & Stephen Demuth, The Independent and Joint Effects of Race/Ethnicity, Gender, and Age on Sentencing Outcomes in U.S. Federal Courts, 27 Just. Q. 1 (2010) (“We find that Hispanics and blacks, males, and younger defendants receive harsher sentences than whites, females, and older defendants after controlling for important legal and contextual factors.”).

10. See, e.g., Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803 (2014).

11. Nat’l Ass’n of Crim. Def. Laws., Minor Crimes, Massive Waste: The Terrible Toll of America’s Broken Misdemeanor Courts 26 (Apr. 2009).

12. See, e.g., Bob Egelko, Court Ruling Could Change State’s Approach to Bail, S.F. Chron. (Jan. 25, 2018).

13. Christopher Slobogin & Megan Stevenson, Algorithmic Risk Assessment and the Double-Edged Sword of Youth 14 (Unpublished Working Paper, 2018).

14. Id. at 19–20.

15. Id. at 5–9.

16. Note, Bail Reform and Risk Assessment: The Cautionary Tale of Federal Sentencing, 131 Harv. L. Rev. 1125 (2018).

17. An “administrative release” program was adopted statewide in Kentucky on January 1, 2017, whereby most nonsexual, nonviolent, non-DUI misdemeanants who fall outside of the high-risk category are released on their own recognizance after booking and before arraignment. B. Scott West, The Next Step in Pretrial Release Is Here: The Administrative Release Program, The Advocate, Jan. 2017, at 1.


David G. Robinson

David G. Robinson is managing director at Upturn and an adjunct professor at Georgetown University Law Center. He is also a co-director of the MacArthur Foundation’s Pretrial Risk Management Project. He can be reached at [email protected] or @dgrobinson on Twitter. 

Hannah Jane Sassaman

Hannah Jane Sassaman is the policy director at the Philadelphia-based community organizing nonprofit Media Mobilizing Project. She is a current Soros Justice Fellow working to help communities engage with risk assessments when included in pretrial decision making. She can be reached at [email protected] or @hannahsassaman on Twitter.

Megan Stevenson

Megan Stevenson is an assistant professor of law at George Mason University. She is also an economist and legal scholar who has done extensive research on risk assessments and the pretrial system. She can be reached at [email protected] or @MeganTStevenson on Twitter.