March 01, 2017

Expert Evidence Post-Daubert: The Good, the Bad, and the Ugly

The Daubert trilogy requires scientists to be scientists first and expert witnesses second, but how are judges and lawyers to evaluate the testimony of scientists?

Demosthenes Lorandos


Bite mark analysis, bullet lead comparison, and the myth of fingerprints. Have you read the latest report from the President’s Council of Advisors on Science and Technology? Are you ready to tackle the cargo cult scientists? Your client’s life may depend on it.

The Good

The trilogy of Daubert, Joiner, and Kumho revolutionized expert evidence law. Since Daubert’s dénouement, trial judges are required to apply the criteria of validity and reliability to all proposed expert testimony. In this sweeping new doctrine, the Supreme Court required that experts’ opinions must have a reliable basis in the knowledge and experience of the experts’ discipline. The Court instructed trial judges to make a preliminary assessment of whether the reasoning or methodology underlying proposed testimony is scientifically valid, and of whether that reasoning or methodology can properly be applied to the facts in issue. In a case involving scientific evidence, the Court instructed that evidentiary reliability must be based on scientific validity.

The Daubert trilogy’s key concepts distill into a set of 11 reliability criteria applicable to all expert testimony in federal courts and in state courts that have adopted the Daubert standard:

Reasoning – Methodology – Validity – Reliability – Empiricism – Hypothesis Generation – Falsification – Testing – No More Ipse Dixit – Validity of Qualifications – Intellectual Rigor


These reliability criteria exhort scientists to do good science and require them to be scientists first and expert witnesses second. Following Daubert, appellate courts insisted that their trial courts must create sufficient records so that the basis for admissibility decisions can be reviewed. Every step in the proffered expert’s procedure has to be evaluated because any step that renders the process unreliable thereby renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.

In practice, the mechanism for the preliminary assessment of proffered experts and their testimony came to be called a Daubert hearing. The rule basis is Federal Rule of Evidence 104(a), which calls for preliminary questions concerning the qualification of a person to be an expert witness, or the admissibility of evidence, to be determined by the court. In a growing number of cases, appellate courts ruled that it is reversible error not to hold a Daubert hearing. E.g., Padillas v. Stork-Gamco, Inc., 186 F.3d 412 (3d Cir. 1999); United States v. Vitek Supply Corp., 144 F.3d 476 (7th Cir. 1998); United States v. Velasquez, 64 F.3d 844, 849–52 (3d Cir. 1995).

The Daubert trilogy does not require trial courts to determine which of several competing scientific theories is most correct. The demand is simply that the proponent of the evidence show that the expert’s conclusion was arrived at in a scientifically sound and methodologically reliable fashion. The proponent is also required to show that the expert’s testimony is grounded in an accepted body of learning or experience in the expert’s field, and the proposed expert must explain how the conclusion is so grounded.

In this way, the trilogy brought scientific culture to the courtroom. Judges must now distinguish between a number of analytically distinct concepts: Is this particular expert qualified? Do the qualifications of this expert fit the facts of this case? What is the scientific validity of the methodology the expert has used? What is the scientific reliability of the methodology used? What is the scientific validity of the underlying data the expert bases his or her opinions on? What is the scientific reliability of the underlying data? To what extent is the expert’s reliance on those data reasonable? In this way, the trilogy was revolutionary.

Some courts naïvely assumed that the trilogy set a lower threshold for admissibility than had the precursor Frye test of general acceptance. The Daubert Court itself realized that scientific assertions that had long been accepted could still be found to be unsound science. 509 U.S. at 593 n.11. We should remember that Ptolemy’s theory that the Earth was the center of the solar system was generally accepted until Copernicus came along. Bloodletting was generally accepted as effective therapy until clinical trials demonstrated the futility of this brutal practice. And how many women did physicians infect with childbed fever because it was generally accepted that “doctors are gentlemen and gentlemen’s hands are clean”? The trilogy was clearly revolutionary, and this revolution moved the tectonic plates of expert jurisprudence in the English-speaking world.

As the ink was drying on the pages of the Supreme Court Reporter, the Canadians held commission after commission into prominent wrongful convictions. The investigative research that emerged from those cases persuaded the Supreme Court of Canada to read “reliability” into Canadian admissibility jurisprudence and to endorse the trilogy criteria. R. v. Mohan, [1994] 2 S.C.R. 9 (Can.). The United Kingdom recently engaged in significant evaluations of expert evidence in England and Wales in Access to Justice: Final Report to the Lord Chancellor on the Civil Justice System in England and Wales (1996) by Harry Woolf and in Expert Evidence in Criminal Proceedings in England and Wales 66–67 (2011) by the House of Commons’ Law Commission (Law Comm’n No. 325). Following their investigations, the courts of England and Wales adopted the trilogy criteria of reliability and insisted that expertise be predicated on sound principles with methodology-based techniques and assumptions and that they be properly applied to the facts of the case. Even the folks down under are beginning to recognize that their expert evidence jurisprudence is built on naïve notions of science. Reformers in Australia are pushing for the adoption of the trilogy’s notions of testability, validity, and reliability.

Initial experience demonstrated that Daubert alone had the effect of excluding evidence that had been admitted previously. Old shibboleths came under attack as the trilogy’s analysis encouraged new questions. Psychologists and legal scholars discovered that exposing witnesses to information about a suspect or a case that is not required for an interpretation, so-called domain irrelevant information, has a strong potential to mislead. Studies found that irrelevant information influences decisions about whether the profile of a suspect appears in a mixed DNA sample. Itiel E. Dror & Greg Hampikian, Subjectivity and Bias in Forensic DNA Mixture Interpretation, 51 Sci. & Just. 204 (2011). Studies found that co-witness manipulation (information that other witnesses have identified the same perpetrator from a lineup) has a significant confidence inflation effect on eyewitness identifications. Carolyn Semmler, Neil Brewer & Gary L. Wells, Effects of Post-Identification Feedback on Eyewitness Identification and Nonidentification Confidence, 89 J. Applied Psych. 334 (2004). Studies found that experienced fingerprint examiners change their minds about whether two fingerprints match when given domain irrelevant information. Itiel Dror, David Charlton & Ailsa Péron, Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications, 156 Forensic Sci. Int’l 74 (2006). And after a witness or analyst has been exposed to extraneous information, there is no way of decontaminating the resulting opinion. In a recent example, the Iowa Supreme Court overturned the conviction in Iowa v. Hillary Lee Tyler, No. 13-0588 (June 30, 2015), because domain irrelevant information was given to the medical examiner.

None of the important questions underpinning the research on domain irrelevant information—and many other challenges to bad science as expert evidence—would have made it to court before the trilogy’s revolution.

The Bad

Seven years post-revolution, Justice Breyer opined that “most judges lack the scientific training that might facilitate the evaluation of scientific claims or the evaluation of expert witnesses who make such claims.” Stephen Breyer, Science in the Courtroom, 16 Issues in Sci. & Tech. 52, 53 (2000). Justice Breyer was right. A 2001 study with a survey frame of 9,715 state trial court judges from all 50 states and the District of Columbia concluded that “although the judges surveyed reported that they found the Daubert criteria useful for determining the admissibility of proffered expert evidence, the extent to which judges understand and can properly apply the criteria when assessing the validity and reliability of proffered scientific evidence was questionable at best.” Sophia I. Gatowski et al., Asking the Gatekeepers: A National Survey of Judges on Judging Expert Evidence in a Post-Daubert World, 25 Law & Hum. Behav. 433, 452 (2001).

Are lawyers and judges science illiterates? Federal Administrative Law Judge John C. Holmes explained that judges, lawyers, and scientists come from profoundly different worlds of experience and education. Holmes argued that math and science students gravitate toward careers in chemistry, engineering, medicine, physics, and such, but future lawyers and judges spend much of their time avoiding math and science. For example, in the academic year 1996–97, only 5.3 percent of applicants to law schools had majored in the natural sciences (including 0.5 percent who majored in mathematics) as compared with 32.7 percent who majored in political science, history, or English. Law Sch. Admission Council, National Statistics Report (Mem. No. 98-6) (1998). Judge Holmes concluded that lawyers and judges are not just ignorant of science but are averse to it. John C. Holmes, Book Review, 48 Fed. Law. 68 (2001).

So how is this evidence revolution playing out in the real world of day-to-day judicial decision making? From the bench, the revolution looks suspiciously like an unfunded mandate. Numerous studies have observed little difference in outcomes between Frye and Daubert jurisdictions. Many of the same studies suggest that judges in Daubert jurisdictions do not apply the trilogy’s criteria when making their admissibility decisions but instead fall back on simplistic criteria like qualifications and experience. Researchers reported that it is judges’ sociopolitical attitudes that most influence their judgments about the relevance of scientific evidence. They argue that, as a general proposition, judges disfavor civil plaintiffs and criminal defendants. Many commentators report that courts appear to apply a more lenient standard to forensic scientists working with prosecutors than to experts offered by plaintiffs in civil cases.

Commentators describe this laissez-faire approach to expert admissibility by post-trilogy judges as a culture of acceptance. When litigants clash over the reliability of proffered scientific testimony, researchers argue, even the complete absence of foundational research has not prevented the admission of junk science in Daubert jurisdictions. Of particular concern is courts’ general abdication of serious critical review of the non-DNA forensic identification sciences. These fields include bite mark analysis, bullet lead comparison, hair sample analysis, and the like. These fields are scientific and technological failures. They are not sciences in any way beyond the rhetorical, and their technological competence is unknown because error rates for their application have never been determined.

And what of Wigmore? “Cross-examination is the greatest legal engine ever invented for the discovery of truth.” 5 J. Wigmore, Evidence § 1367, at 32 (J. Chadbourn rev. 1974). What about preliminary hearings, witness voir dire, or special instructions? Research documents that cross-examining advocates do not probe, test, or challenge the underlying basis of an expert’s opinion evidence but instead adopt the simpler approach of trying to undermine the expert’s credibility. Scholars tell us that courts and attorneys overestimate the value of cross-examination in dealing with bad science, and emerging research questions the effectiveness of all of the potential trial safeguards individually and in combination.

And what of the Daubert requirement that experts know and base their inferences on the data of their discipline? What about using the very treatises experts must know and rely on to describe their methodology, databases, and inferences? Research has found that in most reported decisions, the parties and the “experts” do not tend to refer to relevant scientific studies or bring the court’s attention to the existence of critical literature. This was found to occur even when the state’s expert witness knows the data. As a result, lawyers, judges, and juries are often oblivious to relevant literature, alternative techniques, experimental research, and critical commentary that might be directly relevant to the issues confronting the court. It is no wonder that in an extensive survey of expert admissibility determinations in the United States, United Kingdom, Canada, and Australia, researchers concluded that the state of post-trilogy evidence law is deplorable.

The Ugly

Prosecutors and law enforcement personnel know that juries love a good show. CSI: Crime Scene Investigation was one of the most popular television shows of all time. During its 15 years on American television, it was rated the number one watched television show five times. CSI was recognized as the most popular dramatic series internationally by the Festival de Télévision de Monte-Carlo, which awarded the series the International Television Audience Award three times. Yes, juries love a good show, and for the most part, television is where they get their “science.”

When we fess up to the fact that we are living in the Fox News generation, we recognize that critical thinking about the sciences is woefully lacking in our society. As litigators, we must face some sobering facts: For every five hours of cable news, one minute is devoted to science; 46 percent of Americans believe the Earth is less than 10,000 years old; and the number of newspapers with science sections has shrunk by two-thirds in the past 20 years. Richard Hofstadter’s Anti-Intellectualism in American Life (1963) won the Pulitzer Prize in 1964. Charles Freeman’s The Closing of the Western Mind (2002) cogently documents how over time, dogma replaced critical thinking and exploration in our world. Al Gore’s The Assault on Reason (2007), a New York Times best seller, argues that the marketplace of ideas has been slowly corrupted by the politics of fear, secrecy, cronyism, and blind faith. Picking up where Hofstadter’s Anti-Intellectualism left off, Susan Jacoby’s The Age of American Unreason (2008) carefully analyzes “junk thought” in America, tracing it to “a pervasive malaise fostered by the mass media, triumphalist religious fundamentalism, mediocre public education, a dearth of fair-minded public intellectuals, . . . and a lazy and credulous public.” In Unscientific America: How Scientific Illiteracy Threatens Our Future (2009), best-selling author Chris Mooney and scientist Sheril Kirshenbaum argue that religious ideologues, a weak education system, science-phobic politicians, and the corporate media have collaborated to create a dangerous state of science illiteracy.

Is it any wonder that, among evidence scholars, the courts’ handling of forensic evidence in admissibility hearings and trials has been soundly and universally excoriated? In a study of 137 persons who were exonerated by post-conviction DNA testing, researchers found that 60 percent of the cases involved invalid forensic science testimony. Brandon L. Garrett & Peter J. Neufeld, Invalid Forensic Science Testimony and Wrongful Convictions, 95 Va. L. Rev. 1, 14–15 (2009). Researchers also determined that many witnesses giving forensic science evidence are oblivious to, or inadequately trained to deal with, validation, reliability, and the ways in which opinions should be expressed.

Into this quagmire stepped the National Academy of Sciences. Its National Research Council (NRC) authored an exhaustive 2009 report titled Strengthening Forensic Science in the United States: A Path Forward. Part of the NRC’s mandate was to try to understand the value of forensic science and medical evidence and the effectiveness of admissibility standards. According to the report, the amount of actual science in a forensic science method has an important bearing on the reliability of forensic evidence in our courts. The report instructed that there are two questions that should underlie the law’s admission of forensic evidence: (1) the extent to which a particular forensic discipline is founded on a reliable scientific methodology that gives it the capacity to accurately analyze evidence, and (2) the extent to which practitioners in a forensic discipline rely on human interpretation that could be tainted by error, bias, or the absence of sound operational procedures and standards. The authors expressed surprise at the lack of experimental evidence, or even a knowledge base, underpinning many forensic science techniques. The report argued that, with the exception of nuclear DNA analysis, no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between the piece of evidence and a specific individual or source. The authors of the report found that in a number of forensic science disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions. Citing this as “serious,” the report found that the courts have been utterly ineffective in addressing this problem. Citing the lack of scientific expertise among lawyers and judges, the authors of the NRC report concluded that the legal system is ill equipped to correct the problems of the forensic science community.

The trilogy revolution has shifted the ground beneath our feet, and we’re stumbling badly. Science and legal scholars tell us we’re ill-equipped and utterly ineffective. But isn’t it the very revolution of the Daubert trilogy that has prompted us to ask the harder questions? Remember domain irrelevant information? That’s the very research that gave us a deeper understanding of how terribly wrong Federal Bureau of Investigation (FBI) fingerprint analysts could be. Remember the hullaballoo over Brandon Mayfield and the Madrid bombing? Mayfield was an Oregon attorney whose fingerprints, according to the FBI, were a “100% verified match” to a print left at the scene of a horrific bombing of the Cercanías commuter train in Madrid. Jailed and held incommunicado, Mayfield finally got in touch with legal counsel and fought back. What did the Office of the Inspector General discover about the FBI fingerprint experts? The overconfidence of the FBI analysts came after the analysts learned that former U.S. Army Lieutenant Mayfield married an Egyptian college professor’s daughter and converted to Islam. This domain irrelevant information seemed to push them past reason. Federal District Judge Ann Aiken determined that the FBI’s fingerprint “match” was largely fabricated and concocted by the FBI and the U.S. Department of Justice. When the dust settled, the FBI paid Mayfield $2 million and apologized.

Another sacred cow became hamburger when a federal court in Illinois excluded the government’s proffered testimony regarding lead comparisons between bullets found in a defendant’s car and the bullets found in the victim’s body. In United States v. Mikos, No. 02-CR-137 (N.D. Ill. Dec. 5, 2003), the court found that the expert’s experience in the field and the FBI’s experience over the previous 30 years with bullet lead comparison lacked scientific methodology and was therefore anecdotal evidence at best. Shortly after this courageous federal judge asked the trilogy questions, the National Academies published a committee report that was critical of bullet lead comparison evidence, Forensic Analysis: Weighing Bullet Lead Evidence (2004). Soon after the report was published, another mea culpa came from the FBI. The bureau announced that it would no longer offer bullet lead comparison evidence in court. Recently, Alabama released Anthony Hinton after 30 years on death row because his conviction was based solely on this same forensic evidence. Alabama was forced to concede that the bullets in question did not actually match the weapon used.

The questions the trilogy prompts us to ask closed another forensic smoke-and-mirrors show. Bite mark testimony has been finding its way into evidence since the 1950s. But when the NRC concluded that there was nothing to indicate that courts review bite mark evidence pursuant to Daubert’s standard of reliability, men locked away for decades walked free. For example, Mississippi released two men who had been convicted of separate murders based on testimony that their teeth perfectly matched bite marks on the victims. In one of the cases, after DNA later identified the true killer, experts demonstrated that the wounds were not human bites and were most likely caused by crawfish nibbling on the corpse. Under attack by the very questions the trilogy demands, the FBI and crime labs all over the English-speaking world capitulated again.

Why Does This Happen?

The trilogy aside, why does this keep happening? Researchers argue that there is a systemic pro-prosecution bias on the part of judges and that, regardless of the standard of admissibility, this bias is reflected in admissibility decisions. Donald E. Shelton, Forensic Science Evidence and Judicial Bias in Criminal Cases, 49 Judges’ J. 18 (2010). According to this body of research, at the trial court level, prosecution experts were admitted 95.8 percent of the times they were offered, but defense experts were admitted only 7.8 percent of the times they were offered. The ugly reality is that in our courts, prosecutors are the champions of “cargo cult science.”

Nobel Prize–winning physicist Richard Feynman first used the term “cargo cult science” during his 1974 commencement address at the California Institute of Technology. Feynman explained that during World War II, Pacific Islanders watched cargo planes land with loads of food and material. When the war was over and the troops returned home, the islanders wanted the cargo planes to keep returning; so they made runways, stationed a man with wooden headphones and bamboo for antennas, lighted fires, and waited for the planes to land. As Feynman explained it, cargo cult scientists act in the same way: “They follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential because the planes don’t land.”

The obvious consequence of this ugly mess is that in many criminal proceedings, prosecutors put forward and judges admitted incriminating expert opinions that were misleading, exaggerated, or flat-out wrong. In many more cases, unreliable forensic science techniques and misleading interpretations contributed to guilty pleas, and opportunities to expose erroneous assumptions, false confessions, and misleading or grossly exaggerated evidence were lost. In the face of staggering incarceration rates and wrongful convictions, continuing reliance on unreliable and speculative opinions and blind faith in the value of trial safeguards have eroded the social legitimacy of our courts.

The Daubert trilogy worked a revolution in expert evidence law, but it was unfair and unrealistic to expect—Shazam!—that lawyers and judges who couldn’t separate a regression equation from an analysis of variance yesterday would suddenly get it today. Real-world practice in expert evidence law has made it clear that the unfunded mandate to develop scientific literacy has not worked out so well. Remember, seven years post-Daubert, Justice Breyer opined that “most judges lack the scientific training that might facilitate the evaluation of scientific claims or the evaluation of expert witnesses who make such claims.” He was right then and he’s right now, but psychologists have good news for us. Research on methodological reasoning suggests that training in the scientific method can improve individuals’ reasoning abilities. The relevant research demonstrates that with access to the tools of scientific reasoning, judgments concerning validity and reliability dramatically improve. Additional research has demonstrated that subjects given brief training in methodological reasoning provided more scientifically sophisticated answers to real-world problems.

Justice Breyer was right again when he said we need more judicial education. And we need it now more than ever.

Demosthenes Lorandos

The author is of counsel at Lorandos Joshi in Ann Arbor, Michigan.