Criminal Justice Magazine
Spring 2002
Volume 17 Issue 1

Scientific Evidence

Paul C. Giannelli

Fingerprints Challenged!

Well, it finally happened. A successful challenge to the admissibility of fingerprint evidence. In United States v. Plaza, 179 F. Supp. 2d 492 (E.D. Pa. 2002), Judge Pollak held that a fingerprint expert could not give an opinion that two sets of fingerprints "matched"—that is, a positive identification to the exclusion of all other persons. And then he reversed himself. (2002 WL 389163 (E.D. Pa. Mar. 13, 2002).) "In short, I have changed my mind." When was the last time you saw those words in a judicial opinion? Never! Note that the first opinion was based on the stipulated record made in a prior case, United States v. Mitchell, two years earlier; live witnesses testified at the hearing on reconsideration.

Both Plaza opinions are significant. Fingerprints are the gold standard of forensic science. They are considered the most reliable type of evidence because they are thought to be unique and unchanging over time. It is not surprising that proponents of new scientific techniques often attempt to invoke a favorable comparison to fingerprints—for example, "voiceprints" and DNA "fingerprinting." Similarly, firearms identification has been described as a "ballistic fingerprint" and neutron activation analysis as a "nuclear fingerprint." Fiber evidence has been touted as "nearly" as valuable as fingerprint evidence. (Actually, not by a long shot.) In drug analysis, a molecule’s infrared spectrophotometric spectrum has sometimes been referred to as its "fingerprint." In one of Britain’s IRA cases, a prosecutor incorrectly stated that certain tests for bomb residues "were like fingerprints." (Sir John May, Interim Report on the Maguire Case at 25 (July 9, 1990).) Perhaps the most far-fetched comparison occurred during an evidentiary hearing in the Mike Tyson trial. The prosecutor argued that "state of mind is like fingerprint evidence." (Cleveland Plain Dealer, Feb. 2, 1992, at 6D). (Hard to believe a grown man said that.) In just about every instance the fingerprint comparison is more misleading than helpful. None of these techniques is as unique as fingerprints. And now fingerprints may not be as good as "fingerprints"!

The issue is not simply whether people have distinctive fingerprints. (Judge Pollak took judicial notice of the uniqueness and permanence of fingerprints in Plaza I.) The question is the reliability of match determinations given the size and clarity of the latent print found at the crime scene. Latent prints are usually about 20 percent the size of rolled prints and subject to much distortion. Even if individuals have unique fingerprints, the process of determining matches may be unreliable. In Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), the Supreme Court required judges to scrutinize the reliability of scientific evidence. In Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999), the Court extended the reliability requirement to nonscientific expert testimony. As one district court has stated, the Supreme Court in Kumho was "plainly inviting a reexamination even of ‘generally accepted’ venerable, technical fields." (United States v. Hines, 55 F. Supp. 2d 62, 67 (D. Mass. 1999) (limiting handwriting comparison testimony).)

In describing the trial judge’s screening or "gatekeeping function," the Daubert Court identified a number of factors. First, in evaluating reliability, a judge should determine whether the scientific theory or technique can be and has been tested. Second, whether a theory or technique has been subjected to peer review and publication is a relevant consideration. Third, a technique’s known or potential rate of error is a pertinent factor. Fourth, the existence and maintenance of standards controlling the technique’s operation are other indicia of trustworthiness. Finally, "general acceptance" remains an important consideration.

Judge Pollak’s analysis of these factors is what makes Plaza I such a compelling case, and he did not change his interpretation of Daubert in Plaza II, only its application. This is perhaps best understood by comparing Plaza I to United States v. Havvard, 117 F. Supp. 2d 848 (S.D. Ind. 2000), aff’d, 260 F.3d 597 (7th Cir. 2001), a prior fingerprint case, in which the court held that fingerprint evidence satisfied the Daubert-Kumho reliability test. Indeed, that court described fingerprint expertise as "the very archetype of reliable expert testimony under those standards." (Id. at 855.)

Havvard found that latent print identification had been "tested" for nearly 100 years in adversarial proceedings with the highest possible stakes—liberty and sometimes life. In contrast, Plaza I pointed out that Daubert requires scientific, not judicial, testing:

"[A]dversarial" testing in court is not, however, what the Supreme Court meant when it discussed testing as an admissibility factor. . . . It makes sense to rely on scientific testing, rather than "adversarial" courtroom testing, because to rely on the latter would be to vitiate the gatekeeping role of federal trial judges, thereby undermining the essence of Rule 702 as interpreted by the Court in Daubert. If "adversarial" testing were the benchmark—that is if the validity of a technique were submitted to the jury in each instance—then the preliminary role of the judge in determining the scientific validity of a technique would never come into play. Thus, even 100 years of "adversarial" testing in court cannot substitute for scientific testing when the proposed expert testimony is presented as scientific in nature.

This is the crux of the opinion, and Judge Pollak got it right. Daubert-Kumho requires scientific testing. The third case in the Daubert trilogy, General Electric Co. v. Joiner, 522 U.S. 136 (1997), illustrates this point. That case involved a toxic tort issue—whether PCBs caused small cell lung cancer. The Court examined epidemiological and animal studies. The focus was on science. The judge reaffirmed this analysis in Plaza II.

Next, in citing to "peer review," Havvard noted that a second qualified fingerprint examiner can compare the prints: "In fact, peer review is the standard operating procedure among latent print examiners." This shows a fundamental misconception of peer review as used in Daubert. Peer review means "refereed scientific journals." It is a screening mechanism and only the first step, followed by publication and then replication by other scientists. Review by a second fingerprint expert is a type of quality control; it is not peer review in the scientific community. This is especially a problem in forensic science because, as Judge Pollak noted in Plaza I: "Even those who stand at the top of the fingerprint identification field—people like David Ashbaugh and Stephen Meagher [FBI]—tend to be skilled professionals who have learned their craft on the job and without any concomitant advanced academic training. It would thus be a misnomer to call fingerprint examiners a ‘scientific community’ in the Daubert sense." The fact that there are articles on fingerprints in forensic literature does not necessarily mean that any article reported the results of scientific testing of the technique.

In Havvard, the court simply accepted the prosecution expert’s statement that the "error rate for the method is zero." However, the expert conceded that the ultimate judgment is subjective and that there is no minimum number of points required. In probing this issue, Judge Pollak wanted to discover the "practitioner error rate," because there is no fingerprint examination without a fingerprint examiner, and some proficiency examinations on fingerprints were troublesome. "In 1995 fewer than half of the 156 participating examiners—44%—correctly identified all five latent prints that were tested, while 31% of the examiners made erroneous identifications."

In Plaza II the prosecution offered evidence of the FBI’s proficiency testing. There was one "false positive" in 16 external tests taken by supervisory examiners. "The internal tests taken over the seven years numbered 431. These tests generated three errors, two in 1995 and one in 2000. Each of these three errors was a missed identification, i.e., a failure by the test taker to find a match between a latent print and a known exemplar which in fact existed; such an error is a ‘false negative’ which, being mistakenly exculpatory, is regarded by the FBI as considerably less serious than a false positive. In sum, the 447 proficiency tests administered in the seven years from 1995 through 2001 yielded four errors—an error rate of just under 1%."

However, defense experts were highly critical of the way the tests were conducted. Allan Bayle, a fingerprint examiner who worked for 25 years at New Scotland Yard and is a Fellow of the U.K. Fingerprint Society, testified that the FBI tests were too easy. "It’s not testing their ability. It doesn’t test their expertise. I mean, I’ve sent these tests to trainees and advanced technicians. And if I gave my experts these tests, they’d fall about laughing." (Insightful comment by author: This is not good for the prosecution.) Nevertheless, Bayle opined that, because of its clarity, one of the latent prints in Plaza was an easier match than those on the proficiency examinations.

Two other defense experts were not fingerprint examiners but experts on testing (psychometrics). They were highly critical of the FBI proficiency tests. "The test materials and uninformative attendant literature, taken together with the ambiguity as to the conditions governing the taking of tests (e.g., may the test takers consult with one another? To what extent is taking the test perceived to be competitive with or subordinate to the performance of concurrent work assignments?), gave few clues as to what the test makers intended to measure." Both experts believed that the "stratospheric" FBI test success rate "was hardly reassuring; to the contrary, it raised ‘red flags.’"

Another Daubert factor—governing standards—also proved problematic. Here Judge Pollak changed positions based on information gleaned from the rehearing. He found that there was fundamental uniformity in the standards used by the FBI and Scotland Yard. He also considered a British decision, Regina v. Buckley, 143 SJ LB 159 (1999), which examined British fingerprint standards in detail, and parliamentary discussions on the issue in the House of Lords.

In the end, Judge Pollak concluded, on the record before him, "that there is no evidence that certified FBI fingerprint examiners present erroneous identification testimony, and . . . that there is no evidence that the rate of error of certified FBI fingerprint examiners is unacceptably high. With those findings in mind, I am not persuaded that courts should defer admission of testimony with respect to fingerprinting . . . until academic investigators financed by the National Institute of Justice have made substantial headway on a ‘verification and validation’ research agenda. For the National Institute of Justice, or other institutions both public and private, to sponsor such research would be all to the good. But to postpone present in-court utilization of the ‘bedrock forensic identifier’ pending such research would be to make the best the enemy of the good."

Conclusion

In its interpretation of Daubert, Plaza I is a well-written opinion. Havvard is not. Plaza II did not alter this interpretation. The judge’s application of the Daubert-Kumho standard to fingerprint evidence is a different question—one on which there will assuredly be continued debate.

In any event, Plaza II is not a ringing endorsement. The judge recognized that the basic science (i.e., empirical testing) is missing. Indeed, he invited the National Institute of Justice (NIJ) to sponsor such research. Actually, in March 2000, the NIJ had released a solicitation for "Forensic Friction Ridge (Fingerprint) Examination—Validation Studies." The introduction to the solicitation states that Daubert "require[s] scientists to address the reliability and validity of the methods used in their analysis. Therefore, the purpose of this solicitation is to . . . provide greater scientific foundation for forensic friction ridge (fingerprint) identification." The legal community awaits the results of this solicitation.

Moreover, the FBI’s proficiency testing appears to be embarrassingly inadequate. At least, that is my interpretation when reading between the lines of the Scotland Yard expert’s comment that "they’d fall about laughing."

Finally, several articles and books have pointed to the lack of empirical support for fingerprint comparisons. (See Cole, Suspect Identities: A History of Fingerprinting and Criminal Identification (2001); Mnookin, Fingerprint Evidence in an Age of DNA Profiling, 67 Brooklyn L. Rev. 13 (2001); Saks, Merlin and Solomon: Lessons from the Law’s Formative Encounters with Forensic Identification Science, 49 Hastings L.J. 1069 (1998); Stoney, Fingerprint Identification, Mod. Sci. Evid., ch. 27 (2d ed. 2002).)

 

Paul C. Giannelli is the Albert J. Weatherhead III & Richard W. Weatherhead Professor of Law, Case Western Reserve University. He is coauthor of Scientific Evidence (Lexis 3d ed. 1999) and writes on expert testimony and forensic science. He is also a member of the Criminal Justice magazine editorial board.

 

