Volume 19, Number 6
September 2002


TRIAL PRACTICE

FINGERPRINTS CHALLENGED

By Paul C. Giannelli

Well, it finally happened: a successful challenge to the admissibility of fingerprint evidence. In United States v. Llera Plaza, Judge Pollak held that a fingerprint expert could not give an opinion that two sets of fingerprints "matched," that is, that they were a positive identification to the exclusion of all other persons. Then he reversed himself: "In short, I have changed my mind." When was the last time you saw those words in a judicial opinion? The first opinion was based on the stipulated record made two years earlier in a prior case, United States v. Mitchell; live witnesses testified in the second proceeding.

Both Plaza opinions are significant. Fingerprints are considered the most reliable type of evidence because they are thought to be unique and unchanging over time. It is not surprising that proponents of new scientific techniques often attempt to invoke a favorable comparison to fingerprints, as with "voiceprints," "DNA fingerprinting," and "ballistic fingerprints." Fiber evidence has been touted as "nearly" as infallible as fingerprint evidence. In just about every instance, the fingerprint comparison is more misleading than helpful; none of these techniques is as unique as fingerprints. And now even fingerprints themselves may not be.

Reliability questioned. The issue is not whether people have distinctive fingerprints but whether findings of a match are reliable given the size and clarity of the latent prints found at crime scenes. Latent prints are usually about 20 percent the size of rolled prints and are subject to considerable distortion. Even if individuals have unique fingerprints, the process of determining matches may be unreliable. In Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court required judges to scrutinize the reliability of scientific evidence. In Kumho Tire Co. v. Carmichael, the Court extended the reliability requirement to nonscientific expert testimony.

In describing the trial judge's gatekeeping (screening) function, the Daubert court identified a number of factors in evaluating reliability: (1) whether the scientific theory or technique can be and has been tested; (2) whether a theory or technique has been subjected to peer review and publication; (3) a technique's known or potential rate of error; (4) the existence and maintenance of standards controlling the technique's operation; and (5) "general acceptance."

Judge Pollak's analysis of these factors is what makes Plaza I such a compelling case, and he did not change his interpretation of Daubert in Plaza II, only its application. This is perhaps best understood by comparing Plaza I to United States v. Havvard, a fingerprint case in which the court held that fingerprint evidence satisfied the Daubert-Kumho reliability test. Indeed, that court described fingerprint expertise as "the very archetype of reliable expert testimony under those standards."

Havvard found that latent print identification has been "tested" for nearly 100 years in adversarial proceedings with the highest possible stakes: liberty and sometimes life. In contrast, Plaza I pointed out that Daubert requires scientific, not judicial, testing. This is the crux of the opinion, and Judge Pollak got it right: Daubert-Kumho requires scientific testing. The third case in the Daubert trilogy, General Electric Co. v. Joiner, illustrates this point. That case involved a toxic tort issue, whether PCBs caused small-cell lung cancer, and the Court examined epidemiological and animal studies. The focus was on science. The judge reaffirmed this analysis in Plaza II.

Peer review. Next, in addressing "peer review," Havvard noted that a second qualified fingerprint examiner can compare the prints: "In fact, peer review is the standard operating procedure among latent print examiners." This reflects a fundamental misconception of peer review as used in Daubert. Peer review means "refereed scientific journals." It is a screening mechanism and only the first step in the review process, to be followed by publication and replication by other scientists. Review by a second fingerprint expert is a type of quality control; it is not peer review by a scientific community.

This is especially a problem in forensic science because, as Judge Pollak noted in Plaza I:
Even those who stand at the top of the fingerprint identification field…tend to be skilled professionals who have learned their craft on the job and without any concomitant advanced academic training. It would thus be a misnomer to call fingerprint examiners a "scientific community" in the Daubert sense.

The fact that articles about fingerprints appear in the forensic literature does not necessarily mean that any of them report the results of scientific testing of the technique.

In Havvard, the court simply accepted the prosecution expert's assertion that the "error rate for the method is zero." However, the expert conceded that the ultimate judgment is subjective and that there is no minimum number of points of similarity required. In probing this issue, Judge Pollak wanted to discover the "practitioner error rate," because fingerprint examinations require a fingerprint examiner and the results of some fingerprint proficiency examinations have been disturbing.

In Plaza II the prosecution offered evidence of the FBI's proficiency testing: one "false positive" in 16 external tests taken by supervisory examiners. "The internal tests taken over the seven years numbered 431….In sum, the 447 proficiency tests administered in seven years from 1995 through 2001 yielded four errors, an error rate of just under 1%."

However, defense experts were highly critical of the way the tests were conducted. Allan Bayle, a fingerprint examiner at New Scotland Yard for 25 years and a Fellow of the U.K. Fingerprint Society, testified that the FBI tests were too easy. Two other defense experts were specialists in testing (psychometrics), not fingerprint examiners, but they too were highly critical of the FBI proficiency tests. They found that the test materials, uninformative attendant literature, and ambiguous testing conditions gave few clues as to what the test makers intended to measure. Both experts believed that the "stratospheric" FBI test success rate "was hardly reassuring; to the contrary, it raised 'red flags.'"

Another Daubert factor, governing standards, also proved problematic. Here Judge Pollak changed positions based on information gleaned from the rehearing and found fundamental uniformity in the standards used by the FBI and Scotland Yard. He also considered a British decision, Regina v. Buckley, which examined British fingerprint standards in detail, as well as parliamentary discussion of the issue in the House of Lords.

In the end, Judge Pollak concluded:
[T]here is no evidence that certified FBI fingerprint examiners present erroneous identification testimony…[or] that the rate of error of certified FBI fingerprint examiners is unacceptably high. With those findings in mind, I am not persuaded that courts should defer admission of testimony with respect to fingerprinting…until academic investigators financed by the National Institute of Justice have made substantial headway on a "verification and validation" research agenda…. To postpone present in-court utilization of the "bedrock forensic identifier" pending such research would be to make the best the enemy of the good.

 

Paul C. Giannelli is the Albert J. Weatherhead III & Richard W. Weatherhead Professor of Law, Case Western Reserve University, in Cleveland, Ohio.


This article is an abridged and edited version of one that originally appeared on page 33 of Criminal Justice, Spring 2002 (17:1).
