Let Data Speak, But Do Not Torture Them
The discovery of unexpected correlations, like that between the release of Nicolas Cage’s films and pool drownings, should come as no surprise. In his book The Improbability Principle, David Hand explains that what is at work is “the Law of Truly Large Numbers.” He defines the principle succinctly: “With a large enough number of opportunities, any outrageous thing is likely to happen.” In other words, if one looks hard enough, one may identify statistical coincidences that are neither causal nor predictive.
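To put rough numbers on Hand’s principle, consider a minimal sketch in Python (the event probability and the counts of opportunities below are hypothetical, chosen purely for illustration): even a one-in-a-million coincidence becomes a near certainty once there are millions of chances for it to occur.

```python
# Illustrative only: probability that a rare "coincidence" occurs at least
# once, given n independent opportunities: P(at least one) = 1 - (1 - p)^n.
p = 1e-6  # a hypothetical "one-in-a-million" coincidence
for n in (1_000, 1_000_000, 10_000_000):
    print(f"n = {n:>10,}: P(at least one) = {1 - (1 - p)**n:.4f}")
```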
The Law of Truly Large Numbers is also a reason to exercise caution when we analyze data and draw inferences from them. For example, with a large number of tests comparing the effectiveness of a drug with that of a placebo, it is almost guaranteed that at least one comparison will appear to show that a drug is “effective,” when, in fact, it is not. This type of spurious finding often results from a process known as data mining. Other colorful names for this concept include “data dredging,” “data snooping,” and “data torturing.” As one author put it, such practices are “the analytical equivalent of bunnies in the clouds, poring over data until you found something. Everyone knew that if you did enough poring, you were bound to find that bunny sooner or later, but it was no more real than the one that blows over the horizon.”
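To see how this happens mechanically, consider the following simulation, a minimal sketch using synthetic data rather than any real trial. Both the “drug” and “placebo” samples are drawn from the same distribution, so the drug truly does nothing; yet across 100 independent comparisons, roughly five t-tests will fall below the conventional 0.05 threshold anyway.

```python
# Minimal sketch: with many comparisons of an ineffective "drug" against a
# placebo, some tests are almost guaranteed to look "significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_subjects, alpha = 100, 50, 0.05

false_positives = 0
for _ in range(n_tests):
    drug = rng.normal(0.0, 1.0, n_subjects)     # no true effect:
    placebo = rng.normal(0.0, 1.0, n_subjects)  # same distribution
    _, p_value = stats.ttest_ind(drug, placebo)
    false_positives += p_value < alpha

print(f"{false_positives} of {n_tests} tests 'significant' at alpha={alpha}")
# Expect roughly alpha * n_tests = 5 spurious "discoveries".
```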
As another example of the danger of data torturing, consider the study that a team of neuroscientists once conducted on a salmon whose brain underwent an fMRI scan:
When they presented the fish with pictures of people expressing emotions, regions of the salmon’s brain lit up. . . . [H]owever, as the researchers argued, there are so many possible patterns that a statistically significant result was virtually guaranteed, so the result was totally worthless. . . . [T]here was no way that the fish could have reacted to human emotions. The salmon in the fMRI happened to be dead.
That dead salmon saved the researchers from some misleading discoveries. An economist is unlikely to have the benefit of such dead giveaways. Instead, the economist needs to take extreme care not to “cherry pick” findings just because they support the client’s or lawyer’s preferences.
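The salmon episode is the same arithmetic at a larger scale: an fMRI scan runs a hypothesis test at each of thousands of voxels, so even pure noise produces some uncorrected “activations.” A rough sketch follows (the voxel count, scan count, and threshold are illustrative, not taken from the actual study):

```python
# Rough sketch of the salmon problem: test many brain "voxels" containing
# pure noise; uncorrected, some will register as "active" purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_scans, alpha = 8_000, 30, 0.001  # illustrative values

noise = rng.normal(0.0, 1.0, (n_voxels, n_scans))  # a "dead" brain: pure noise
# One-sample t-test per voxel against a true mean of zero.
_, p_values = stats.ttest_1samp(noise, popmean=0.0, axis=1)
print(f"'Active' voxels at uncorrected p < {alpha}: {(p_values < alpha).sum()}")
# Expect about alpha * n_voxels = 8 false "activations" even in a dead fish.
```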
Data mining is not just a theoretical or academic concept. In fact, it has been alleged as a basis for excluding experts’ reports and testimony in several recent cases. In In re Processed Egg Products Antitrust Litigation, the plaintiffs’ expert used a regression model to relate prices to other factors. The defendants asked the court to disregard that model because, when it was estimated using only a subset of the data, specifically “just one certain [d]efendant’s transactions,” some aspects of the regression results changed. The plaintiffs countered that the defendants’ results were “the product of inappropriate ‘data mining.’” Judge Gene E.K. Pratter denied the defendants’ challenge to the model and found the plaintiffs’ data mining counterargument persuasive.
In another antitrust class certification case, In re Pool Products Distribution Market Antitrust Litigation, the plaintiffs filed a motion to exclude the testimony of the defendants’ expert, based in part on an argument that alleged data mining bias rendered the testimony unreliable. In particular, according to the court’s order, the defendants’ expert estimated the plaintiffs’ expert’s regression model using subsets of the data and argued that the results showed that “common factors do not predominate in determining pricing across the class.” The plaintiffs argued that applying their regression model to subsets of data was “impermissible ‘data mining’.” Citing various literature and case law, the court concluded that the defendants’ expert’s sensitivity check was “sufficiently reliable” and ultimately denied the motion to exclude the testimony.
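For readers curious what such a subset exercise looks like mechanically, here is a stylized sketch with synthetic data; it does not reproduce either expert’s model. The same price regression is fit on the full sample and on each hypothetical defendant’s transactions, and the coefficient of interest is compared across fits.

```python
# Stylized sketch of a subset "sensitivity check": fit the same price
# regression on the full data and on per-defendant subsets, then compare
# the coefficient on the variable of interest. Synthetic data throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1_000
defendant = rng.integers(0, 4, n)    # four hypothetical sellers
cost = rng.normal(10.0, 2.0, n)      # a cost-shifter control
effect = 1.5                         # true common effect on price
price = 5.0 + effect * cost + rng.normal(0.0, 1.0, n)

def cost_coefficient(mask):
    X = sm.add_constant(cost[mask])
    return sm.OLS(price[mask], X).fit().params[1]  # coefficient on cost

print(f"full sample:  {cost_coefficient(np.ones(n, bool)):.3f}")
for d in range(4):
    print(f"defendant {d}: {cost_coefficient(defendant == d):.3f}")
```

Because each subset is smaller and noisier than the full sample, some movement in the estimates is expected even when a common factor truly drives all prices, which is one reason experts and courts disagree about what such changes prove.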
Finally, in Karlo v. Pittsburgh Glass Works, an age discrimination case, Judge Terrence F. McVerry found one expert’s analysis of impact to be “improper” because it did not correct for “the likelihood of a false indication of [statistical] significance.” He added that it was “data-snooping, plain and simple.” The Third Circuit, however, vacated Judge McVerry’s ruling, stating, “We conclude that the District Court applied an incorrectly rigorous standard for reliability,” although it did not expressly refer to the alleged data-mining issue. Given the discussions in these cases, this important but subtle statistical concept will continue to receive well-deserved attention in the legal domain.
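The correction the district court had in mind can take several forms; one standard choice (not necessarily the one at issue in Karlo) is the Bonferroni adjustment, which compares each p-value to alpha / m when m tests are run. A minimal sketch with hypothetical p-values:

```python
# Minimal sketch of a Bonferroni correction: with m tests, compare each
# p-value to alpha / m rather than alpha. The p-values are hypothetical.
p_values = [0.003, 0.021, 0.047, 0.038, 0.012]  # five subgroup tests
alpha, m = 0.05, len(p_values)

for p in p_values:
    uncorrected = "significant" if p < alpha else "not significant"
    corrected = "significant" if p < alpha / m else "not significant"
    print(f"p = {p:.3f}: uncorrected -> {uncorrected}; "
          f"Bonferroni (alpha/m = {alpha/m:.3f}) -> {corrected}")
```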
Conclusion
At this point, the reader may have many additional questions. Indeed, time and space, as well as pedagogical goals, limit what this article can offer, but counsel’s interaction with their expert will have fewer limitations. Econometrics has become an indispensable and widely used tool in both merger control and antitrust litigation. By probing the questions discussed in this article, counsel can communicate the expert’s work clearly to the finders of fact and, at the same time, make the expert’s analysis more robust. Doing so can potentially reduce Daubert and other litigation risks. Conversely, failing to appreciate these key concepts could easily result in deeply flawed and misleading analyses.