Peer Review and Beyond—A Deep Dive into the Data
By Cynthia D. Driscoll, Thomas S. Jones, and Charles H. Moellenberg Jr.
August 20, 2014

Because science takes center stage in legal disputes of all types, developing the tools to analyze scientific studies critically is essential. Although analysis of the published literature is often sufficient, sometimes a deeper dive into the data underlying the published results is warranted, particularly where results are controversial or contradictory. However, gaining access to the data can be challenging. One approach to consider is a rarely used provision of federal contracting law, Office of Management and Budget (OMB) Circular A-110. 2 C.F.R. pt. 215. It allows public access to data underlying published research findings that the federal government used to arrive at an agency action having the force and effect of law, even if the data are not in the government's possession.
Peer Review—Importance and Limitations
“Peer review,” broadly defined, is the evaluation of scientific, academic, or professional work by others working in the same field. Since the late 1960s, many scholarly journals have relied on the peer-review process to assess a draft publication’s scientific methodology, originality, and importance. Studies ultimately published in peer-reviewed journals are often accorded special deference, as if the process guaranteed scientific reliability and validity. Yet courts, academics, and legal commentators recognize that peer review provides no such guarantee.
The U.S. Supreme Court elevated peer review as an important factor for judges to consider when assessing the reliability of expert testimony. Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993). However, the Daubert Court and other courts have also acknowledged that peer review is not de facto evidence of scientific validity. See, e.g., id. at 594; David L. Faigman et al., Modern Scientific Evidence: The Law and Science of Expert Testimony § 1:23 (West 2009–2010 ed.).
Other commentators have noted that “peer review cannot be expected to guarantee truth, sound methodology, rigorous statistics, etc.” and provides “no guarantee that it is not flawed or even fraudulent.” Susan Haack, “Peer Review and Publication: Lessons for Lawyers,” 36 Stetson L. Rev. 789, 808 (2007). At best, peer review is some evidence of reliability, but it is not the last word. See, e.g., Valentine v. Pioneer Chlor Alkali Co., 921 F. Supp. 666, 671 (D. Nev. 1996). Other courts have acknowledged that the oft-expressed legal deference to peer review is out of step with academia itself. See, e.g., United States v. Mouzone, 696 F. Supp. 2d 536, 571 (D. Md. 2009).
An important limitation of peer review is that the reviewer typically does not have the data underlying the research, does not ask for the data, and in any event has neither the time nor the resources to reanalyze them. While it is customary in some disciplines for authors to make their data available for review, that is not the case in many others. The reviewer’s assessment is based only on the information that the authors chose to include in the draft. Even when questions are raised, the article’s authors will have far superior information with which to respond or to represent that the methodology and data support the article’s findings. A publisher may also choose to print an article despite peer reviewers’ concerns.
Consequently, peer review is far from a guarantee of an article’s scientific validity and reliability. The literature is awash with commentary on the serious flaws plaguing peer review, including high rates of unreproducible results, publication of fraudulent papers undiscovered by peer review, selective reporting, study publication bias, outcome-reporting bias, and substantive errors. See, e.g., John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine, Aug. 2005, at e124; Kerry Dwan et al., “Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias—An Updated Review,” PLoS ONE, July 2013, at e66844. Michael Eisen, a frequent commentator on the issue, has captured the essence of the problem:
First, and foremost, we need to get past the antiquated idea that the singular act of publication—or publication in a particular journal—should signal for all eternity that a paper is valid, let alone important. Even when people take peer review seriously, it still just represents the views of 2 or 3 people at a fixed point in time. To invest the judgment of these people with so much meaning is nuts.
Michael Eisen, “I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals,” Oct. 3, 2013.
As another commentator similarly remarked:
This is not to say that the peer-review system is worthless. But it’s limited. Peer review doesn’t prove that a paper is right; it doesn’t even prove that the paper is any good (and it may serve as a gatekeeper that shuts out good, correct papers that don’t sit well with the field’s current establishment for one reason or another). All it proves is that the paper has passed the most basic hurdles required to get published—that it be potentially interesting, and not obviously false. This may commend it to our attention—but not to our instant belief.
Megan McArdle, “Peer Review Is No Panacea,” The Atlantic (June 28, 2010).
Empirical studies of the power of peer review caution editors not to assume that reviewers will detect most major errors; courts and litigants should also take heed. Evidence of sloppy, flawed, and even fraudulent papers in the peer-reviewed literature abounds. A recent analysis published in Proceedings of the National Academy of Sciences found that approximately 67 percent of 2,047 studies retracted from biomedical and life-science journals resulted from scientific misconduct, and about 21 percent of the retractions were attributed to scientific error. Dariusz Leszczynski, “Opinion: Scientific Peer Review in Crisis—The Case of the Danish Cohort,” The Scientist (Feb. 25, 2013).
Analysis of specific flaws in the peer-review process provides clues for challenging peer-reviewed literature. Questionable science can escape detection during the peer-review process at multiple points along the way. Authors, reviewers, and publishers all play a role. As in any business, financial pressures, cronyism, bias, competition, and ambition are present in the scientific community. “Publish or perish” remains the foundation of academia’s reward system, affecting job opportunities, promotion, tenure, salary, respect, and speaking engagements. Reviewer constraints, such as limited time, lack of access to data, and inadequate expertise, stymie even those with the best intentions. Even commentators from some of the most prestigious science journals, e.g., Science, suggest that “peers” are hard to come by. The two or so unpaid reviewers who volunteer to review a paper in their spare time might not be up to date on the latest statistical-modeling methods or on all the disciplines represented in the multidisciplinary study under review. They cannot question what they do not understand.
Peer reviewers also cannot question what they do not see. Studies show that reviewers are much worse at spotting mistakes than they or others appreciate. See, e.g., Sara Schroter et al., “What Errors Do Peer Reviewers Detect, and Does Training Improve Their Ability to Detect Them?” J. Royal Soc’y Med., 2008, at 507–14. Selective reporting of details regarding the study design, analysis, and results is not readily apparent or necessarily knowable under typical peer-review conditions.
These process and methodological issues contribute to numerous problems with the scientific literature: methodological flaws, poor analysis, lack of replication, exclusion of inconvenient data, statistical mistakes, undetected errors, and inconsistencies. By some estimates, a majority of published papers report findings that turn out not to be true. See, e.g., Editorial, “How Science Goes Wrong,” and “Trouble at the Lab,” The Economist, Oct. 19, 2013, at 13 and 26–30, respectively; Eisen 2013.
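Ioannidis’s point is ultimately arithmetic: the share of statistically significant findings that are actually true depends on the prior odds that a tested hypothesis is true and on study power, not just on the p < 0.05 threshold. The following is a minimal, self-contained sketch of that positive-predictive-value calculation; the function name and example parameters are our own illustrative choices, not drawn from any particular study.

```python
# Positive predictive value (PPV) of a "significant" finding, following the
# structure of Ioannidis's argument:
#   PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
# where R is the pre-study odds that a tested hypothesis is true.

def ppv(prior_odds, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is true.

    prior_odds -- R, the ratio of true to false hypotheses being tested
    power      -- 1 - beta, the chance a study detects a true effect
    alpha      -- the Type I error rate (conventionally 0.05)
    """
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# A well-powered study in a field where 1 of every 11 tested hypotheses is true:
print(f"R = 0.10, power 0.80 -> PPV = {ppv(0.10):.2f}")        # ~0.62
# An underpowered, exploratory study where 1 of every 51 hypotheses is true:
print(f"R = 0.02, power 0.20 -> PPV = {ppv(0.02, 0.20):.2f}")  # ~0.07
```

On these assumptions, even flawless peer review of a methodologically clean paper cannot rescue a literature built on long-odds hypotheses and underpowered designs; in the second scenario, most “significant” results would be false.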
Recent reports of fraudulent or deliberately flawed papers have revealed major problems with the process. Problems occur not only in open-access journals but also in more traditional subscription journals. See, e.g., John Bohannon, “Who’s Afraid of Peer Review?” Sci., Oct. 4, 2013, at 60–65; Eisen 2013. No one has a greater incentive to look into potentially unreliable scientific reports than those involved in high-stakes litigation. Debunking expert opinions may turn on demonstrating the errors in the scientific studies that the expert published or has cited. For that, we turn to the raw data.
OMB Circular A-110 and FOIA
Records from federally funded research are governed by OMB Circular A-110, titled the Uniform Administrative Requirements for Grants and Agreements with Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations. 2 C.F.R. pt. 215. Although OMB regulations do not technically govern all federally funded research, the prominent federal sponsors of health research, the Environmental Protection Agency (EPA) and the Department of Health and Human Services (HHS), have adopted OMB Circular A-110 into their contracting regulations. See, e.g., 40 C.F.R. § 30.36 (EPA) and 45 C.F.R. § 74.36 (HHS). As a result, researchers receive federal funds conditioned on the access requirements found in OMB Circular A-110.
In 1999, Senator Richard Shelby and colleagues introduced a revision of OMB Circular A-110 through a provision attached to the Omnibus Appropriations Act for FY1999, Pub. L. No. 105-277. The Shelby Amendment, as it is sometimes called, revised OMB Circular A-110 to provide citizens with greater access to research data, even when the government does not have the data. OMB Circular A-110 now provides:
in response to a Freedom of Information Act (FOIA) request for research data relating to published research findings produced under an award that was used by the Federal Government in developing an agency action that has the force and effect of law, the Federal awarding agency shall request, and the recipient shall provide, within a reasonable time, the research data so that they can be made available to the public through the procedures established under the FOIA.
2 C.F.R. § 215.36(d)(1).
Under FOIA, 5 U.S.C. § 552 et seq., the U.S. government and its agencies are duty-bound to make available to the public upon request certain information gathered on behalf of the public. Although not technically part of FOIA, OMB Circular A-110 engrafts the procedural mechanisms of FOIA onto such a data request.
OMB Circular A-110 provides greater access to data, but certain limitations, similar to those found in FOIA, apply. It does not reach “[p]reliminary analyses, drafts of scientific papers, plans for future research, peer reviews, or communications with colleagues.” 2 C.F.R. § 215.36(d)(2)(i). Nor does it include “[t]rade secrets, commercial information, materials necessary to be held confidential by a researcher until they are published, or similar information which is protected under law” or information that “would constitute a clearly unwarranted invasion of personal privacy.” Id. “Published” is defined as either when “research findings are published in a peer-reviewed scientific or technical journal; or [a] Federal agency publicly and officially cites the research findings in support of an agency action that has the force and effect of law.” 2 C.F.R. § 215.36(d)(2)(ii).
Despite the potentially far-ranging applicability of OMB Circular A-110 for accessing data, it has rarely been used. See, e.g., Eric Fischer, Cong. Research Serv., Public Access to Data from Federally Funded Research: Provisions in OMB Circular A-110 (Mar. 1, 2013); Lynn R. Goldman & Ellen K. Silbergeld, “Assuring Access to Data for Chemical Evaluations,” Envtl. Health Perspectives, Feb. 2013, at 149.
The Journey—Stonewalls and Other Barriers
We invoked OMB Circular A-110 to obtain a data set underlying controversial findings reported in a published peer-reviewed article correlating very low blood-lead levels with declines in children’s IQ. Because the EPA relied on the study in adopting a new National Ambient Air Quality Standard for Lead, the study fell squarely within the scope of OMB Circular A-110. 2 C.F.R. § 215.36(d)(2)(ii). Citing OMB Circular A-110, we filed a request for the data in 2007.
The government denied the initial request, claiming that the agency action was not yet final. Even after the EPA agreed that the request satisfied the requirements of 40 C.F.R. § 30.36, two years elapsed as the agency shifted its duty to obtain the data to two HHS entities. Those entities, in turn, repeatedly delayed production of the data until they ultimately, sua sponte, reconsidered the EPA’s decision and refused to provide the data.
The HHS officer denied the request and asserted a claim of exclusion under an inapplicable provision, 45 C.F.R. § 74.36(d)(2)(i)(A) (exempting, under OMB Circular A-110, data that are “[t]rade secrets, commercial information, materials necessary to be held confidential by a researcher until they are published, or similar information which is protected under law”). The stated basis was inapplicable because the requested data were neither commercial in nature nor trade secrets, and no other law protected the data from disclosure. The National Institutes of Health (NIH) also considered, and denied, our request on an entirely different basis. NIH claimed that the information and data at issue were not produced pursuant to a federal grant, a position at odds with the publicly available facts.
Appeals were filed, only to be met with agency inaction. After two years, and no sign of the data, we sued the government and other relevant parties. After another year of wrangling in the courts, the research institution agreed to turn over data, and the federal district court ordered the agencies to produce the information that we had requested. Pohl v. U.S. EPA, No. 09-1480 (W.D. Pa. Dec. 2, 2010). Even then, data production was slow and incomplete. Ultimately, however, nearly four years after our initial request of the government, we had the data.
Use of the Data at Trial
Our long journey to get the raw data was rewarded. But getting there took a considered, deliberate, and patient strategy by a team well versed in the science. OMB Circular A-110 provided the key to unlocking the storehouse of data underlying the peer-reviewed study of an opposing expert. The raw data provided the basis for expert testimony ultimately used at trial to challenge the scientific reliability and validity of claims central to the opposition’s case. The data allowed us to criticize the validity of the study’s conclusions through testimony concerning, e.g., data-transcription errors, analytical inconsistencies, incomplete and inaccurate reporting of key findings, and the small subset of the sample population responsible for the study’s finding.
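To make the point concrete, here is a deliberately simplified sketch of the kind of re-analysis that access to raw data makes possible. Everything in it is synthetic and hypothetical: the variable names, values, and the four planted outliers are ours for illustration, not the study data or our experts’ actual analysis. It shows how a leave-one-out check can reveal that a small subset of observations drives a headline correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a published data set: 100 subjects with no true
# exposure-outcome relationship...
n = 100
exposure = rng.uniform(1, 10, n)
outcome = 100 + rng.normal(0, 5, n)  # flat: outcome unrelated to exposure
# ...except for four extreme observations planted to create one.
exposure[:4] = [28, 30, 31, 33]
outcome[:4] = [88, 86, 85, 83]

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

full_slope = slope(exposure, outcome)
print(f"Full-sample slope: {full_slope:+.2f}")  # apparent negative effect

# Leave-one-out influence: how far does dropping each point move the slope?
influence = np.array([
    abs(slope(np.delete(exposure, i), np.delete(outcome, i)) - full_slope)
    for i in range(n)
])
keep = np.ones(n, dtype=bool)
keep[influence.argsort()[::-1][:4]] = False  # drop the 4 most influential
print(f"Slope without them: {slope(exposure[keep], outcome[keep]):+.2f}")  # ~0
```

In an actual matter, the re-analysis would of course be performed by qualified experts using appropriate methods, but the principle is the same: a study’s conclusion should survive scrutiny of the observations that produce it.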
Science, and the testimony that flows from it, is only as valid as its foundation in the raw data and the methodology used to analyze it. Just as any eyewitness might be cross-examined on point of view, recollection of events, or bias, the data underlying scientific opinions provide the first, and potentially most fertile, area for inquiry. Access to the data allows re-analysis, which can either confirm or refute the author’s interpretation of those data. In short, the data are the ammunition. By using FOIA and OMB Circular A-110, those who must rebut expert testimony have a powerful weapon in the war on junk science.
Keywords: litigation, mass torts, OMB Circular A-110, FOIA, Freedom of Information Act, Daubert, expert witness