February 27, 2019

Peering Behind the Curtain: A Closer Look at Peer Review and Predatory Journals

Resources and strategies that can help identify such publications.

By Bill Childs

The Daubert Court, in interpreting Rule 702 of the Federal Rules of Evidence, laid out various nonexclusive criteria for consideration in evaluating proposed scientific evidence, and one of them was peer review. As the Court put it, “[t]he fact of publication (or lack thereof) in a peer reviewed journal . . . will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.” Daubert v. Merrell Dow Pharms., 509 U.S. 579, 594 (1993). Peer review, or the absence thereof, was mentioned repeatedly by the New Jersey Supreme Court in endorsing Daubert in the recent decision in In re Accutane Litigation, 191 A.3d 560, 586, 592, 594 (N.J. 2018). Among other things, the court noted that the plaintiffs’ expert had not submitted “his ideas . . . for peer review or publication,” considering that failure to be a strike against his methodology. Id. at 572.

Compared with other Daubert factors (or those described in the subsequent comments to Rule 702), the presence or absence of peer review may seem binary—it is either there or not, and thus easier for a court to evaluate. Perhaps because of that perceived simplicity, it frequently gets less attention than it deserves. But peer review is not so simple, either in its traditional sense or in the changing world of things that are now called peer review. Litigants should treat peer review as more complex than it appears, and in some specific contexts, additional exploration—whether through discovery into your adversaries’ experts or early investigation of your own potential experts—may make sense.

Daubert versus Predator

One fascinating consequence of this consideration of peer review in the Daubert context is the potential for experts to publish litigation-related work in what are called “predatory journals” (sometimes also called “vanity publications”). See Kouassi v. W. Ill. Univ., 2015 WL 2406947, at *10–11 (C.D. Ill. May 19, 2015); Jeffrey Beall, “Predatory Publishing Is Just One of the Consequences of Gold Open Access,” 26 Learned Pub’g 79–84 (2013); John Bohannon, “Who’s Afraid of Peer Review?,” 342 Science 60–65 (Oct. 4, 2013).

Predatory journals, like the villain of the 1987 film Predator and its 2018 sequel, camouflage themselves. They make themselves look not like the Central American jungle background but like legitimate medical or scientific journals. Their publishers’ websites generally look like legitimate publishers’ websites (if sloppy at times), their PDFs look like “real articles,” and their submission process might even look normal. They claim to have peer review, editorial boards, and all the rest of what you expect from journals. Like the Predator, they even manipulate their editorial voices to sound like real journals.

These journals are, however, just aping the façades of real journals. They typically do not have legitimate peer-review processes—or possibly any review processes at all. Frequently, if an author pays the exorbitant fees, the submitted article will be published.

Myriad examples exist revealing such journals as frauds. My favorite is probably the publication, by the purported journal Urology & Nephrology Open Access Journal, of a case report of “uromysitisis”—an entirely fictional condition first referenced in Seinfeld as one from which Jerry claims to suffer after being arrested for public urination. The author of the intentionally nonsensical article—not a urologist, nor a medical doctor at all—later wrote publicly about his experience. After the article’s exposure as an obvious fake, and something that even the most casual of reviewers should have rejected, it was removed, but the “journal” is still up and publishing on the MedCrave site, described, a bit awkwardly, as “an internationally peer-reviewed open access journal with a strong motto to promote information regarding the improvements and advances in the fields of urology, nephrology and research.” A few years earlier, a computer scientist published an article consisting solely of the phrase “Get me off your [obscenity] mailing list,” with related graphs, repeated for eight pages. That journal remains in existence as well.

Such journals are largely set up to entrap new (and naïve) scholars under tremendous pressure to publish for promotion and tenure purposes, but they can also provide an opportunity for dubious expert witnesses to publish something they can cite as “peer reviewed,” especially as courts increasingly note the presence or absence of peer review. It is not news to many litigation experts that having peer review for some of their more outlandish assertions can improve the odds of their testimony being admitted. If an expert has in fact published in a predatory journal (and it can be shown that the expert knew or should have known the journal was predatory), that fact should count against the admissibility of the testimony.

Given the camouflage, it is fortunate that resources and strategies exist that can help identify such publications. Retraction Watch, published by the Center for Scientific Integrity and headed by science writer Adam Marcus and physician and writer Ivan Oransky, is not focused solely (or even largely) on predatory journals, but it offers an accessible look at the world of retractions “as a window into the scientific process.” Its authors keep an eye out for interesting developments involving predatory journals and scientific publications generally, and their coverage is what made me suspicious when, in one of my cases, an adversary’s expert’s article was published by a MedCrave journal (home to the Seinfeld article). Retraction Watch’s coverage of that article led to what I assume will be the only time in my career I have the chance to ask a PhD/MD whether he was familiar with Seinfeld and whether the show is, in fact, fiction, given that he had published in—and in fact was listed as an editor of—another MedCrave journal.

There is also a list of suspected predatory journals archived at Beall’s List. The appearance of a journal on that list is not conclusive evidence that it is predatory, but it is enough to raise questions. The removal of a journal from the Directory of Open Access Journals for “editorial misconduct” or for “not adhering to best practices” is another giveaway. Loyola Law School’s “Journal Evaluation Tool” can also provide a useful rubric, accessible to lawyers not trained in science, for evaluating whether a journal is likely legitimate. And your own experts can likely provide feedback about journals.

Most experts will not have published in predatory journals. But it is still worth the time to explore the question, especially about pivotal articles on which the experts are relying—whether the expert is your adversary’s or your own. Even if the publication offer was innocently accepted (i.e., even if the author did not realize he or she was publishing in a predatory journal), the publisher’s lack of rigor in evaluating the article should at a minimum eliminate any weight given to the peer-review factor. And if an author has intentionally published in such a journal, that should be the equivalent of an intentionally false statement in a curriculum vitae.

Not All Peer Review Is the Same

Of course, these relatively new faux journals are not the only way experts get published. Consider the most traditional form of peer review, in which a journal’s editors have outside reviewers—usually with their identities screened from the authors—evaluate the quality and originality of the work, confirming that the methodologies presented appear legitimate and that the conclusions reached are reasonable based on what is described. Those goals line up nicely with the goals of a Daubert analysis, so it is sensible for a court to look at such review as a potential indicator of reliability—indeed, that is why peer review is a factor in the first place.

But even if a proffered expert testifies to having followed a methodology that matches something in a peer-reviewed publication, it is often worth asking at least a few deposition questions about the review process, adding a line to your subpoena duces tecum requesting copies of any materials the author received relating to the review, or attempting some third-party discovery on the journals in question—though some courts may limit or refuse that discovery. See, e.g., In re Bextra & Celebrex Mktg. Sales Practices & Prod. Liab. Litig., 249 F.R.D. 8 (D. Mass. 2008) (granting protective order for nonparty medical journal publisher, expressing concerns about a chilling effect). The propriety of allowing such discovery is beyond the scope of this article, but I addressed it in more detail in “The Overlapping Magisteria of Law and Science: When Litigation and Science Collide,” 85 Neb. L. Rev. 643 (2007).

If you obtain peer-review notes, you may find that a reviewer recommended removing a conclusion that the expert is now presenting, or that the reviewer warned against a particular inference from what is in the article. Making it even easier, some journals—traditional and, more often, “open access”—now post their reviewers’ comments online. Even if you do not find anything relevant, most experts will readily concede that peer review reflects at most an “approval” of the overall approach and is not a guarantee that the conclusions are correct. And sometimes you will be able to establish that the study in question was based on flawed data or that the work done for litigation did not, in fact, use the same methodology as that in the publication. See, e.g., In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig., 2018 WL 5276431, at *11–13, *28, *34, *37–38, *50–51 (S.D.N.Y. Oct. 24, 2018) (rejecting expert’s reliance on “repudiated” open-access journal article by an author who did not disclose his retention as a plaintiff’s litigation expert); In re Viagra Prods. Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009) (reversing an initial denial of defendants’ Daubert motion after learning of flaws in underlying data and processing, noting that “[p]eer review and publication mean little if a study is not based on accurate underlying data”); Palazzolo v. Hoffman La Roche, Inc., 2010 WL 363834, at *5 (N.J. Super. Ct. App. Div. Feb. 3, 2010) (finding no abuse of discretion in excluding an expert’s opinion on the ground that the expert did not in fact use the methodology claimed to have been used in the underlying peer-reviewed study).

Sometimes, even in a more traditional context, the peer review that was performed was not what the Daubert Court likely pictured, particularly when the work at issue is outside the so-called “hard sciences.” In one publicized example, the review of a history-oriented book about the lead and vinyl chloride industries, written by frequent plaintiffs’ experts and published by the University of California Press, involved reviewers known to—and in some cases recommended by—at least one of the authors. See 85 Neb. L. Rev. at 660–63 (describing this situation; the original book website has since been removed). Whether or not that review was adequate for academic purposes, it was materially different from, say, the work of reviewers of a double-blind clinical trial, and the facts surrounding it seem plainly relevant to how much weight a court should give it under Rule 702 and Daubert. Without that discovery, the court might well never have learned what “peer review” meant in that context.

Consider also the scenario where an expert says that her methodology has gone through peer review but the article has not yet been published. Again, it may be worth pursuing more details, especially if the expert seems likely to cite to that review in defending her position. If it has not yet been accepted for publication, consider requesting a copy of the comments the expert received from the reviewers. If those comments are provided, they may be helpful; if their production is refused, the fact of that review should be rejected as a basis for admissibility.

What to Watch Out For

Fundamentally, the important thing is to look through your and your adversaries’ experts’ curricula vitae with care, especially as to articles directly on point with the issue you are addressing. It is not enough to think about what the articles say, and it is not enough to think to yourself, “Well, that sounds like a legitimate journal.” Look at the publisher’s site; look for hints in the article itself; and do some searches. Ask the expert a few questions about author fees and what the peer review entailed, and throw in a document request to see if there is something worth exploring further. And if you are dealing with what you think is a predatory journal, be ready to teach a judge what that means; as of this writing, no court has referenced “predatory journals” in a reported Daubert decision.
 

Bill Childs is a partner in Bowman and Brooke’s Austin, Texas, office.


Copyright © 2019, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).