March 01, 2017

Cultural Perceptions of Science: Trials and Regulatory Decision Making

Jim Wedeking and Brenten H. Williams

Scientists have long been trusted and admired, a perception that persists today. According to a 2009 Pew Research Center survey, 84 percent of the public had a positive outlook on science and scientists. Cary Funk, Public and Scientists’ Views on Science and Society, Pew Research Center (Jan. 29, 2015). Only the military and teachers were viewed as contributing more to society’s well-being. Seventy-nine percent of those polled agreed that scientific research has increased the quality of health care, food, and the environment. Id. These views may still, to some degree, reflect an afterglow from a perceived post-World War II golden era of scientific research, evoking the heyday of Bell Labs, the Vannevar Bush-inspired flood of government funding for research (particularly for space exploration and the Cold War), and rapid technological improvements in the American standard of living.

The public’s respect for scientists in the abstract, however, is only one part of a complex view of scientific research. When scientists actually speak to the public about environmental health issues, many report that “people will come to an issue with a great deal of fear, anger, and mistrust if they feel their concerns have already been mishandled.” Valerie Brown, Risk Perception: It’s Personal, Environmental Health Perspectives (Oct. 1, 2014). Given that so many adhere to a “better safe than sorry” view of environmental health risks, it takes very little for them to believe that risks are “mishandled.” Instances of potential public exposure to chemicals are often met with outrage and demands for “zero risk” solutions that provide “pure” air, water, food, or consumer products. Much of the communication between scientists and the public, whether undertaken directly or through public officials, consists of risk communication. Risk communication is “an interactive process of exchange of information and opinion among individuals, groups, and institutions.” National Research Council, Improving Risk Communication (Washington, D.C.: National Academy Press, 1989). Peter Sandman, a professor and consultant specializing in communicating risks to the public, describes how risk managers in academia, government, and industry use risk communication in attempting to change public perceptions of risk. This may involve explaining comparative risks, such as showing that one part per million of chemical “X” in the air or water may be far less hazardous to human health than consuming peanut butter, riding in a car, or being struck by lightning. Yet, despite the evidence behind these comparative risks, the public is rarely swayed that chemicals can be less hazardous than more commonplace risks.

Polls show vast differences of opinion on risk between the public and scientists. A poll comparing the public with members of the American Association for the Advancement of Science showed that the public is far more skeptical than scientists of the safety of genetically modified foods, childhood vaccines, foods grown with pesticides, nuclear power, and offshore oil and gas drilling. See Funk, supra. Given the gulf in beliefs, frustrated scientific commentators grasp for explanations. Some hypothesize that people filter scientific evidence through cultural lenses based on whether they hold hierarchical and individualistic or egalitarian and communitarian values. Dan M. Kahan, Hank Jenkins-Smith & Donald Braman, Cultural Cognition of Scientific Consensus, 14 J. Risk Res. 147 (2011). Others blame the internet for allowing “willful ignorance” to flourish. Lee McIntyre, The Attack on Truth, Chron. Higher Educ. (June 8, 2015). And, of course, political affiliations are routinely blamed for engendering “anti-science” outlooks. See Mischa Fisher, The Republican Party Isn’t Really the Anti-Science Party, The Atlantic (Nov. 11, 2013) (rounding up embarrassing “anti-science” exhortations from both major political parties).

Public Preconceptions about Science

What is missing in these analyses is that cultural perceptions of science are often filtered through cultural perceptions of corporations, which are the institutions most likely to apply science to everyday life in the form of medicines, consumer electronics, building materials, and food additives. This creates a tension between one institution, science, which 79 percent of the public believes has improved our society, and another, “big business,” in which only 21 percent of the public has confidence. See Jeffrey M. Jones, Confidence in U.S. Institutions Still Below Historical Norms, Gallup.com (June 15, 2015). Corporate scandals abound, although the most prominent of recent years are financial in nature. There are true corporate scientific scandals, such as the tobacco industry’s promotion of distorted research findings, see United States v. Philip Morris USA, Inc., 449 F. Supp. 2d 1, 146–384 (D.D.C. 2006), yet they are comparatively few.

Nevertheless, there is an abiding distrust of science in the hands of corporations. This could result, in part, from a popular culture in which corporations are frequently portrayed as villains using scientific research for profit or world domination, such as the Resident Evil franchise’s Umbrella Corporation or the Weyland-Yutani Corporation from the Alien series. While a single movie or book may not make a lasting impression, a steady, decades-long drumbeat of portraying corporations as villains using science against an unsuspecting public can accrete into cultural perceptions. Further, government agencies or officials that side with businesses on matters of public safety are often accused of being “bought” or “corrupted” by corporate interests.

Due to the intersection of science and business, the public views scientific issues with very different preconceptions from those of scientists. On mundane issues, most of the public will trust experts on scientific matters. Where scientists disagree, however, people are more likely to rely on their own nonscientific prejudices. As one writer summed up the problem, lay people are admonished to set aside their unsophisticated divinations and simply “trust the experts,” but when confronted with any matter of controversy—and, thus, one where experts disagree—they have no choice but to fall back on nonscientific means of interpreting scientific matters. Robert Herritt, Scientific Experts and Knowing What to Believe, 48 The New Atlantis 79, 80 (Winter 2016). Since the scientific method is not taught to the general public (and scientists and philosophers don’t agree on its definition or validity), people can fill that void with nonscientific preconceptions. And those preconceptions can be daunting for lawyers or other professionals who are trying to communicate risk.

Among those preconceptions is a fear of chemicals deemed to be “synthetic” or “unnatural.” Such sentiments can be inflamed by how some media outlets cover scientific studies. Driven by a public demanding definitive answers to scientific questions, the press responds with definitive declarations on research results. Unfortunately, the press’s constant need for eyeballs makes it resemble a carnival barker; combined with some writers’ marginal understanding of the studies they summarize, the result is scientific research reduced to cultural clickbait. See, e.g., Molly Rauch, 8 of the most toxic items you have in your home, Woman’s Day (Mar. 1, 2016); Meredith Engel, Microwaving food in plastic linked to diabetes, other problems: Study, New York Daily News (July 8, 2015); Helena Horton, Cuddling kittens can kill you, warn scientists, U.K. Telegraph (Sept. 19, 2016). This parade of “Everything Can Kill You”-style headlines can produce a subculture in which anything “synthetic” or “unnatural” is seen as a source of danger. When bombarded with dour news of how everything is “toxic,” some become primed to accept nothing but bad news and are extremely skeptical of explanations of why these risks are exaggerated.

Those not inspired to purge the “synthetic” and “unnatural” from their lives may still adopt a cynicism toward scientific research, as headlines also tout constant reversals in which unhealthy things become healthy again and vice versa. Such has been the journey of eggs, red meat, fat, sunlight, and many other things over the years. Meanwhile, readers are subjected to a deluge of health stories promoted precisely because they assault conventional wisdom. See, e.g., John Cloud, Why Do Heavy Drinkers Outlive Nondrinkers?, Time Magazine (Aug. 30, 2010); Sam Bailey, Pass the Easter Egg! New study reveals that eating chocolate doesn’t affect your Body Mass Index!, The Daily Mail (U.K.) (Mar. 31, 2015) (touting a hoax study promoted by science journalist and serial prankster John Bohannon). Add coverage of various scientific failures, scandals, and frauds, ranging from the inability to replicate widely cited studies to outright data fakery, and the public is whipsawed between “everything can kill you” and “everything we told you before was wrong.”

Bias also shapes cultural views of scientific research. In an era obsessed with disclosure, whenever industry research weighs in on scientific controversies, the press frequently deems industry funding to be the most important aspect of that research. This was never more evident than in the recent media fury surrounding a JAMA Internal Medicine article on research funded by a sugar trade association in the 1960s that pinned coronary heart disease on fat and cholesterol. See, e.g., Julia Belluz, How the sugar industry has distorted health science for more than 50 years, Vox.com (Sept. 12, 2016); Ashley May, Study: How the sugar industry lied about heart disease, USA Today (Sept. 13, 2016); Editorial, The sugar industry used Big Tobacco techniques, San Francisco Chronicle (Sept. 13, 2016). Missing from the avalanche of opprobrium was any actual claim that the research was wrong based on what was known at the time, or that it reached conclusions contrary to the government-funded conventional wisdom. Yet, the press commonly eschews covering the substance in favor of portraying industry research as lobbying by other means, a scheme to block health-protective regulations, or a contest pitting “profits” against “science.”

Presentation of Scientific Testimony in Litigation

Media coverage of this type can do more damage to the reputation of scientific research than purportedly biased research itself. An obsession with industry-funding bias can create a belief not only that industry-funded research is per se invalid, but that science itself is wholly subjective. Under this view, scientific work is a mercenary venture in which researchers, for enough money, can design studies that arrive at predetermined conclusions. As one of the few sober media observers on Sugar-Gate noted, it would be insane to believe that a smattering of industry-funded studies on dietary fat and sugar, swimming in an ocean of millions of studies on nutritional science, somehow seized control of academic consensus and government policy. Andrew Brown, Let industry fund science, Slate.com (Sept. 21, 2016). Yet, obsession over bias, and a concomitant neglect of the research’s substance, creates a belief that scientific research is inherently fraudulent. Or, as stated by a dissenting commenter on the Brown op-ed: “You can spin science any way you want.” Gordon Wagner, Comment on Let industry fund science, Slate.com, posted Sept. 21, 2016. This means that, when presenting competing scientific testimony to juries, the contest too frequently devolves into a morality play in which adversaries seek to identify “bad guys” with interests to protect. Needless to say, this makes communicating with jurors on scientific matters much more difficult.

Skewed public perceptions of science involve long-entrenched cultural influences requiring long-term cultural solutions. For defendants, the conventional wisdom is that the average juror lacks the capacity to grasp complex scientific concepts and will instead base decisions on fear and moral pleas. Making things worse, defense counsel often underestimate how much juries distrust their experts as biased. Counterintuitively, we believe that presenting complex scientific evidence, especially in a scenario involving dueling experts, requires a broader presentation to overcome cultural perceptions of scientific experts, not a briefer, sharper, or more forceful one. What follows are a few of the approaches that can aid lawyers representing corporations when presenting complex scientific disputes in litigation.

Dealing with Bias

First, don’t shy away from the topic of bias. In cases of dueling experts, opposing attorneys can portray defense experts as marionettes with diplomas, hired to provide a veneer of scientific cover for the misdeeds of their employers. This plays into the widespread cultural distrust of corporations. Rather than avoid the question of bias, or hope that strong testimony will dispel the issue, lay out both witnesses’ biases. This involves a positive argument and a negative argument that work in conjunction with one another.

The positive argument acknowledges that industry researchers have biases but that these biases play a valuable role in many scientific fields. Even research explicitly undertaken to protect an industry’s product or ingredient is valuable to the field itself, not just the company. When studies claim that an industry chemical causes harm, reviewing the validity of those claims, by checking for errors, probing methodological weaknesses, and attempting replication, is a core component of the scientific method. The scientific method espouses no principle that, once a study concludes that a chemical is harmful, all inquiry into that chemical must stop, compelling researchers to put down their pencils and accept the finding. Quite the opposite: confirmatory study is necessary. There is now a cottage industry devoted to reporting study errors, including the work of John Ioannidis, who found that the findings of many frequently cited medical research articles could not be replicated, and an entire website tracking the significant increase in retractions. See John P. A. Ioannidis, Why Most Published Research Findings Are False, PLoS Med. 2(8): e124 (2005) (discussing the proliferation of Type I errors); www.retractionwatch.com; see also Christie Aschwanden, Science Isn’t Broken, fivethirtyeight.com (Aug. 19, 2015). That industry has a motivation to ferret out errors is hardly an assault on science. Instead, it is vital to the process.
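
To make the Type I error point concrete, the following is a minimal sketch, in Python, of the arithmetic behind Ioannidis’s argument; the 0.05 significance threshold, the 80 percent statistical power, and the assumed shares of true hypotheses are illustrative choices, not figures drawn from his paper or from any study discussed here.

# Illustrative arithmetic only (assumed inputs): why a large share of
# "statistically significant" findings can be false positives when few of
# the hypotheses a field tests are actually true.

def share_of_real_findings(prior_true, alpha=0.05, power=0.8):
    """Expected fraction of significant results that reflect a real effect.

    prior_true -- assumed fraction of tested hypotheses that are true
    alpha      -- false-positive (Type I error) rate
    power      -- probability of detecting a true effect when one exists
    """
    true_positives = power * prior_true
    false_positives = alpha * (1 - prior_true)
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"If {prior:.0%} of tested hypotheses are true, roughly "
          f"{share_of_real_findings(prior):.0%} of significant findings are real.")

Under these assumed inputs, a field in which only one in a hundred tested hypotheses is true would see most of its “significant” findings turn out to be false positives, which is why replication, including replication motivated by industry self-interest, matters.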

Contrary to common belief, industry studies have no greater ability to deceive than others. Although juries may not understand much about a research paper’s content, it is critical that they learn that all published research contains key components: the hypothesis tested, the methodology used, the statistical analyses performed, the resulting data, and the conclusion drawn from the results. In other words, the old adage from primary school math classes holds true: everyone must show their work, regardless of funding source. Thus, bias is not hidden in some black box. Instead, the researchers’ methodologies and assumptions are in the open, either noted explicitly or evident from the choices made. The only true deception comes through fabricating data, a rare yet shameful practice that is hardly the exclusive province of industry-funded studies. See, e.g., Marcia McNutt, Editorial retraction, Science (May 28, 2015) (retracting a social science paper that used fabricated data discovered through an independent attempt at replication); Rajendra S. Kadam et al., Retraction of Hypoxia Alters Ocular Drug Transporter Expression and Activity in Rat and Calf Models: Implications for Drug Delivery, Mol. Pharmaceutics 12(7): 2259 (2015) (retracting a paper that relied on falsified liquid chromatography-mass spectrometry data). These falsifications are most often uncovered by other investigators attempting to replicate findings, something far less likely to happen if industry-funded research is driven from the field.

The negative argument on bias examines how, despite efforts to be deliberate and fair, all researchers, including the opposing expert, have their biases. Depending on the opposing expert and his or her affiliation, financial, ideological, and social biases can be explored. These can range from the need to obtain government research grants (which reflect political priorities) and to avoid displeasing grant funders, to the pressure to publish frequently in journals (which often favor finding “something” over “nothing”), to the desire to advance one’s career in harmony with the prevailing views of senior academics. Sometimes the prevailing orthodoxies become so entrenched that any study contradicting what is “true” can endanger careers. See, e.g., Coco Ballantyne, Five years after being fired from one post, sun exposure proponent keeps up the fight, Scientific American News Blog (Jan. 30, 2009) (discussing Dr. Michael Holick’s forced resignation from the Boston University Dermatology Department for publishing research on vitamin D deficiency that conflicted with the department head’s beliefs).

Entrenched orthodoxies create an unorthodox conflict of interest and can strongly bias researchers toward preserving certain findings. For instance, affiliations with groups that undertake lobbying or other policy activities can limit the conclusions an expert can reach. Some may aggressively argue that even studying certain issues should be forbidden. See Gary Taubes, The (Political) Science of Salt, 281 Science 898, 899 (Aug. 14, 1998) (discussing opposition by a National Institutes of Health division director to even acknowledging differing opinions on salt consumption or else it “play[s] into the hands of the salt lobby” and “undermine[s] the public health of the nation”). Personal investment in research outcomes also imposes strong biases. Some researchers are always looking for the next “toxic” chemical, food, or product. Where scientists stake a significant portion of their careers on findings of harm, acknowledging contrary research could destroy their reputations and careers. Some choose to go down swinging even when the tide of research has turned against them (hence the quip that science progresses one funeral at a time). Attorneys should think hard about what constrains an opposing expert’s opinions, marking off territory that self-interest prohibits the expert from entering. If the opposing attorney wishes to paint your expert as a “hired gun,” you can show the opposing expert’s biases as well.

This would seem to portend a stalemate, with the best-case scenario leaving the jury to pick between two equally defiled experts. But unlike opposing counsel, who may be content to rest on accusations of monetary bias, have your expert talk through how the profession functions despite, and even because of, its biases. For instance, many believe in the principle that science is self-correcting: if a finding is wrong, others will attempt to replicate the study until a critical mass of researchers corrects the problem. This can happen only when scientists with different points of view test the validity of research. Where the political priorities of government funding or academic orthodoxies converge on a certain conclusion, important questions of scientific research would go untested if not for industry-funded research. And, of course, anti-industry biases are needed to check industry-funded research. Eventually, after many years and dozens of studies, enough research accumulates that the biases wash out and a general consensus congeals. This may help the jury understand that bias is not necessarily malignant and can actually aid the pursuit of knowledge.
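
As a purely illustrative way to present the “biases wash out” idea, the short Python simulation below assumes a question with no true effect that is studied by two camps whose methods tilt results in opposite directions; the bias sizes, noise level, and study counts are invented for illustration and are not drawn from this article or any real body of literature.

# Illustrative simulation (all parameters assumed): each study's estimate is
# tilted by its authors' leanings, but as studies from differently biased camps
# accumulate, the pooled estimate settles near the true effect (zero here).

import random

random.seed(42)
TRUE_EFFECT = 0.0
CAMP_BIASES = (+0.2, -0.2)   # two camps whose methods tilt results opposite ways
NOISE_SD = 0.5               # sampling noise within each individual study

def run_study(bias):
    """One study's estimated effect: truth, plus the camp's tilt, plus noise."""
    return TRUE_EFFECT + bias + random.gauss(0, NOISE_SD)

estimates = []
for i in range(100):
    estimates.append(run_study(CAMP_BIASES[i % 2]))   # camps alternate publishing
    if len(estimates) in (5, 20, 100):
        pooled = sum(estimates) / len(estimates)
        print(f"After {len(estimates):3d} studies, pooled estimate = {pooled:+.3f}")

The point is not that any of these numbers is realistic; it is that disagreement among differently motivated researchers, rather than the absence of bias, is what pulls the accumulated body of research toward the truth.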

Highlighting Consensus among the Experts

Second, highlight consensus among the experts in order to better emphasize the substance of their disagreements. Expert witnesses can be extremely combative, vociferously contesting two or three areas of disagreement. This may be where jurors are most likely to tune out, believing that the extent and stridency of disagreement are the product of monetary bias (i.e., this is what the hired gun was hired to shoot at). This can be alleviated, however, by spending more time on what the experts agree upon, something often glossed over. Demonstrating that the dueling experts actually agree on much, including fundamental aspects of their fields, can make the disagreements appear more civil and a matter of genuine professional divergence. Highlighting agreement particularly benefits defense experts, who are more likely to be drawn from private consulting firms, because it establishes them as legitimate members of the scientific field rather than outsiders chasing paychecks.

Third, peer review requires an explanation. For many, including judges, the concept of peer-reviewed research is misunderstood. The U.S. Supreme Court described peer review as “a component of ‘good science,’ in part because it increases the likelihood that substantive flaws in methodology will be detected.” Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993). Although Daubert recognized the limitations of peer review, jurors may not. Many mistakenly believe that peer review involves other scientists painstakingly evaluating a paper’s findings and publishing it only if they substantively agree. Few understand that reviewers look mainly to evaluate an article’s relevance to the journal, review basic methodology, filter out overly speculative interpretations, improve readability, and catch plagiarism. Reviewers do not necessarily endorse a paper’s conclusions. While this process may produce higher-quality papers, jurors can misinterpret it to mean that only “correct” findings are published. Some experts promote this misinterpretation, for example by referring to peer review as the “gold standard” for scientific research. This is why some plaintiffs’ attorneys have chosen to have their litigation experts’ theories published in peer-reviewed journals before they file suit. Unfortunately, as many scandals, retractions, and failed attempts to replicate research have shown, the institution of peer review is facing a crisis of confidence.

Where a single peer-reviewed study becomes critical to a case, jurors need to know what “peer review” actually means. In reality, a unique finding from a single peer-reviewed study has little use except as the subject of follow-up study, with confidence building gradually over repeated rounds of research. Although an attack on the institution of peer review would be both fruitless and unfair, the recent problems with peer review can help counsel test the robustness of an expert’s key study. Your own expert can guide you through the common pitfalls exposed over the past decade or so, such as publishing in “predatory” pay-to-publish journals, data dredging, HARKing (hypothesizing after the results are known), over-reliance on a 0.05 p-value threshold, and various statistical errors.
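
For counsel who want to see what data dredging looks like in practice, here is a minimal Python simulation; the group sizes, the number of outcomes searched, and the use of a simple two-sample t-test are assumptions chosen for illustration, not features of any particular study.

# Illustrative simulation (assumed parameters): comparing two truly identical
# groups on 20 unrelated outcomes will, by chance alone, frequently yield at
# least one comparison with p < 0.05 that could be written up as a "finding."

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_group, n_outcomes, n_studies = 50, 20, 1000

dredged_hits = 0
for _ in range(n_studies):
    found_significant = False
    for _ in range(n_outcomes):
        # Both groups are drawn from the same distribution: no real effect exists.
        group_a = rng.normal(0.0, 1.0, n_per_group)
        group_b = rng.normal(0.0, 1.0, n_per_group)
        if ttest_ind(group_a, group_b).pvalue < 0.05:
            found_significant = True
    if found_significant:
        dredged_hits += 1

print(f"Null studies that could still report a 'significant' result: "
      f"{dredged_hits / n_studies:.0%}")

With twenty independent looks at pure noise and a 0.05 threshold, roughly two out of three such studies will stumble on at least one “significant” result (1 - 0.95^20 is about 0.64), which is why an expert should ask whether a key study pre-specified its hypothesis and corrected for multiple comparisons.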

If significant flaws surface in an expert’s key peer-reviewed study, your own expert witness will need to take a broader approach in his or her testimony. In addition to the substantive critiques, jurors will need to learn how a “gold standard” peer-reviewed paper could contain errors. After all, it’s not just your expert versus theirs; it’s also your expert versus the peer reviewers and editorial board of a well-respected journal. Without testimony on the documented problems with the peer review process, your expert could be seen as a discordant maverick, taking on a respected institution alone. As with bias, it is important to present these shortcomings as the product of normal, unintentional, and frequently unavoidable human error that is well understood and eventually overcome through repeated study of the same phenomenon.

Communicating Scientific Evidence to Jurors

The last key point is to slow down expert testimony. This seems counterintuitive, but that instinct is itself part of a failure to connect with juries on scientific issues. Lawyers often think that jurors are simply incapable of comprehending complex scientific testimony. Instead, jurors carry a multitude of cultural misperceptions about science that contrast sharply with what they see in a courtroom, leading them to tune out. For instance, television dramas such as CSI create fantastically unrealistic expectations about what real forensic science can reliably accomplish. See Hon. Donald E. Shelton, The ‘CSI Effect’: Does It Really Exist?, NIJ Journal No. 259 (Mar. 2008); Brad Reagan, CSI Myths: The Shaky Science Behind Forensics, Popular Mechanics (Dec. 17, 2009). Although popular culture expects science to unequivocally answer phenomenally tough questions, the reality is that researchers can’t even agree whether salt is bad for us. See Melinda Wenner Moyer, It’s Time to End the War on Salt, Scientific American (July 8, 2011).

Thus, an expert witness’s first goal should be to establish more realistic expectations for the jury. This means more than a perfunctory explanation that “the dose makes the poison.” Depending on the specifics of the case, slowly build a more complete overview of the scientific field, who studies it, and why. As in a television show, establish backstories about the field or the substance at issue, such as long-held controversies in the field, the causes of disagreement, why they may have endured for decades, and common limitations of the types of studies performed. This not only gets across the basic point that “science is hard,” but also establishes that differences of opinion, although they may be fraught with biases and almost clan-like quarrels, are honest and not manufactured just for litigation. Further, it casts your expert as humble, a valuable trait when many experts appear condescending and brimming with absolute confidence.

Communicating scientific evidence to jurors requires a better understanding of the potential range of opinions they hold. For lawyers and the scientists they work with, this means grappling with cultural views unmoored from principles of either law or science. Instead of assuming that you work from a blank slate, create strategies for countering popularly held views about bias, the scientific method, and how scientific institutions function, and even outright paranoia. We think this is best done with a “more is more” approach that allows a jury or other lay audience to understand that scientific research is a fundamentally human venture, one that includes the whole gamut of human flaws.

Jim Wedeking and Brenten H. Williams

Mr. Wedeking is counsel with Sidley Austin LLP’s environmental practice group in Washington, D.C. He may be reached at jwedeking@sidley.com. Mr. Williams is a senior associate health scientist with Cardno ChemRisk in Brooklyn, New York. He may be reached at brenten.williams@cardno.com.