Scientific Evidence in Environmental Litigation

Vol. 30 No. 3

By Kevin Sali

Mr. Sali has his own legal practice in Portland, Oregon. He may be reached at kevin@salilaw.com.

Scientific evidence is often the centerpiece of an environmental case, as it can be the most powerful evidence of a defendant’s conduct and its effects. It can also be some of the most controversial evidence in such a case; in both governmental and private litigation, scientific evidence and the conclusions drawn from it tend to be hotly contested. Add in the high degree of deference often given to expert witnesses by judges and jurors and the lack of comfort some lawyers have in working with this type of evidence, and it is easy to see how the scientific battlefield can present some of the greatest challenges in environmental litigation.


The good news is that the scientific portion of the case does not have to be a black box, and it does not have to be handed off wholesale to the proverbial “battle of the experts.” It is entirely possible for attorneys to master the scientific evidence in their cases and to work with that evidence with the same level of confidence they have regarding other types of evidence.

The key task in any case is to assess what each particular piece of scientific evidence is being used to prove and how well it supports that proposition. For example, several years ago, I represented a client in a felony water pollution case in which the state alleged that my client was polluting the nearby water through his business operations, which involved fruit juice production.

A significant portion of the prosecution’s evidence was based on measurements of biochemical oxygen demand (BOD). To prove that my client’s business operations were polluting the surrounding water, the state relied on two types of BOD measurements: the BOD of the stormwater leaving a discharge point on my client’s property and the relative BOD values of the recipient water body upstream and downstream of that discharge point. The prosecutors based this part of their pollution argument on the assertion that the BOD of the discharged water was higher than it should be for legitimate stormwater and that the BOD of the recipient water downstream of the discharge point was higher than that upstream.

For this to be a valid argument, what would we have to know or assume? At a minimum, we would need to know that (1) the BOD method in general is sufficiently trustworthy to support the type of forensic determinations at issue in my client’s case; (2) what it measures is something relevant to the case (i.e., it measures the type of pollutant that my client was suspected of discharging); (3) the method was performed appropriately in this case, producing accurate and reliable results; and (4) at the interpretive level, those results reasonably support the conclusion that the water coming off my client’s property contained an inappropriate level of some pollutant.

A similar set of necessary assumptions would accompany any other conclusion purportedly based on scientific measurements, and it is worthwhile considering how to test these assumptions. An attorney’s first exposure to the scientific evidence in an environmental case may be in the form of technical results and corresponding conclusions. We might see, for example, a government analyst’s or private expert witness’s report stating that the result of the testing is X, which shows that the client must be polluting with substance Y. As attorneys, we might instinctively assume that these results and conclusions are probably correct; after all, these analysts know more than we do, and they would not be putting these things in formal-looking reports if they were not pretty sure of what they were saying.

That approach would be a huge mistake. In my experience, significant questions as to all of the types of points and assumptions outlined above arise with troubling frequency in environmental cases. I used to think this was the exception; as time goes by, I am beginning to think it is more likely the rule.

Take the BOD example. Again, at a minimum, given its use in my client’s case, I would have expected to learn that it was a proven, reliable method of producing meaningful conclusions relevant to the type of pollutants at issue in that case—specifically, certain “organic” components relating to fruit juice production.

In fact, it quickly became quite clear that it was not. When we delved into the technical literature, we saw phrases such as “extreme variability in test results”; “no way to include adjustments or corrections to account for the effect of [a series of factors known to affect BOD results]”; and “no method for establishing bias of the BOD procedure.” At an even more basic level, we learned that the particular type of BOD measurement used by the state’s analysts was considered “generally . . . not useful for assessing . . . organic material”—which, again, was the type of material involved in our case. These statements were not taken from a defense expert’s rebuttal report—this was the government’s own technical literature describing the limits of its chosen techniques. Right out of the gate, then, it appeared that the key scientific evidence at the center of the state’s case was simply incapable of supporting the state’s conclusions.

As time has gone on, I have found that this disconnect between the true quality of scientific evidence and the weight it is asked to bear in environmental cases is disturbingly common. There are several reasons for this.

First, the types of measurements at issue in environmental cases tend to be inherently challenging ones. We tend to think of scientific measurements as being the way they look on the television show CSI. An analyst pops a sample into a machine, punches a few buttons, and bam—the answer we are looking for shows up on the screen. Some forensic techniques—for example, DNA analysis and chromatographic tests for certain drugs and other substances—can indeed approximate that model.

Other methods, including many environmental methods, are much hazier. These methods can be difficult for a number of reasons, including the complex composition of most “real-world” media such as water or soil samples and the fact that many of the tests used are relatively nonspecific. The result is that environmental measurements often don’t support CSI levels of specificity or accuracy.

In addition, many of the methods used in environmental litigation were not originally developed for forensic purposes. The significance of this fact becomes apparent when such methods are compared to others that were developed for such purposes.

The more traditional forensic disciplines have taken a beating in recent years. In 2009, the National Academy of Sciences issued a comprehensive report highly critical of several fields, and more recently, the FBI’s microscopic hair comparison unit had a highly publicized fall from grace. There are good reasons for these criticisms. Some of these traditional methods rest on dubious principles and were developed and administered by agencies with highly partisan interests, with predictable results. (“Good news, Detective—yet again, my highly subjective analysis does support your case theory!”)

At the same time, because these disciplines were designed for courtroom use, they were developed with that use in mind—with, at least in theory, attention to what could legitimately be “proven,” safeguards to avoid error, and an assessment of potential sources of uncertainty.

By contrast, some of the methods that end up in court in environmental cases were initially developed for nonforensic purposes and ultimately “grew up” around such purposes. For example, a particular environmental measurement may have been developed to aid in preliminary assessments of whether a particular water body needed further attention or to identify areas with potential health risks. Such methods might be entirely appropriate for such purposes but may be insufficiently precise or accurate to meet the more demanding standards for use in litigation.

Similarly, measurements designed for nonlitigation purposes may have built-in biases that are appropriate for their original purposes but inappropriate for courtroom use. For example, along with BOD, my water pollution case involved measurements of total suspended solids. A review of the corresponding procedures suggested that the results would be biased toward artificially high measurements. When asked about this on cross-examination, the state’s analyst readily agreed, explaining that “[i]f you’re going to make an error, you want to always err on the side of—most likely there’s pollution. Because you’re protecting human health.” Again, that is an entirely appropriate approach for measurements designed to identify and remedy public health risks—but not for measurements that could establish criminal guilt or civil liability in a courtroom.

Of course, even if a method is generally reliable, it still has to be applied correctly in an individual case. And this, too, is far from a sure thing, for a number of reasons. Aside from the ever-present possibility of ordinary human error, there is the fact that the particular tests at issue may have been performed before anyone had reason to know that the results would someday be used in litigation.

For example, an environmental case may involve historical data such as routine periodic measurements taken by a business pursuant to permit requirements. Because of practical realities, the technicians—whether in-house analysts, outside consultants, or government employees—may not have subjected each individual measurement to the type of rigor and scrutiny that would be applied if litigation were anticipated. Again, this is something I have come across in my own practice—highly informal testing done in “peacetime,” with the results later playing potentially pivotal roles in the ensuing litigation that no one expected at the time.

Additionally, one key type of error—sampling error—can arise at a stage before what we often think of as the “testing” even begins. In many measurements, the analysis is performed on one or more samples, with the understanding that the results can be extrapolated or generalized to tell us something about the body from which the samples were taken. In assessing water, for example, a tiny fraction of the relevant water body is analyzed; but the result from that fraction is meaningful only to the extent that it accurately represents the body of water as a whole.

Whenever this type of measurement is done, the selection and collection of the samples is a critical, though often overlooked, part of the process. The samples must be representative—that is, it must be reasonable to assume that the samples’ properties are indicative of those of the broader body. (To use an obvious illustration: If we wanted to determine the average height of adult American males, we wouldn’t use a professional basketball team as our sample set.)

Moreover, it is not enough for the samples to be representative—they also must be collected, handled, and stored in a way that preserves both their individual properties and their representativeness until they are analyzed. This can be particularly critical if the relevant property is something that tends to change relatively quickly over time. Organic material, for example, often decays fairly rapidly. Representativeness can also be affected by how stably the relevant analyte is dispersed throughout the sample. For example, when a soluble substance such as salt is dissolved in water, it will generally tend to stay dissolved over time and will be evenly dispersed, so that the concentration of salt will be approximately the same throughout the sample. By contrast, insoluble materials, such as suspended solids, will not necessarily be evenly dispersed and may also tend to settle out over time, making it more difficult to ensure that the analysis of the sample accurately represents the composition of the relevant water body.

In some circumstances, the very nature of the event that leads to the litigation will make proper sampling difficult. This can be an issue in, for example, cases involving asbestos in building materials. Many asbestos cases are initiated after a demolition or some other disruptive event has taken place. This makes it difficult to follow any systematic sampling procedure, with the result being that any ensuing measurements may be suspect.

Additionally, beyond sampling, the analysis itself will be performed (at least at some level) by human beings who can make mistakes. I have seen errors as basic as using the wrong method for a particular purpose, mathematical miscalculations, and even an analyst forgetting to add the sample to a reaction mixture. Of course, if any underlying step is performed improperly, the resulting measurement can be skewed.

But we are still not done. Even assuming that the method is generally reliable and was properly applied, the particular conclusions drawn from any analysis must still be carefully scrutinized. This is true both qualitatively and quantitatively.

Qualitatively, the conclusions that can be drawn from a measurement depend on how specific the measurement is. Some tests—for example, certain chromatographic methods—are relatively specific, responding only to individual substances (or small groups of closely related substances). If such a test is performed properly, the results can legitimately be interpreted as showing the presence or concentration of the respective substance with some reasonable degree of certainty.

Other tests, however, are comparatively nonspecific. For example, the BOD test mentioned above will “pick up” (i.e., measured values will be increased by) a wide variety of substances—including just about any kind of plant- or animal-related matter, along with iron and various other inorganic substances. Other water measurements such as conductivity and suspended solids can be similarly nonspecific. Accordingly, an elevated value for such a measurement may not provide much evidence of the presence or concentration of any particular component.

Many measurements also will have to be quantitatively meaningful in order to be significant—in other words, the actual measured values must say something relevant about the factual question at issue. This will depend on factors such as a measurement’s uncertainty and any applicable baseline levels.

Uncertainty is an inherent aspect of any technical measurement. Regardless of the type of test at issue, it is scientifically inaccurate to describe a result in terms of a single number, such as “The concentration of component X was 4 parts per million.” Instead, an accurate description will include the uncertainty associated with the measurement; for example, “We are 95 percent confident that the concentration of component X was between 3.6 and 4.4 parts per million.”

This is necessary because the reader needs to understand the true significance of any reported number. Environmental testing methods vary enormously in their uncertainty levels. Some are relatively low, on the order of plus or minus 10 percent, as in the (overly simplified) 3.6–4.4 example above. Others can have uncertainties several times the measured value itself—such that, for example, a measured value of 4 might allow an analyst to conclude only that the actual value is probably somewhere between 0 and 40. Understanding uncertainty is obviously critically important when, for example, deciding whether a measured value of 4 parts per million (ppm) is sufficient to support a finding of a violation where the permissible level was 3 ppm.
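To make that arithmetic concrete, the following is a minimal sketch, using purely hypothetical replicate values and a simplified normal-approximation interval rather than any laboratory's actual procedure, of how a measured value near a permit limit might be evaluated against its own uncertainty.

```python
# Illustrative sketch only: hypothetical replicate values and a simplified
# normal-approximation interval, not any laboratory's actual procedure.
from statistics import mean, stdev

replicates_ppm = [3.7, 4.3, 3.9, 4.1]  # hypothetical replicate measurements
limit_ppm = 3.0                        # hypothetical permissible level

m = mean(replicates_ppm)
# Standard error of the mean; 1.96 gives a rough 95 percent interval
# (a real analysis would use a t-multiplier and a validated uncertainty budget).
se = stdev(replicates_ppm) / len(replicates_ppm) ** 0.5
low, high = m - 1.96 * se, m + 1.96 * se

print(f"Measured {m:.2f} ppm; approximate 95% interval {low:.2f}-{high:.2f} ppm")
print("Entire interval above the limit?", low > limit_ppm)
```

With a tight uncertainty, the entire interval sits above the 3 ppm limit and the comparison is meaningful; with an uncertainty several times the measured value, as described above, the same reported number of 4 ppm would prove essentially nothing.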

In addition, depending on the type of analysis being done, the baseline amount of the relevant analyte may be a necessary part of the analysis. For example, a water pollution allegation may be based on evidence that some property of the surrounding water has increased. If so, it is obviously necessary to know what that property would be in the absence of pollution—not only the average value, but the range that might exist as a result of natural ebbs and flows. Tying together some of the concepts discussed above, the measurement value should only support an inference of pollution if (1) the measured value is outside that expected range; (2) the difference is meaningful in light of the uncertainties associated with both the measured value and the expected range; and (3) the test is sufficiently specific to connect the observed results to some substance relevant to the case.
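That three-part screen can also be expressed as a short checklist. The sketch below uses hypothetical numbers and a simple additive treatment of the two uncertainties, and the method_is_specific flag merely stands in for the qualitative specificity judgment discussed earlier; it is an illustration of the reasoning, not a validated procedure.

```python
# Minimal sketch of the three-part screen described above, using hypothetical
# numbers; method_is_specific stands in for the qualitative specificity judgment.

def supports_pollution_inference(measured, measured_unc,
                                 baseline_low, baseline_high, baseline_unc,
                                 method_is_specific):
    """Return True only if all three conditions are met: the value falls outside
    the expected natural range, the gap exceeds the combined uncertainties, and
    the test is specific enough to tie the result to a relevant substance."""
    outside_range = measured > baseline_high or measured < baseline_low
    # Size of the excursion beyond the expected range (zero if inside it).
    gap = max(measured - baseline_high, baseline_low - measured, 0.0)
    meaningful = gap > (measured_unc + baseline_unc)
    return outside_range and meaningful and method_is_specific

# Hypothetical downstream reading of 12 against a natural range of 2 to 8,
# with generous uncertainties assigned to both the reading and the baseline.
print(supports_pollution_inference(12.0, 1.5, 2.0, 8.0, 2.0, method_is_specific=True))
```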

The baseline determination can be difficult, especially when the media at issue do not have useful comparators available. In my water pollution case, for example, one allegation was that the stormwater coming off of my client’s property had “high” measurements for BOD and various other values and, therefore, must have included impermissible contaminants from the client’s business operations. When pressed, however, none of the government’s witnesses could articulate what the ranges for those values would have been for the stormwater absent the alleged pollution.

Finally, along with all of this, it must be said that any particular expert witness’s own qualifications are fair game for inquiry. Particularly in light of the wide range of tests that may be used in environmental litigation, the “expert” in a given case may not have the full breadth of specifically relevant expertise that one would normally expect from a litigation expert. In my case, for example, the prosecutors wanted to call a purported expert to explain how the presence of algae downstream from my client’s operations was evidence of pollution. When we challenged his qualifications, he admitted that he had never read a single publication relating to this topic; his alleged expertise was based in part on “you know, kind of first-hand experience of, you know, visiting places like an aquarium where you see algae and stuff growing.”

With all of this in mind, it is clear both that scientific evidence can be a real turning point in an environmental case and that its depth and complexity call for serious, dedicated attention from the lawyers litigating that case. Over the years, I have come to believe strongly in a set of principles for dealing with scientific evidence in environmental and other cases, including the following:

Do not assume anything. As I mentioned above, our natural deference to scientists and experts can tempt us to assume, when presented with a set of measurements purportedly establishing some fact, that those measurements and the corresponding conclusions are valid. But making such an assumption can be a big mistake. For the reasons discussed above and others, even apparently qualified and well-intentioned analysts may reach conclusions that are open to serious questions on multiple grounds.

This is true even when the people in question seem like they ought to know their stuff. The water pollution case discussed above, for example, involved a virtual army on the government’s side, with representatives of the State Attorney General’s Office, the State Department of Environmental Quality, and the U.S. Environmental Protection Agency. Do not be intimidated, and do not simply assume that the people involved have done everything correctly on the technical side or that their interpretations of the data are legitimate.

Learn the science. In my opinion, every lawyer with a case that may turn on a scientific evidence issue needs to have a solid understanding of the technical principles underlying that evidence. This is not something that can be “outsourced” to experts and consultants—those people can provide invaluable assistance, but they are no substitutes for the lawyer’s own mastery of the subject matter.

The lawyer needs to understand that evidence well enough to work it into his or her overall case theory and narrative and to identify particular facts that make the evidence look more or less compelling. If a procedural misstep is identified in our own or our adversary’s testing, for example, we need to know how that might affect the validity of the ultimate results. If a sample is kept for too long or at too high a temperature before analysis, how might that affect the results? Is the analyte at issue stable enough so that the measurement will still be reliable, or is it likely to have degraded or otherwise changed so as to call that measurement into question? If there is some likely change, how much and in which direction would the change likely be? Are the questions serious enough to affect the admissibility of the evidence, or will they go only to weight? These can all be critical questions, and they can only be answered by an attorney familiar with all of the scientific, factual, legal, and procedural issues in the case.

Master the entire story. As discussed above, there is often at least one disputable point within a theory based on scientific evidence. That point can come up at any stage in the logical progression, and you will not be able to identify it unless you have mastered the whole story. Most of the examples above involved issues with the application of a method or the interpretation of results, but those are not the only points at which legitimate questions can arise.

You may find, for example, that an entire testing method rests on a questionable scientific footing. This is arguably the case, for example, with certain methods for measuring asbestos content. In one of my criminal asbestos pollution cases, I wanted to know how precise and accurate the asbestos measurement methods used in the case were, so I started digging into the relevant literature. I first obtained the government’s procedural manuals and other technical materials through public records requests. As is often the case, these materials did not themselves outline the empirical studies underlying the methods, but instead cited previous materials. I found those materials, which in turn cited still earlier materials, and so on. In the end, when I finally reached what appeared to be the beginning of the chain, I was genuinely surprised to see how little genuine scientific analysis supported these methods that have been used for decades.

In the end, scientific evidence should be treated like any other evidence, with its probative value determined by its quality, consistency, and connection to the specific issues in a particular case. Some of the scientific evidence that finds its way into environmental litigation is legitimate and entirely valid, some is essentially worthless, and the rest is somewhere in between. For lawyers litigating these cases, our responsibility is to master this evidence so that it ends up playing an appropriate role and carrying an appropriate degree of weight—neither more, nor less, than it deserves.
