
Ensuring Validity and Admissibility of Consumer Surveys

Reviewing relevant cases and literature and discussing best practices for the use of expert-prepared consumer surveys as part of litigation strategies.

By Rebecca Kirk Fair and Laura O’Laughlin – March 8, 2017

Consumer surveys have been offered as evidence in trademark infringement matters for decades. A federal district court in the Ninth Circuit noted in 2015 that surveys are now “de rigueur in patent cases” as a tool to evaluate and quantify damages relating to alleged infringement, highlighting the increasing acceptance of surveys across practice areas. Sentius International LLC v. Microsoft Corp., 2015 WL 331939 (N.D. Cal. Jan. 23, 2015). In addition to being used to provide evidence on drivers of consumer demand in recent high-profile patent litigations involving firms such as Apple, Microsoft, Samsung, Oracle, and Google, consumer surveys are also being used to evaluate the presence or extent of consumer harm in false advertising and consumer protection cases. Such surveys might explore how consumers’ purchase decisions may change if a product were advertised in a different manner or if the claims by a competitor were narrowed. See, e.g., Aviva Sports, Inc. v. Fingerhut Direct Marketing, Inc., 829 F. Supp. 2d 802 (D. Minn. 2011); Millennium Laboratories, Inc. v. Ameritox, Ltd., 924 F. Supp. 2d 594 (D. Md. 2013). Furthermore, such surveys can be used to fill an evidentiary gap in employment-related class actions, such as for missing or incomplete employment records or a lack of documentation as to the pay and promotion decisions of a large group of managers. See, e.g., Tyson Foods, Inc. v. Bouaphakeo et al., 136 S. Ct. 1036, 1046–49 (2016); Wal-Mart Stores, Inc. v. Dukes et al., 564 U.S. 338, 356–57 (2011). In such cases, admissibility of the survey may depend on the relevance of the response group and whether a statistical sample is sufficient to determine class-wide liability.

The relevance and usefulness of expert-submitted surveys in any legal context, though, is dependent on how they are designed and implemented. A recent decision from Judge Richard Posner of the U.S. Court of Appeals for the Seventh Circuit highlights some of the pitfalls of using surveys in litigation:

Consumer surveys conducted by party-hired expert witnesses are prone to bias. There is such a wide choice of survey designs, none foolproof, involving such issues as sample selection and size, presentation of the allegedly confusing products to the consumers involved in the survey, and phrasing of questions in a way that is intended to elicit the surveyor’s desired response—confusion or lack thereof—from the survey respondents.

Kraft Foods Group Brands LLC v. Cracker Barrel Old Country Store, Inc., 735 F.3d 735 (7th Cir. 2013).

As the opinion makes clear, the avoidance of bias, either in fact or appearance, is central not only to a survey’s admissibility but also to the probative weight accorded to the survey expert’s testimony. Bias may sometimes be obvious; at other times, it may be difficult to detect. This article discusses possible sources of bias and describes methods and techniques that a survey expert can use to minimize this bias.

Bias Defined
Valid surveys require a survey expert to ask the right people the right questions in the right way. In other words, a survey expert must implement an appropriate method to accurately measure the construct of interest, all while sampling from an appropriate population. If a survey fails in any one of these areas—method, implementation, and population sampled—it may suffer from one or more biases.

To encourage acceptance by courts, the survey expert must take affirmative steps to verify that careful and relevant design and sampling techniques were used, demonstrating that potential biases have been avoided. Consider three categories of potential biases:

1) Selection biases relate to the population studied (i.e., did the expert seek out and ask the right people using statistically valid sampling techniques?).

2) Information-related biases relate to which questions are asked, how the questions are asked, and what answers are offered.

3) Analytical biases relate to how the data are analyzed, such as the interpretation of open-ended responses.

In certain instances, if biases are introduced through the analyses of survey results, alternative analyses could be conducted using the same data. Experts may even recover from errors resulting from information-related biases—an imperfect question, for example, may still provide relevant information. However, it is nearly impossible to recover from selection-related biases that result in a failure to identify the right population. A valid survey must study the right population; otherwise, the results are irrelevant. See Bank of Utah v. Commercial Security Bank, 369 F.2d 19, 27 (10th Cir. 1966) (“A survey is inadmissible when the sample is clearly not representative of the universe it is intended to reflect.”).

The assessment of bias in court cases is particularly critical; recent expert reports and court opinions have revealed an increasing emphasis on demonstrating adherence to best practices.

Opinions of Relevant People
A key element of a reliable survey involves identifying the appropriate “universe” of respondents from which to draw. The expert must define, target, and sample from the segment of the population whose beliefs are relevant to the issues in the case; otherwise, the survey may be open to critiques of selection bias. If the wrong people are asked, the results are likely to be irrelevant, and the data may be excluded.

If the universe is not appropriately defined, the resulting sample of respondents may be either overly broad (overinclusive) or overly narrow (underinclusive), either of which may lead to the exclusion of survey results from evidence. For example, the survey expert in a recent class action matter defined the target universe as “the population of [appliance] owners” but failed to provide a viable method to sample from this population to obtain reliable results. In the order excluding this expert’s testimony, the court found that the

[expert] cannot say much of anything about who answered his internet survey . . . . [The expert] can’t say for sure whether any survey-takers actually owned [the appliance at issue]. Identifying data was not requested, such as serial number or other criteria tending to establish that the survey responder really owned the product.

In re Front Loading Washing Mach. Class Action Litig., 2013 WL 3466821, at *7 (D.N.J. July 10, 2013).

On the other hand, in another recent matter, the sampling of a relatively small group of employee plaintiffs was accepted as a means of establishing a class, in part due to the absence of any other practicable means of collecting relevant data. Tyson Foods, Inc., 136 S. Ct. at 1046–49.

Sampling errors are particularly problematic because there is no way to know, with any degree of certainty, whether these selection-related errors bias the results and, if they do, whether the bias overstates or understates them.

Academically Rigorous and Unbiased Methodologies
An appropriate and admissible survey should be grounded in an academically rigorous and unbiased methodology, matching the design and the questions to the objective. Once the key questions are identified, the survey expert should consider the most appropriate approach to assess these questions.

For example, if the objective is to assess the impact on consumer behavior of particular claims in advertising in a consumer confusion matter, a “test-and-control” experimental design is often the best choice as it can help isolate whether there is a causal link between the claims and consumer behavior. A test-and-control design can also isolate the impact of a false claim relative to a narrower (but accurate) claim. The “Eveready” trademark survey design, based on a survey used in Union Carbide Corp. v. Ever-Ready Inc., 531 F.2d 366 (7th Cir. 1976), is an early example of the acceptance of test-and-control design. In that matter, the Seventh Circuit determined that the district court had erred when it found that surveys were entitled “to little, if any, weight” and affirmed the value of surveys in determining whether there exists a likelihood of confusion between two products.
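
To illustrate the logic of a test-and-control design in concrete terms, the sketch below (written in Python, with entirely hypothetical counts) compares the rate of “confused” responses in a test cell shown the accused stimulus against a control cell shown a non-infringing or accurate alternative. The difference between the two rates is the “net” effect attributable to the stimulus, and a simple two-proportion z-test gauges whether that difference exceeds what chance alone would produce. This is an illustrative sketch only, not a depiction of any particular expert’s method.

    # Hypothetical sketch: net confusion in a test-and-control design.
    # All counts are invented for illustration.
    import math

    test_yes, test_n = 78, 200        # "confused" responses in the test cell
    control_yes, control_n = 30, 200  # "confused" responses in the control cell

    p_test = test_yes / test_n
    p_control = control_yes / control_n
    net_effect = p_test - p_control   # net confusion attributable to the stimulus

    # Two-proportion z-test (pooled) to gauge whether the difference
    # is larger than chance alone would suggest.
    p_pool = (test_yes + control_yes) / (test_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / control_n))
    z = net_effect / se

    print(f"Test rate: {p_test:.1%}, control rate: {p_control:.1%}")
    print(f"Net effect: {net_effect:.1%}, z = {z:.2f}")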

Still, an accepted design is not a panacea: a survey written in an overly broad manner, even if based on a standard methodology, may be deemed inadmissible. In Fractus, S.A. v. Samsung Electronics Co., for example, a broad survey was excluded because it confused the issue, risking a jury award based on the total value of a cellular phone component rather than the value of the at-issue single aspect of the component. C.A. No. 6:09-cv-203-LED-JDL (E.D. Tex. Apr. 29, 2011).

Appropriate and Unbiased Implementation
As Judge Posner noted, survey evidence, like most expert-presented evidence, is generally sponsored by a party in litigation. To avoid informational biases, the right survey questions must be asked in the right way. Recent litigation outcomes also suggest that the survey expert’s decision process in determining how questions are asked should be made as transparent as possible to the trier of fact. Key design choices include question phrasing, survey methodology, experimental design, and survey administration. Practically speaking, a survey in aid of litigation will have greater probative value if the expert can document and support the choice of question, sample, and method while minimizing the possibility of biases that “tweak” the survey method in his or her favor.

The survey expert’s decision to use open-ended or closed-ended questions can have implications in terms of relevance, analysis, and potential for or perception of bias. Open-ended questions increase analytical complexity and may make it difficult to group responses effectively; alternatively, closed-ended questions might “push” respondents into an answer they would not otherwise have given, a concern expressed by a federal district court in Hubbard v. Midland Credit Management. 2009 WL 454989, at *3 (S.D. Ind. Feb. 23, 2009) (“More fundamentally, it is not clear that closed-end questions are the appropriate way to test for the type of alleged deception in this case. The court perceives a significant risk that the closed-end questions would push respondents to read more into the disputed letters than is actually there.”).

When phrasing questions, the survey expert should be wary of “unexpected meanings and ambiguities to potential respondents.” Shari S. Diamond, Reference Guide on Survey Research, in Reference Manual on Scientific Evidence 359, 387–88 (3d ed. 2011). Experts should endeavor to adopt “a critical attitude toward [their] own questions.” S. L. Payne, The Art of Asking Questions 16 (Princeton Univ. Press 1951). If questions are unclear or attempt to test too many factors at once, they “may threaten the validity of the survey by systematically distorting responses if respondents are misled in a particular direction.” Diamond, supra, at 388. Examples of distortion include questions that are framed in a way to prompt a “yes,” using nonblind interviewing protocols, or asking questions that inadvertently “tip off” the respondent to the researcher’s hypothesis.

Pretesting may also be used to validate and evaluate various design decisions, “to increase the likelihood that questions are clear and unambiguous,” and to demonstrate that the researcher took steps to minimize the possibility of unintended bias from an aspect of the survey or experiment. See Diamond, supra, at 388; see also Alan G. Sawyer, Demand Artifacts in Laboratory Experiments in Consumer Research, 1 J. Consumer Res. 20, 30 (Mar. 1975) (noting that pretesting allows the researcher to minimize the possibility that bias causes “the subject to perceive, interpret, and act upon what he believes is expected or desired of him by the experimenter”).

Appropriate and Unbiased Survey Data Analysis
Different survey and experimental designs require different methods to analyze the data; these methods can be affected by analytical biases. In particular, surveys that include open-ended responses typically require careful and often subjective analysis to determine the results. To avoid introducing researcher bias, open-ended responses can be analyzed by coders who are blind to the purpose of the study. When analyzing data, it may also be necessary to exclude certain categories of respondents with appropriate justification, such as respondents who always select the first option in multiple-choice answers, due to a suspicion that these respondents were not paying sufficient attention to the survey task. If, on the other hand, the expert excludes larger categories of respondents, such as consumers of particular products or consumers residing in certain regions, the reasons for such exclusions should be well documented and appropriately justified, and the effect of such exclusions should be tested and understood.
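
As one illustration of documenting such exclusions, the hypothetical Python sketch below flags “straight-line” respondents who always selected the first option and reports the key result both with and without them, so the effect of the exclusion is transparent. The data and column names are invented for illustration.

    # Hypothetical sketch: flag respondents who chose the first option ("A")
    # on every multiple-choice question, then report the key result with and
    # without them so the effect of the exclusion is documented.
    import pandas as pd

    responses = pd.DataFrame({
        "resp_id": [1, 2, 3, 4, 5],
        "q1": ["A", "A", "B", "C", "A"],
        "q2": ["A", "B", "B", "A", "A"],
        "q3": ["A", "C", "D", "B", "A"],
        "outcome": [1, 0, 1, 0, 1],   # e.g., 1 = "would purchase"
    })

    choice_cols = ["q1", "q2", "q3"]
    responses["straight_liner"] = responses[choice_cols].eq("A").all(axis=1)

    full_rate = responses["outcome"].mean()
    cleaned_rate = responses.loc[~responses["straight_liner"], "outcome"].mean()

    print(f"Result, all respondents:   {full_rate:.1%}")
    print(f"Result, excluding flagged: {cleaned_rate:.1%}")
    print(f"Respondents excluded:      {responses['straight_liner'].sum()}")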

Cross-Validated Survey Results
To demonstrate that the results of a survey are consistent with other data or economic theory, survey experts and their teams can also provide complementary evidence. For example, data analyses—such as a hedonic pricing analysis or a before-and-after sales data analysis—may provide results consistent with those found in a survey. Fact witnesses, deposition testimony, and the evidentiary record, as well as economic theory, can also corroborate survey results. Such evidence may also be helpful in demonstrating that data and conclusions are only minimally affected (if at all) by possible sources of selection bias, informational bias, and/or analytical bias.
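
As a simple illustration of a before-and-after sales check, the hypothetical Python sketch below compares average monthly unit sales before and after the date an advertising claim changed, to see whether market outcomes move in the same direction as the survey results. All figures and dates are invented for illustration.

    # Hypothetical sketch: before-and-after comparison of monthly unit sales
    # around the date an advertising claim changed.
    import pandas as pd

    sales = pd.DataFrame({
        "month": pd.period_range("2015-01", periods=12, freq="M"),
        "units": [100, 102, 98, 101, 99, 103, 120, 118, 122, 119, 121, 117],
    })
    claim_change = pd.Period("2015-07", freq="M")

    before = sales.loc[sales["month"] < claim_change, "units"].mean()
    after = sales.loc[sales["month"] >= claim_change, "units"].mean()

    print(f"Average monthly units before the claim change: {before:.1f}")
    print(f"Average monthly units after the claim change:  {after:.1f}")
    print(f"Change: {(after / before - 1):.1%}")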

Conclusion
Proper vetting of survey evidence can be a crucial component of a litigation strategy. Hiring the right experts and following best practices can help ensure that survey evidence reaches the jury. Meanwhile, identifying design failures, biased samples, or flawed analyses can support the exclusion of faulty surveys. Even if a survey contains notable flaws in implementation, analysis, or validation, however, case law in the Ninth Circuit and elsewhere establishes that juries are able to assess the impact of possible technical deficiencies on the probative value of a survey.

Courts have been and are likely to remain skeptical of surveys, and methodological flaws can hurt both a survey’s admissibility and the weight it is given. Recent decisions relating to the validity and admissibility of survey evidence highlight the necessity of adhering to best practices at every step. Overall, though, surveys can be a useful method through which to deliver evidence, particularly when other sources of data are not available.


Rebecca Kirk Fair is a managing principal at Analysis Group in Boston, Massachusetts. Laura O’Laughlin is a senior economist at Analysis Group in Montreal, Quebec, Canada.


Copyright © 2017, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).