Ensuring Validity and Admissibility of Consumer Surveys

By Rebecca Kirk Fair and Laura O’Laughlin – March 31, 2017

Surveys can be a useful method through which to deliver evidence, particularly when other sources of data are not available. Consumer surveys have been offered as evidence in trademark-infringement matters for decades and are gaining prominence across practice areas, including patent litigation, false-advertising and consumer-protection cases, and employment-related class actions.

The relevance and usefulness of expert-submitted surveys in any legal context, though, depend on how they are designed and implemented. A recent opinion by Judge Richard Posner of the U.S. Court of Appeals for the Seventh Circuit highlights some of the pitfalls of using surveys in litigation:

Consumer surveys conducted by party-hired expert witnesses are prone to bias. There is such a wide choice of survey designs, none foolproof, involving such issues as sample selection and size, presentation of the allegedly confusing products to the consumers involved in the survey, and phrasing of questions in a way that is intended to elicit the surveyor’s desired response—confusion or lack thereof—from the survey respondents.

As the opinion makes clear, avoiding bias, in fact and in appearance, is central not only to a survey’s admissibility but also to the probative weight accorded to the survey expert’s testimony. Put simply, valid surveys require a survey expert to ask the right people the right questions in the right way. If a survey fails in any one of these areas—the population sampled, the method, or the implementation—it may suffer from one or more biases.

Consider three categories of potential biases:

  • Selection biases relate to the population studied (i.e., did the expert seek out and ask the right people using statistically valid sampling techniques?).
  • Information-related biases relate to which questions are asked, how the questions are asked, and what answers are offered.
  • Analytical biases relate to how the data are analyzed, such as the interpretation of open-ended responses.

To encourage acceptance by courts, the survey expert must take affirmative steps to document the relevance of the survey design, demonstrate the use of statistically sound sampling techniques, and show how the potential for bias in the results was minimized or avoided. Although experts may recover from errors arising from analytical biases and, in some cases, information-related biases, it is nearly impossible to recover from selection-related biases that stem from a failure to identify the right population. There is no way to know, with any degree of certainty, whether selection-related errors bias the results, let alone whether any such bias overstates or understates them. A valid survey must study the right population; otherwise, the results are irrelevant.

Practice Points:

  • A survey in aid of litigation will have greater probative value if the expert can document and support the choice of question, sample, and method while demonstrably minimizing the possibility of biases that could “tweak” the survey method in his or her favor.
  • The survey expert must define, target, and sample from the segment of the population whose beliefs are relevant to the issues in the case; otherwise, the survey may be open to critiques of selection bias. If the wrong people are asked, the results are likely to be irrelevant, and the data may be excluded.
  • An appropriate and admissible survey should be grounded in an academically rigorous and unbiased methodology, matching the design and the questions to the objective.
  • The survey expert’s decision process in determining how questions are asked should be made as transparent as possible to the trier of fact.
  • Consider including complementary evidence to demonstrate that the results of a survey are consistent with other data or economic theory. Such evidence may also be helpful in demonstrating that data and conclusions are only minimally affected (if at all) by possible sources of selection bias, informational bias, and/or analytical bias.

Rebecca Kirk Fair is a managing principal at Analysis Group in Boston, Massachusetts. Laura O’Laughlin is a senior economist at Analysis Group in Montreal, Quebec, Canada.

Copyright © 2017, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).