October 16, 2014

Consumer Surveys and Other Market-Based Methodologies in Patent Damages

Changing demands on patent damages analyses may force "traditional" damages experts to give way to marketing experts and economists

By Shankar Iyer

Once validity and infringement of the patent in suit are assumed, the central inquiry in calculating economic damages is one of cause and effect. Indeed, patent damages analysis is not much more than a hunt for a principled causal relationship. To wit, it is this causal relationship that the damages expert is asked to summon in the counterfactual world of the Georgia-Pacific hypothetical negotiation. Courts recognize that this is easier said than done: "Determining a fair and reasonable royalty is often . . . a difficult chore, seeming often to involve more the talents of a conjurer than those of a judge." ResQNet.com v. Lansa, 594 F.3d 860 (Fed. Cir. 2010).

The chore becomes harder in the case of technologies that have many interworking components. The more complex a product or service, the more heightened the scrutiny of the evidentiary link between the patented invention and consumer demand. While it is tempting to attribute such scrutiny to the nature of twenty-first-century technology, note this lucid caution from the publication year of Mark Twain's Adventures of Huckleberry Finn:

The patentee . . . must . . . give evidence tending to separate or apportion the defendant's profits and the patentee's damages between the patented feature and the unpatented features, or . . . show . . . that the profits and damages are to be calculated on the whole machine, for the reason that the entire value of the whole machine, as a marketable article, is properly and legally attributable to the patented feature.

Garretson v. Clark, 111 U.S. 120 (1884).

Many an evidentiary link between the patented technology and consumer demand has foundered on the rocky shoals of insufficient data, non-causality, or just plain bad methodology. Numerous attempts at quantifying damages have involved "speculative and unreliable evidence divorced from proof of economic harm linked to the claimed invention," ResQNet.com v. Lansa, 594 F.3d 860 (Fed. Cir. 2010), or unsound methodologies untethered to the patented invention's "footprint in the marketplace," Uniloc USA, Inc. v. Microsoft Corp., 632 F.3d 1292 (Fed. Cir. 2011).

The inexact art of identifying comparable licenses is also under attack. In ResQNet, the court found improper the use of previous licenses because such use was not predicated on "factual findings that account[ed] for the technological and economic differences" between the economic value of the licenses and that of the patented invention. In the midst of this turmoil one can hear an unmistakable clarion call for more market-based evidence. (The reader is invited to refer to a rich body of literature that reads the tea leaves from extant case law and offers both proscriptions and guidance. See, e.g., Brian J. Love, "Patentee Overcompensation and the Entire Market Value Rule," 60 Stan. L. Rev. 263, 293 (2007); Eric E. Bensen & Danielle M. White, "Using Apportionment to Rein in the Georgia-Pacific Factors," 9 Colum. Sci. & Tech. L. Rev. 1, 21 (2008); Patricia Dyck, "Beyond Confusion—Survey Evidence of Consumer Demand and the Entire Market Value Rule," 4 Hastings Sci. & Tech. L.J. 209, 237 (2012)).

What Is Market-Based Evidence?

But what exactly is market-based evidence? Below, I briefly discuss certain methodologies from economics and marketing science that are being tried out in recent patent litigation or are likely to be tried in future litigation. Some of these methodologies are analytically rigorous, are rooted in business practice outside the litigation context, and carry peer-reviewed academic heft. Carefully applied, these methodologies lend themselves to the sorts of damages analyses that courts are increasingly demanding from experts.

A defining characteristic of these methodologies is that they are data-driven. In one class of methodologies ("survey-based methodologies"), these data are gathered from individual respondents' statements or choices in response to stimuli provided by the researcher. Accordingly, survey-based methods are based on preference indicators in hypothetical scenarios. Rank, rating, or choice intentions are examples of preference indicators. Survey-based methods can directly elicit preferences for new alternatives. The other class of methodologies ("revealed preference methodologies") is based on actual market behavior (such as buying a particular brand of cereal or smartphone). Here, the preference indicator is actual choice, and for that reason, the researcher cannot directly predict responses to new alternatives. Survey-based and revealed preference methodologies can complement each other and tend to operate with a similar set of statistical and mathematical techniques.

Survey-Based Methodologies

A straightforward example of a survey-based methodology is a direct elicitation survey. In a direct elicitation survey, the researcher may pose a question or a brief scenario to the survey respondent and ask the respondent to select a single option from a set of multiple choices, or ask a series of "Yes/No/Don't Know" questions. The survey may ask a respondent to rank a set of features from most important to least important. The survey may contain a section where the respondent is invited to answer unaided questions ("What features of your laptop do you use most frequently?") followed by a series of probes ("Are there any other features that you are interested in exploring?"). The virtue of direct elicitation surveys is simplicity: They are direct and easy to administer, generally do not involve heavy-duty statistical analysis of survey data, and are reasonably inexpensive. Therefore, they are an attractive way to present hypothetical scenarios to respondents and gauge the extent to which there is demand for a patented feature.
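To make this concrete, the sketch below tabulates hypothetical responses to a single multiple-choice question of the kind a direct elicitation survey might ask ("Which feature mattered most to your purchase?"). The feature names and responses are invented for illustration and are not drawn from any actual survey.

```python
# A minimal sketch of tabulating a direct elicitation question.
# All data below are hypothetical, not from any actual survey.
from collections import Counter
import math

responses = ["battery life", "screen size", "patented feature",
             "battery life", "patented feature", "weight",
             "battery life", "patented feature", "screen size", "battery life"]

counts = Counter(responses)
n = len(responses)
for feature, k in counts.most_common():
    p = k / n
    # Normal-approximation 95 percent confidence interval for the share
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{feature}: {p:.0%} ± {half_width:.0%}")
```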

However, simple direct elicitation methods, especially when carelessly implemented, are subject to biases. For example, if the survey is not carefully designed, the researcher may improperly elevate the relevance or importance of a particular feature by focusing the respondent's attention on that feature. Another challenge for direct elicitation surveys is to approximate the purchase context: To the extent a direct elicitation survey strays too far from the "moment of truth" of the purchase decision, it is subject to attacks on its reliability and validity.

A more involved survey could elicit the relative importance of product features. In such a survey, the expert first conducts a careful review of primary and secondary features based on product characteristics that are identified as demand drivers in market research involving the accused products. Potential sources for such market research include the manufacturers' own surveys and product websites; expert reviews, specifications, and buying guides featured on popular websites (in consumer electronics, for example, this might include Consumer Reports, CNET, Digital Trends, Gizmag, PCMag, the Verge, etc.); and side-by-side product comparisons from vendors' websites. This is followed by a constant sum importance survey, where the respondent is asked to allocate points to various product features. The constant sum ("dividing the fixed pie") nature of the survey forces respondents to think in terms of relative importance. This is an improvement over a naïve direct elicitation importance survey where consumers are presented with only the patented feature and asked about its importance. A properly executed constant sum importance survey also has the advantage of presenting a much more complete product (or service) context for the respondent.
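A minimal sketch of how constant sum allocations might be summarized appears below. Each hypothetical respondent divides 100 points across four features, and the mean allocation per feature serves as a simple measure of relative importance. The features and point allocations are invented for illustration.

```python
# A minimal sketch of summarizing a constant sum importance survey, assuming
# each respondent allocates exactly 100 points across product features.
# The feature names and allocations are hypothetical.
import statistics

allocations = [
    {"battery life": 40, "screen": 25, "patented feature": 20, "weight": 15},
    {"battery life": 30, "screen": 30, "patented feature": 25, "weight": 15},
    {"battery life": 50, "screen": 20, "patented feature": 10, "weight": 20},
]

for feature in allocations[0]:
    points = [a[feature] for a in allocations]
    # The mean allocation is a simple measure of relative importance
    print(f"{feature}: mean {statistics.mean(points):.1f} points "
          f"(sd {statistics.stdev(points):.1f})")
```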

Other techniques, such as maximum difference scaling, are also available to the researcher to infer scaled ranks (whereby features are not only ranked, but one can also determine how much better a feature ranked fourth, for example, is than a feature ranked fourteenth). This kind of survey works best when the total number of features is of manageable size. It can be especially powerful if there are existing indicia of value for non-patented features, which can then be numerically compared with the features covered by the patent in suit.
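The sketch below illustrates the simplest form of maximum difference scaling: count-based best-worst scoring, in which each task shows a subset of features and records which one the respondent picked as best and which as worst. Practitioners typically fit a logit model to such data rather than relying on raw counts, but the counts convey the intuition. The tasks and features here are hypothetical.

```python
# A minimal sketch of count-based MaxDiff (best-worst) scoring.
# Each task records (features shown, feature picked best, feature picked worst).
# Tasks and features are hypothetical.
from collections import defaultdict

tasks = [
    (["battery", "screen", "camera", "weight"], "battery", "weight"),
    (["battery", "camera", "storage", "weight"], "camera", "weight"),
    (["screen", "camera", "storage", "battery"], "battery", "storage"),
]

shown = defaultdict(int)
best = defaultdict(int)
worst = defaultdict(int)
for features, b, w in tasks:
    for f in features:
        shown[f] += 1
    best[b] += 1
    worst[w] += 1

# Best-minus-worst counts, normalized by exposure, yield a scaled ranking
for f in sorted(shown, key=lambda f: (best[f] - worst[f]) / shown[f], reverse=True):
    score = (best[f] - worst[f]) / shown[f]
    print(f"{f}: {score:+.2f}")
```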

Market researchers have developed and continually refined methodologies that minimize the biases associated with poorly constructed direct elicitation surveys. Conjoint analysis is one such methodology that has recently received considerable attention in high-profile patent disputes. See, e.g., Apple v. Samsung Elecs. Ltd. (Apple v. Samsung I), No. 11-1846, 2012 WL 2571719 (N.D. Cal. June 30, 2012). But see Oracle Am. v. Google, No. 10-3561, 2012 WL 850705 (N.D. Cal. Mar. 13, 2012) (rejecting the use of conjoint analysis).

Conjoint analysis was systematically developed starting in the early 1970s in the field of marketing and is generally recognized by academics and industry practitioners as the most widely studied and applied method of quantifying consumer preference. It has been shown to provide valid and reliable measures of consumer preferences as well as forecasts of consumer behavior. See John R. Hauser & Vithala Rao, "Conjoint Analysis, Related Modeling, and Applications," in Advances in Marketing Research: Progress and Prospects 141–168 (Jerry Wind & Paul Green eds., 2004) (providing an overview of conjoint analysis). Conjoint analysis has been used to measure consumer preferences for features of complex products such as smartphones, automobiles, and GPS devices in numerous peer-reviewed academic studies. See, e.g., Peter J. Lenk, Wayne S. DeSarbo, Paul E. Green & Martin R. Young, "Hierarchical Bayes Conjoint Analysis: Recovery of Partworth Heterogeneity from Reduced Experimental Designs," 15 Marketing Sci., No. 2, 173–191 (1996).

The central idea behind conjoint analysis is that consumers' preferences for a product can be decomposed into preferences for the individual features of the product. Therefore, by asking consumers to choose between different hypothetical products—that vary in one or more features—the researcher is able to quantify the "contribution" of an individual feature to the overall product. Depending on the research question being asked, the researcher can use one or more types of conjoint analysis. For example, in Apple v. Samsung I, John Hauser of the Massachusetts Institute of Technology used a state-of-the-art type of conjoint analysis known as choice-based conjoint (CBC) analysis.

The first step in a CBC analysis is to introduce survey respondents to the features that will be tested in the survey. To avoid focus bias, the researcher usually selects a number of "distraction" features: These are features unrelated to the patent in suit but nevertheless important in making the choice task realistic for a respondent. To increase the immediacy of the choice tasks, the researcher selects a realistic set of features to include in the survey. This involves systematic prior research and pretesting.

Each respondent in a CBC survey performs multiple "choice tasks." In each choice task, respondents are shown hypothetical products, also known as "profiles." For example, in a survey of laptop computers, a respondent may be asked to choose among several profiles (usually three or four), each representing a different laptop. The laptop profiles are constructed by varying the features of that laptop. Thus, laptop A may have a battery life of 10 hours while laptop B has a battery life of 15 hours. The respondent always sees the same high-level features ("battery life," in this example) in each of the choice tasks. That is, the respondent sees the same number of profiles in every task, each having battery life as well as other high-level features. What varies across the laptop profiles is the "amount" of battery life—10 hours, 15 hours, and so on.
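The sketch below illustrates how such profiles might be constructed: enumerate every combination of feature levels (a full factorial design) and draw a choice task of four profiles. The attributes and levels are invented; real CBC studies use balanced, statistically efficient experimental designs rather than pure random draws.

```python
# A minimal sketch of constructing CBC choice tasks by varying feature levels.
# The attributes and levels are hypothetical.
import itertools
import random

attributes = {
    "battery life": ["10 hours", "15 hours"],
    "weight": ["3 lb", "4 lb"],
    "storage": ["256 GB", "512 GB"],
    "price": ["$799", "$999"],
}

# Every possible laptop profile (full factorial design)
names = list(attributes)
profiles = [dict(zip(names, levels))
            for levels in itertools.product(*attributes.values())]

random.seed(0)
# One choice task: four distinct profiles shown side by side
task = random.sample(profiles, 4)
for i, profile in enumerate(task, 1):
    print(f"Laptop {chr(64 + i)}: {profile}")
```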

CBC requires respondents to make trade-offs in choosing a particular laptop profile: One profile may have superior battery life but may lack another feature of interest—weight, for example. The key analytical insight of CBC is that stated choices made by respondents at the product level allow the researcher to infer the relative contribution of features by statistically analyzing the rich set of trade-off data implied by respondents' choices. Accordingly, in this illustrative example, CBC is able to quantify the economic benefit of going from 10 hours of battery life to 15 hours of battery life. Of course, in a specific application, the survey expert has to carefully tie the benefits of the patented invention to the product (or service) feature that is presented to the survey respondent. Moreover, the purported benefit has to be net of available non-infringing alternatives. While carefully constructed conjoint analysis has withstood scrutiny in recent cases, it will no doubt be subject to continued challenges in other cases. (The reader may benefit from perusing Judge Koh's highly detailed and nuanced Daubert analysis in Apple v. Samsung Electronics Ltd., No. 12-0630, 2014 WL 794328 (N.D. Cal. Feb. 25, 2014).)
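For readers curious about the statistics, the following sketch simulates CBC choice data and recovers feature part-worths with a conditional logit model, the workhorse underlying CBC estimation. The two features and their "true" part-worths are invented; real analyses often use hierarchical Bayes methods to recover respondent-level part-worths, as in the Lenk et al. study cited above.

```python
# A minimal sketch of conditional logit estimation of CBC part-worths.
# Data are simulated, not from any actual survey.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_beta = np.array([1.2, 0.5])  # "true" part-worths: long battery, light weight

# Simulate 500 choice tasks, 3 profiles each, 2 binary features per profile
X = rng.integers(0, 2, size=(500, 3, 2)).astype(float)
utility = X @ true_beta + rng.gumbel(size=(500, 3))  # Gumbel noise => logit model
choice = utility.argmax(axis=1)  # respondent picks the highest-utility profile

def neg_log_likelihood(beta):
    v = X @ beta  # deterministic utility of each profile
    log_prob = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(choice)), choice].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print("estimated part-worths:", fit.x.round(2))  # should be close to [1.2, 0.5]
```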

Revealed Preference Methodologies

An alternative (or a complement) to survey-based methodologies is to rely on revealed preference data to identify what product features drive consumer demand and by how much. As noted above, revealed preference data are data based on actual product choices that have been made by consumers. For example, consumer Sue bought the 5.5-inch smartphone with a 32-gigabyte memory at $199 under a two-year contract, consumer Jim bought an unlocked 5.5-inch smartphone with 64-gigabyte memory at $649, and so on. Conceptually, these methodologies work by exploiting the variation in product features to tease out a relationship (not necessarily causal!) between a particular product feature and economic value (for example, a difference in purchase price). If the data gods are munificent, the expert is able to make a statement such as the following: Inclusion of the patented feature increases the market price by $17, all else being equal. The great advantage of revealed preference methodologies is that they are based on battle-hardened (in academia, industry, and litigation) techniques and, done properly, are likely to be looked upon kindly by courts.

But the data gods are not always munificent and the courts are not always kind. Stragent v. Intel, No. 11-421, 2014 WL 1389304 (E.D. Tex. Mar. 6, 2014), is a recent case in point. In Stragent, the plaintiff's expert used the well-established method of multivariate hedonic regression to estimate the value of the accused feature's contribution to product price. Unfortunately for the expert, a set of 19 relevant features was either collectively present or collectively absent in the relevant Intel processors; no more granular data were apparently available. The expert then proceeded to assign equal weight to each of the 19 features to "apportion" the 42 percent effect of the set of 19 features on the average selling price. Circuit Judge Dyk, sitting by designation, deemed this approach arbitrary and not tied to the facts; the expert was unable to summon data in which the patented feature appeared apart from the same bundle of other features.
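A stylized hedonic regression is sketched below: price is regressed on feature indicators, and the patented feature's coefficient is read as its price premium (echoing the hypothetical $17 figure above). The data are fabricated so that identification succeeds; as Stragent illustrates, if the patented feature always appears in the same bundle as other features, the indicator columns are perfectly collinear and the feature's own effect cannot be identified.

```python
# A minimal sketch of a hedonic price regression on fabricated data.
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical product observations
patented = rng.integers(0, 2, n)    # 1 if the product includes the patented feature
big_screen = rng.integers(0, 2, n)  # an unrelated control feature
# Fabricated prices: a $17 premium for the patented feature, $40 for the screen
price = 300 + 17 * patented + 40 * big_screen + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), patented, big_screen])  # intercept + indicators
coef, *_ = np.linalg.lstsq(X, price, rcond=None)         # ordinary least squares
print(f"estimated premium for the patented feature: ${coef[1]:.2f}")  # ~$17
```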

Another state-of-the-art methodology—and one that is used pervasively by data scientists in industry and academia—involves so-called sentiment analysis of online product reviews. This methodology relies on actual consumer sentiments expressed in online product reviews to identify product features that are important to consumers and measure the impact of improving these identified features on the demand for the accused products. In broad terms, online product reviews can be grouped into two categories: consumer reviews, generated by actual purchasers and users of the product (such as consumer reviews on Amazon.com), and professional reviews (such as a review on Consumer Reports' website). The data in these reviews stem from actual user experience.

Researchers working at the intersection of consumer behavior and computer science have developed techniques for text mining and sentiment analysis that usually involve three steps: data collection, text mining and sentiment analysis, and demand estimation. See, e.g., Nikolay Archak, Anindya Ghose & Panagiotis G. Ipeirotis, "Deriving the Pricing Power of Product Features by Mining Consumer Reviews," 57 Management Sci., No. 8, 1485–1509 (2011). First, the researcher identifies the specific Internet sources that have relevant product reviews of the accused products and uses automated web-scraping algorithms to extract relevant data. Then the researcher uses the online product reviews to generate a list of product features mentioned in the reviews. These features can be ranked by importance based on information obtained from reviews (for example, based on how frequently they are mentioned by consumers). The analysis of consumer reviews also identifies consumer sentiments about the features. The sentiments are then converted into ratings and used to measure whether and how sentiments about a certain feature differ across different product types and over time. Finally, the researcher uses sales information for the relevant products, together with the results of the second step, to estimate how improving product features affects consumer demand. This is usually achieved through a demand-side regression with the sales of the accused products as the dependent variable and consumers' sentiments on identified features as the independent variables.
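The final, demand-estimation step might look like the sketch below: regress log unit sales on average sentiment ratings for the identified features. All numbers are fabricated, and a real application would control for price, seasonality, brand, and other demand drivers.

```python
# A minimal sketch of the demand-side regression in the third step.
# All data are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 150  # hypothetical product-month observations
camera_sentiment = rng.uniform(1, 5, n)   # average mined sentiment, rescaled 1-5
battery_sentiment = rng.uniform(1, 5, n)
# Fabricated demand: camera sentiment matters more than battery sentiment
log_sales = (6 + 0.30 * camera_sentiment + 0.10 * battery_sentiment
             + rng.normal(0, 0.2, n))

X = np.column_stack([np.ones(n), camera_sentiment, battery_sentiment])
coef, *_ = np.linalg.lstsq(X, log_sales, rcond=None)
print(f"camera sentiment coefficient: {coef[1]:.2f} log-sales per rating point")
```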

The attractiveness of text mining and sentiment analysis is obvious in patent damages contexts where online reviews are plentiful: The data being analyzed are actual online product reviews and are not subject to the cut and thrust of discovery; the methodology is at the frontier of data science; applications in business are mushrooming; and, unlike survey-based methodologies, sentiment analysis draws on data from many thousands of online product reviews. However, this methodology is likely to work best when the expert is able to tie the relevant variation in revealed preference data to the benefits of the patented invention. This may require additional analysis, including a complementary survey.

Conclusion

The call for market-based, empirical evidence of consumer demand has raised the level of rigor in the analysis of patent damages. Both survey-based evidence and revealed preference methodologies are likely to be increasingly proffered by plaintiffs and defendants. In short, the damages expert will not be what he or she used to be. We will see more use being made of marketing experts and economists as complements and perhaps eventually as substitutes for "traditional" damages experts. These may be the best of times and the worst of times to be involved in patent damages.

Keywords: litigation, intellectual property, patent, damages, market-based evidence, surveys, experts


Copyright © 2014, American Bar Association. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or downloaded or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. The views expressed in this article are those of the author(s) and do not necessarily reflect the positions or policies of the American Bar Association, the Section of Litigation, this committee, or the employer(s) of the author(s).