
Antitrust Magazine

Volume 35, Issue 1 | Fall 2020

Consumer Welfare and Privacy in Antitrust Cases—An Economic Perspective

Garrett Glasgow and Christopher Stomberg

Summary

  • Competition laws could plausibly be implicated if consumers feel they have no alternative but to share private information with a dominant firm that underinvests in protecting consumer data. 
  • Various economic methods are available to quantify the harm to consumers' privacy interests from unwanted use of their personal data. 
  • Where firms that are collecting, sharing, and selling consumer information are also using this information to provide goods and services that consumers want, any antitrust enforcement actions to enhance privacy interests must be balanced against any resulting increases in the price or reductions in the (non-privacy-related) quality of the products provided by the firms.

With increasing attention on the potential market power of technology and online companies, concerns over privacy have begun to overlap with traditional antitrust concerns in various ways. Many online companies linked to privacy concerns control networks with a large number of users through different online businesses (e.g., Facebook for social networking, Google for web searching). The data collection and use policies of these companies have thus attracted increasing attention from both antitrust agencies and plaintiffs in class action lawsuits.

Under what circumstances do these privacy-related concerns also become antitrust concerns? Some contend that privacy issues are not an antitrust matter at all, and instead are best handled through consumer protection laws. Others, however, argue that certain privacy issues can arise specifically in antitrust cases. Most often, the argument is that a lack of competition forces consumers to sacrifice their privacy if they wish to take part in the digital economy.

In this article, we focus primarily on the analysis of consumer welfare issues related to privacy and antitrust from an economic perspective—specifically, how one can determine whether consumers are harmed through the collection, sharing, and selling of their personal information, how that harm can be quantified, and how one can determine whether a proposed remedy would increase consumer welfare.

Privacy-Related Consumer Harm in an Antitrust Setting

The most obvious type of privacy-related harm to consumers that arises from a lack of competition is the “forced” sharing of personal information. As a firm gains market power, it may be able to compel consumers to share personal information in order to use its product or service. Consumers might prefer to use a different product or service that requires less sharing, but a firm with market power can make alternative options harder to find, or nonexistent. The lack of competition on privacy-related matters is often described as reducing the quality of the good or service, with the dominant firm offering a lower-quality (less privacy-protective) product than it would offer with greater competition.

How exactly does the collection, sharing, and selling of consumers’ personal information affect consumer welfare in an antitrust setting? One possible source of harm is the increased risk consumers bear when they share personal information online, whether that risk stems from the unauthorized sharing of their personal information or from a data breach.

Admittedly, a firm’s unauthorized sharing or failure to safeguard a consumer’s personal information is most likely a consumer protection rather than an antitrust matter. However, competition laws could plausibly be implicated if consumers feel they have no alternative but to share private information with a dominant firm that underinvests in protecting consumer data. Theories along these lines may be most applicable under EU antitrust law due to Article 102, which “prohibits abusive conduct by companies that have a dominant position on a particular market.” In the United States, the question would be whether specific restraints or conduct that enhance market power increase the ability to collect or use private information in a manner that puts personal data at risk, and whether that potential harm is outweighed by the potential procompetitive effect of the challenged practice.

Another possibility is that privacy itself has an intrinsic value to consumers. Even if personal information sharing did not carry any specific risk, consumers may still prefer not to share. In this case, feeling compelled to share personal information may reduce consumer utility in ways that can be hard to measure, such as by causing anxiety or offense.

Quantifying potential privacy-related harms dramatically improves the ability to assess whether consumer welfare has been harmed through a lack of competition. Measuring prices and changes in prices under different competitive scenarios is a bedrock of antitrust economics. However, many of the companies for which there may be privacy-related antitrust concerns do not directly charge consumers for their services. Instead, these companies often derive their revenue by collecting, sharing, or selling access to the personal data of their users. While these services are “free” from a monetary perspective, consumers are in fact “paying” for the service by agreeing to share their personal information. The economic relationship between platform and subscriber can persist for years with no explicit financial transaction. Services of the platform flow in one direction, while personal information and the value of network participation flow in the other, in what is essentially a barter transaction. The depth and pervasiveness of these barter transactions in the online world introduce a challenge to one of the most basic questions in antitrust analysis: what is the price?

Are firms exercising market power to “underpay” consumers in terms of goods and services offered in exchange for their personal information, or are consumers getting a good deal? In order to answer that question, we must first determine what privacy is worth to consumers. However, determining the value of privacy and personal information is notoriously difficult.

What Is Privacy Worth?

Determining the correct way to measure the value of privacy and personal information is a fundamental challenge, with different approaches to calculating this value often leading to very different answers. In surveys, consumers often indicate that they place a high value on their privacy and personal information. Simultaneously, these very same consumers often share their personal information for little or no apparent compensation, suggesting it may be of low value. This discrepancy between stated and revealed preferences for the sharing of personal information is frequently referred to as the “privacy paradox.”

One potential explanation for the privacy paradox is that it could be caused by a lack of competition—competition that would otherwise discipline companies to fully compensate individuals at market rates for their information. Under this scenario, consumers do place a high value on their privacy but feel compelled to share their personal information with little overt compensation because they have few alternatives. From this perspective, the firm would be exercising market power in its acquisition of key production inputs: consumer data and advertising targets. For example, in some cases, firms require users to confirm their identity by providing personally identifying information (PII) that includes the user’s name, date of birth, credit card information, and sometimes photos. Generally, subscribers to these services also must agree to terms of use that allow the platform to use or share other forms of potentially private information, such as browsing habits and network linkages. Users who refuse to provide this information may not be allowed to create or access accounts on these services. A social networking platform with market power, for example, could compel users to share valuable PII in order to access their network of friends, relatives, and co-workers—essentially agreeing to a price for this information below that which might prevail in a competitive market. The low valuations that consumers appear to reveal when sharing their private data may thus reflect a lack of competition in the marketplace rather than a genuinely low value of privacy.

An alternative explanation for the privacy paradox minimizes these antitrust concerns. Under this scenario, the discrepancy between statements and actions related to privacy might arise not because the revealed value of privacy is underestimated, but because the stated value of privacy is overestimated due to survey response biases. This explanation of the privacy paradox has received comparatively little attention in the economics literature, although there is research in other areas to support this viewpoint.

Survey-based methods such as contingent valuation (CV) have been developed to value environmental services and applied by researchers studying the value of privacy. Although CV has been widely used in environmental valuation, its use (and the high valuations it frequently produces) has been controversial among economists. For example, one recent study found that 23 percent of survey respondents stated that they were personally willing to pay at least $10,000 for a program to protect the Red Knot (a migratory shorebird species), even though only 12 percent of respondents were aware of its existence before the survey. The potential biases in CV and other survey-based methods are the subject of a large and growing body of economic literature.

Surveys on the topic of privacy that implicitly or explicitly inform respondents may be subject to response biases that could exaggerate the apparent value of personal information. For instance, survey questions on data sharing may signal to survey respondents that they should regard their personal information as valuable (an experimenter-demand effect) or lead survey respondents to downplay their willingness to trade privacy for convenience (a social desirability bias). These potential biases could be strengthened by media coverage on privacy issues that reinforces the idea that placing a high value on privacy is the “correct” and “responsible” viewpoint. These factors, along with the typical concerns that would apply to any survey, such as question framing effects and sample representativeness, potentially introduce scope for other biases in the valuation of privacy.

Another explanation for the privacy paradox is that the value of privacy to consumers is neither “high” nor “low”—rather, it is context-dependent. There is evidence that consumers generally place high value on protecting their personal information but are also perfectly willing to share their personal information in specific circumstances. In one lighthearted example, a performance artist at an arts festival in Brooklyn offered a free cookie to any attendee willing to provide sensitive PII, such as driver’s license information. A surprising number of people were willing to make this trade: nearly one-third of the participants in this informal experiment were willing to be fingerprinted in return for a cookie, even though the artist refused to say what she would do with the information.

The individuals in this example may have been willing to divulge their PII because they perceived the risk of sharing to be low. Academic research has shown that an individual’s willingness to divulge personal information depends on his or her perceptions of the risks and consequences associated with sharing with a particular individual or company. These perceptions, and thus the value of privacy, are unlikely to be uniform across companies that collect consumers’ personal information. This adds an additional level of complexity to any antitrust case, as an “overpayment” in terms of personal information in one case might be viewed as a fair price by consumers in another. Studies have also shown that attitudes regarding the value of privacy can be cultural and generational, with valuations varying widely depending on age, country, and other demographic factors.

The value of privacy to consumers also appears to vary depending on whether consumers are asked to consider how much they would pay to avoid sharing their personal information (a query regarding their “willingness to pay,” or WTP) or how much compensation they would demand if their personal information were shared (an assessment of their “willingness to accept,” or WTA). For example, one recent study found that consumers were willing to pay $5 per month to maintain data privacy but would demand $80 to allow access to their personal data. These types of WTP-WTA differences in valuations are common, and they may be especially important when considering proposed remedies for alleged antitrust behavior. For instance, which remedy is best—the establishment of more privacy protection, or compensation for consumers for the past sharing of information?
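To see how consequential the choice between WTP and WTA measures can be for remedies, consider a back-of-the-envelope sketch using the $5 and $80 figures cited above. The class size and damages period below are purely hypothetical, chosen only to illustrate the arithmetic:

```python
# A back-of-the-envelope sketch using the WTP/WTA figures cited above
# ($5 vs. $80 per month); class size and damages period are hypothetical.
wtp_per_month = 5.0    # willingness to pay to keep data private ($)
wta_per_month = 80.0   # compensation demanded to allow data access ($)

class_size = 1_000_000  # hypothetical number of affected consumers
months = 24             # hypothetical damages period

damages_wtp = wtp_per_month * class_size * months
damages_wta = wta_per_month * class_size * months
print(f"Aggregate damages under WTP: ${damages_wtp:,.0f}")
print(f"Aggregate damages under WTA: ${damages_wta:,.0f}")
print(f"WTA/WTP ratio: {wta_per_month / wtp_per_month:.0f}x")
```

Under these assumptions the same alleged conduct yields a sixteen-fold difference in estimated damages depending solely on which valuation concept is used.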

The discussion above should not be taken to mean that determining the value of privacy in any particular case is impossible. In fact, there are numerous studies that have used conjoint analysis or other types of survey experiments to calculate the WTP to avoid sharing personal information when purchasing various types of goods and services. These same methods could be applied to antitrust cases, although with some of the caveats noted above. Other challenges are likely to arise when attempting to operationalize this information for use in an antitrust context, particularly for technology and online companies.
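As a stylized illustration of how such a choice experiment can yield a WTP estimate, the sketch below simulates respondents choosing between two hypothetical service plans that differ in monthly price and in whether personal data are shared, then recovers WTP as the ratio of the estimated privacy and price coefficients from a logit choice model. All parameter values and variable names are invented for illustration and are not drawn from any study discussed here:

```python
# A minimal sketch of WTP estimation from a conjoint-style choice
# experiment; all parameters are hypothetical illustrations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000  # hypothetical number of choice tasks

# Each task: plan A vs. plan B, differing in monthly price and in
# whether the plan avoids sharing personal data (1 = no sharing).
price_diff = rng.uniform(-10, 10, n)          # price_A - price_B ($)
privacy_diff = rng.integers(0, 2, n) * 2 - 1  # privacy_A - privacy_B

# Assumed taste parameters: disutility of price, utility of privacy.
alpha_true, gamma_true = 0.4, 2.0
utility_diff = -alpha_true * price_diff + gamma_true * privacy_diff
p_choose_A = 1 / (1 + np.exp(-utility_diff))  # logit choice probability
chose_A = (rng.random(n) < p_choose_A).astype(int)

# Fit a logit on attribute differences (no intercept, by design).
X = np.column_stack([price_diff, privacy_diff])
fit = sm.Logit(chose_A, X).fit(disp=0)
b_price, b_privacy = fit.params

# WTP = marginal utility of privacy / marginal utility of money,
# which recovers roughly gamma_true / alpha_true = $5 per month.
wtp = -b_privacy / b_price
print(f"Estimated WTP to avoid data sharing: ${wtp:.2f} per month")
```

In practice, researchers would use conditional or mixed logit models over richer attribute sets, and the heterogeneity and context-dependence noted above would complicate the interpretation of any single average WTP figure.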

The Price of Privacy in Antitrust

Valuation is not generally the same as price; valuation is a building block of demand (and of supply, in a barter market). In the transaction to sign up on a digital platform, the consumer balances the value of services received from participation in the platform against the value of information to which the platform requires access (e.g., personal information, people he or she knows, content he or she shares). Even when a monetary value can be assigned on both sides of this benefit/cost equation, that value will vary across individuals—perhaps substantially. Studies frequently show wide variation in the valuations that people give to their private data. Of course, the valuations of any type of good or service can vary across individuals, yet still produce a market price. However, determining a “fair market price” for privacy and personal information on a digital platform will pose a challenge.

YouTube provides a useful example of a dynamic that is similar across many platforms and that has implications for any approach to determining the value of privacy and personal information. Many, possibly the majority, of YouTube users never upload a single video of their own—they simply consume video. In 2019, 82 percent of YouTube users reported using the platform for entertainment; 23 percent, for news; 18 percent, to follow brands or companies; and 8 percent, to keep in contact with friends and family. The quid pro quo is the information users provide and the advertising that is steered toward them. Some users who do create content may use YouTube to share a handful of videos with a small network of friends. Businesses may use YouTube as a repository for instructional and marketing material. Still other users may be celebrities with millions of channel subscribers. The balance of value in each case could be quite different. In the extreme, the YouTube celebrity provides large amounts of content and builds a dedicated following of viewers. In return, the celebrity receives a stream of income from the platform as a function of the popularity of the celebrity’s material and the advertising it supports. From among these disparate data points, what is the prevailing price of private information? This creates an analytical challenge from an antitrust perspective.

Suppose a platform firm with market power could exercise monopsony power by forcing subscribers to part with their information at a price lower than would prevail in a competitive market (by providing less valuable service, taking less care with private data, etc.). A key concern, from both a liability and a damages perspective, would be determining the prices that would prevail under competition—specifically, how individual valuations of privacy and personal information translate to market prices. Since there is no overt market price signal to begin with, and potentially many heterogeneous valuations with which to contend, this will be challenging. Average valuations across consumers would not necessarily be relevant, since in a competitive market it is marginal consumers and suppliers that dictate market price, and there is no guarantee that the marginal consumer’s valuation equals the average.
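A minimal simulation makes the point concrete. Assuming, purely for illustration, a right-skewed lognormal distribution of willingness-to-accept values and a given level of platform demand for data, the competitive price is pinned down by the marginal supplier rather than by the average valuation:

```python
# A minimal simulation (hypothetical numbers) of why the average
# valuation of personal data need not equal the competitive price:
# the price is set by the marginal supplier, and valuations are skewed.
import numpy as np

rng = np.random.default_rng(1)
n_consumers = 100_000

# Assume heterogeneous willingness-to-accept (WTA) for sharing data,
# drawn from a right-skewed lognormal distribution ($/month).
wta = rng.lognormal(mean=1.0, sigma=1.2, size=n_consumers)

# Suppose competing platforms together demand data from 60% of consumers;
# the market clears at the marginal (60th-percentile) supplier's WTA.
demand_share = 0.60
clearing_price = np.quantile(wta, demand_share)

print(f"Mean WTA (average valuation): ${wta.mean():6.2f}")
print(f"Competitive clearing price:   ${clearing_price:6.2f}")
# With skewed valuations, the mean sits well above the marginal WTA,
# so averaging survey valuations would overstate the competitive price.
```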

This does not necessarily constitute a roadblock, particularly if there exist benchmark markets or time periods reflective of competitive transactions. Although differences in privacy terms may be detectable, they are likely to have multiple dimensions: the extent of information collected, how it is shared, with whom it can be shared, and what is done with it. It is conceivable that the valuation methodologies described above could be used in this more concrete comparison to put an average value on that difference, which, if representative of affected consumers, would provide some evidence of potential harm and a measure of damages.

My Personal Data as Your Engine of Market Power

The private data that a firm might obtain from consumers can also contribute to its market power. We have focused on the consumer data side of the platform business, which often serves as the foundation on which platform companies build their profit-making enterprise. On the other side of this market paradigm are business customers, who might be looking to identify high-value targets for advertising and other purposes. The personal data provided by users upon registering is perhaps less important from a value perspective than the subsequent data collected during regular online activities. Here, information on the network of links between users and the characteristics of the information they provide, view, share, and like in the context of their activities on the platform provide information of potentially high value to marketers. This is also information that competing firms might not be able to easily replicate.

Recent literature debates whether there are increasing returns to scale in the volume of this type of information available to a platform. Advances in the technologies required to marshal the vast amounts of data available and in the machine-learning algorithms that derive value from those data have potentially important competitive implications. Ten years ago, technological limitations may have put firms with relatively modest troves of big data on a similar footing with larger companies, simply because the technology to extract insight from larger data stores was in its infancy. Today, that equation may be changing. If a competitor with a larger platform can generate more precise and valuable targeting by virtue of the volume of private data it can leverage, then it could potentially achieve an unassailable advantage over competitors.

Although these economics suggest a few dominant firms would emerge with substantial market power on the advertising side of the business, the implications for consumers on the other side of the business could be more mixed. On one hand, consumers would face fewer large platform companies, raising the issues discussed previously. On the other hand, gaining a larger network relative to competitors could be disproportionately valuable to the platform, and it could be profit-maximizing for a firm to grow its network even at the cost of offering users higher compensation (e.g., better service, more privacy protections).

Possible Remedies for Privacy-Related Harm

Remedies can take the form of financial compensation for injured parties or structural relief that reduces the ongoing injury. If the value of privacy can be accurately measured, this suggests that it is possible to assess the harm to consumers from anticompetitive practices that reduce consumers’ privacy and remedy those harms through financial compensation. Structural remedies may be more complex because of the potential for unintended negative consequences.

Before remedies can be assessed, the effects of the potentially anticompetitive conduct must be fully understood. In many cases, the firms that are collecting, sharing, and selling consumer information are also providing goods and services, based on this information, that consumers find useful. From a consumer welfare perspective, any antitrust enforcement actions that enhance consumer welfare with respect to privacy must be balanced against any resulting increases in the price or reductions in the (non-privacy-related) quality of the products provided by the firms.

For example, prohibiting a company from collecting, sharing, or selling some types of personal data might lead to a reduction in the (non-privacy-related) quality of the service provided or an increase in price, since the cost of the service is no longer subsidized by consumers’ personal information. That is, if consumers desire the same services with the same quality for no monetary cost, the “price” in terms of their personal information may be the same under competitive conditions. In some cases, the company may not even be viable without the type of data collection that raises the privacy concerns we consider here. Whether enforcement actions that potentially reduce or eliminate goods or services from the market would be a consumer welfare-enhancing move would need to be determined on a case-by-case basis.
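A stylized break-even calculation illustrates the trade-off. Suppose, hypothetically, that a platform spends $6 per user per month to provide its service and earns $7 per user per month from data-driven advertising; a prohibition on that data use implies either a new monetary price or a quality reduction:

```python
# A stylized break-even sketch; all figures are hypothetical.
cost_per_user = 6.00          # monthly cost of serving one user ($)
data_revenue_per_user = 7.00  # monthly data/ad revenue per user ($)

current_price = 0.00          # the service is "free" to consumers

# If data monetization is prohibited, the platform must recover costs
# directly from users (or cut costs by reducing quality).
break_even_price = cost_per_user                  # just covers costs
margin_preserving_price = data_revenue_per_user   # preserves prior margin

print(f"Current monetary price:  ${current_price:.2f}/month")
print(f"Post-remedy break-even:  ${break_even_price:.2f}/month")
print(f"Margin-preserving price: ${margin_preserving_price:.2f}/month")
# Consumers gain from the remedy only if the value they place on the
# recovered privacy exceeds the new monetary price they must pay.
```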

Further, for many of the services that may represent privacy-related antitrust concerns, much of the value to consumers comes about explicitly because the firm that provides the service has captured the largest fraction of potential users. For example, much of the utility a consumer derives from a dominant social networking platform comes about because the consumer’s family, friends, and colleagues are also on the platform. That is, the large market share of the social networking platform that gives rise to antitrust concerns also creates network effects that benefit consumers. Limitations on personal data collection may lead this social networking platform to reduce the quality of its service or increase its (monetary) price. This in turn may drive consumers away from the network, reducing consumer welfare.

In theory, new social networks that offer better privacy protection could arise. However, while there are few technological barriers to entry for many of these services, it will be difficult for new firms to enter and compete once consumers have coordinated on an existing network. To the extent that there are any tendencies toward natural monopoly, the traditional response would be to regulate the firm like a utility, but it is not immediately clear how this regulation would be implemented for technology and online companies. Again, which enforcement actions—if any—would enhance consumer welfare would need to be determined on a case-by-case basis.

Overall, the key to privacy-related consumer welfare concerns in any antitrust case will be to determine the value of privacy and personal information to consumers. We cannot tell if people are overpaying in terms of their privacy if we do not know the price they have paid. This is an unsettled area in both law and economics in which we can expect to see rapid developments in the near future.

Garrett Glasgow has testified as an expert witness and has consulted on a variety of consumer protection and antitrust cases, including cases related to privacy violations and the unauthorized sharing of personal information. Chris Stomberg has testified as an expert witness and consulted on a wide variety of issues, particularly in the life sciences, and has offered testimony in cases involving alleged breaches of personal financial information.
