Traditional Divide Between Consumer Protection and Antitrust
Prior to the passage of the Wheeler-Lea Act in 1938, the Federal Trade Commission’s enforcement policy was that false and misleading statements by firms constituted an unfair method of competition (UMC) in violation of Section 5 of the FTC Act. The FTC’s position, with which many courts agreed, was that false and misleading statements harmed competitors who truthfully advertised their products. For example, in the 1922 case FTC v. Winsted Hosiery Co., the Supreme Court found that a company that misled consumers into thinking its knit goods were made entirely of wool had committed a UMC. The Court held that false and misleading statements harmed competitors because “the business of its trade rivals who marked their goods truthfully was necessarily affected by that practice.”
The Court reversed this trend in its 1931 decision in FTC v. Raladam Co., which involved false and misleading statements about an obesity cure. The Court stated that one could not simply assume that false and misleading statements injured competitors; rather, “the unfair methods must be such as injuriously affect or tend thus to affect the business of these competitors.” The Court emphasized that “[u]nfair trade methods [such as false advertising] are not per se unfair methods of competition,” and that “[s]omething more substantial than [conjecture] is required as a basis for the exercise of the authority of the Commission.”
In 1935, Senator Burton Wheeler introduced a bill in the Senate Committee on Interstate Commerce to amend the FTC Act. The Committee Report noted that, under the Court’s decision in Raladam, there would be no violation of the FTC Act if “all competitors in the industry practiced the same unfair methods,” because in such a world none would be injured by the deceptive practice. In 1938, Congress passed the Wheeler-Lea Act, conferring upon the FTC the power to proceed against companies engaged in “unfair or deceptive acts or practices” (UDAP).
Since 1938, the FTC has largely treated UDAP conduct quite differently from UMC conduct. The two are enforced by different bureaus of the FTC and carry different legal standards of proof. UDAP requires harm to individual consumers, even absent harm to the competitive process. Specifically, the UDAP provision states that an act or practice is unlawful if it (1) “causes or is likely to cause substantial injury to consumers,” (2) “which is not reasonably avoidable by consumers themselves,” and (3) is “not outweighed by countervailing benefits to consumers or to competition.” There is no requirement that the act in question harm consumer welfare in the antitrust sense; such a requirement would prevent the FTC from reaching unfair or deceptive practices by firms without monopoly power. In contrast, UMC generally requires harm (or likely harm) to the competitive process. The FTC’s 2015 Statement of Enforcement Principles Regarding “Unfair Methods of Competition” Under Section 5 of the FTC Act makes clear that, in determining whether a violation has occurred, “the Commission will be guided by the public policy underlying the antitrust laws, namely the promotion of consumer welfare.”
While the FTC has considered privacy in UMC cases, it has not yet brought a case directly alleging privacy or data security as a competitive harm. Rather, in cases in which the FTC has identified potential consumer privacy concerns, such as the Facebook/WhatsApp and RadioShack merger reviews, it addressed them under UDAP through its Bureau of Consumer Protection.
The Push for Convergence
In recent years, a number of regulators, policymakers, and commentators have argued that privacy can raise antitrust issues, setting forth several theories of harm. The first theory, which treats privacy as a dimension of “quality,” may be the most readily acceptable way of incorporating privacy into traditional antitrust jurisprudence, because the antitrust laws clearly protect product quality as much as product price. Mark MacCarthy explains: “When differences in these privacy practices [such as providing clearer policy descriptions or allowing greater choice about data use] are valuable for consumers and a basis for choice among competing products or services, they are a dimension, aspect or parameter of competition.” Last year, Assistant Attorney General Makan Delrahim expressed the view that “privacy can be an important dimension of quality” and that “non-price factors like innovation and quality are especially important in zero-price markets.”
Given the view that privacy is an important dimension of quality, some argue that firms have obtained or maintained dominant positions by misrepresenting the degree to which they will protect users’ data. For example, Dina Srinivasan argues that Facebook’s “promises of privacy were the deciding factors that tipped the early market in Facebook’s favor, away from MySpace.” She goes on to argue that Facebook’s allegedly false and misleading statements about privacy deceived consumers, “result[ing] in precisely the type of harm that antitrust law concerns itself with—the exit of rivals and the subsequent extraction of monopoly rents in contravention to consumer welfare.” Such “harm is not speculative, it is complete.” Specifically, once competitors had been driven out of the market, “Facebook’s monopoly was complete due to the exit of competition combined with the protection of the barrier to entry that results from a product with over a billion users on a closed communications network.” Facebook then allegedly “leveraged its market power in a consolidated market to successfully degrade privacy to levels unsustainable in the earlier competitive market when market participants were subject to consumer privacy demands.”
Srinivasan further argues that a company’s (specifically, Facebook’s) ability to misrepresent or circumvent prior privacy protections demonstrates an “inelasticity of demand” for its product and may constitute a “direct showing of monopoly power under Section 2 of the Sherman Act.” On this view, consumers have no alternatives because of the exit of competitors, high barriers to entry created by direct network effects, and high switching costs (particularly in closed communication systems like Facebook’s).
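The economic intuition here tracks the standard textbook relationship between demand elasticity and market power, the Lerner index. The following is a conventional illustration only, treating privacy degradation as a quality-adjusted price increase; it is not a formula drawn from Srinivasan’s article:

\[ \varepsilon_d = \frac{\partial Q}{\partial P} \cdot \frac{P}{Q}, \qquad \frac{P - MC}{P} = \frac{1}{|\varepsilon_d|} \]

For a profit-maximizing firm, the markup over marginal cost rises as demand becomes less elastic. If users do not defect even as a platform degrades privacy, the implied \(|\varepsilon_d|\) is low, which is why such evidence is offered as a direct indication of monopoly power.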
Even as they argue that platforms should protect user privacy, some commentators contend that certain dominant platforms invoke privacy laws as pretextual justifications for denying rivals access to data. For example, the UK Competition and Markets Authority recently stated in its final report on online platforms and digital advertising:
We have also heard concerns that large platforms use data protection regulations such as the General Data Protection Regulation (GDPR) as a justification for restricting access to valuable data for third parties, while retaining it for use within their ecosystems, thereby consolidating their data advantage and entrenching their market power.
Privacy and the Market for Lemons
In his seminal 1970 paper, Nobel Laureate George Akerlof developed the now well-established theory of the “market for lemons.” Akerlof used a stylized used-car market to illustrate how a market might fail when buyers have so little information about the quality of a particular used car that they must assume its quality equals the market average. The market is one “for lemons” because defective used cars are colloquially called lemons; a famous early use of the term in this sense was a 1960 Volkswagen ad that said, “We pluck the lemons; you get the plums.”
In his article, Akerlof argued that sellers of above-average used cars withdraw from the market because the prevailing price, which reflects only average quality, does not compensate them for their better cars. Their withdrawal lowers the average quality of the cars that remain, buyers revise their expectations downward, and the process repeats until only the lowest-quality used cars are left. Having demonstrated this market-failure consequence of asymmetric information, Akerlof concluded that in markets in which quality is difficult to gauge, “dishonest dealings tend to drive honest dealings out of the market.” After the article’s publication, “lemon laws” were enacted allowing customers to return cars that fail within a grace period after purchase.
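The unraveling mechanism can be captured in a stylized textbook version of Akerlof’s model; the specific numbers below are illustrative assumptions, not Akerlof’s own notation. Suppose quality \(q\) is uniformly distributed on \([0, 1]\), a seller values a car of quality \(q\) at \(q\), and a buyer values it at \(\tfrac{3}{2}q\). At any offered price \(p\), only sellers with \(q \le p\) are willing to sell, so

\[ E[q \mid q \le p] = \frac{p}{2}, \qquad \text{buyer’s expected value} = \frac{3}{2} \cdot \frac{p}{2} = \frac{3p}{4} < p. \]

Because the buyer’s expected value falls short of every positive price, each candidate price unravels to a lower one, and in this stark version no trade occurs at all, even though every car is worth more to buyers than to sellers.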
For our purposes, we assume users care enough about privacy to alter their behavior. Even in such a world, one would still expect to observe privacy violations in the most competitive markets. Because users may not be able to readily observe quality differences between platforms, information asymmetries can prevent even competitive markets from disciplining privacy violations. It is for this reason that one sees violations of customer expectations in markets with virtually no barriers to entry, such as used cars, retail brokerage, and counterfeit goods, all markets in which quality cannot be directly observed at the time of purchase.
The idea of privacy as a market for lemons was put forth by Tony Vila, Rachel Greenstadt, and David Molnar of Harvard University in a chapter of the book Economics of Information Security. They noted:
Recent survey data indicated that 92% of consumers are concerned about the misuse of their personal information online . . . and privacy concerns are the number one reason why individuals choose to stay off the Internet. . . . If we believe that people value privacy, why is there not an efficient market for it? . . . [W]e can think of a consumer choosing among websites that may respect her privacy (“Respecting” sites) or may not (“Defecting”) with no way to determine beforehand which is which. Then privacy in web sites looks like the lemons market.
In the case of both used cars and privacy, providers with and without monopoly power can make material misrepresentations without consumers detecting the violation at the time of purchase. And in the case of privacy violations, consumers may never determine whether a violation occurred. As a result, privacy violations occur even in perfectly competitive markets, and the existence of a violation says nothing about whether a firm has monopoly power.
This is evidenced by the fact that the FTC has sanctioned companies both large and small for misrepresentations involving privacy, including companies that are highly unlikely to possess monopoly power. For example, the FTC recently settled with Unroll.me, an e-mail management company with numerous competitors, for misrepresenting how it accessed and used consumers’ personal e-mails. Unroll.me was called “a tiny player in the personal data market” even before it faced a backlash for its data practices. Similarly, in 2016, the FTC settled with Very Incognito Technologies (Vipvape), a small manufacturer of vaporizers in a market occupied by hundreds of players, for its misrepresentations about participation in a privacy program. In the last year, the FTC also brought an enforcement action against Retina-X Studios, a monitoring-software company with a range of competitive substitutes, for misrepresentations about its data security.
The Importance of Pareto Efficient Gains from Trade
While privacy clearly matters to many people, it does not matter the same to all people. For this reason, it is not clear that the use of data in a way that seems inconsistent with a given user’s preference means that product quality is low. For example, FTC Commissioner Noah Phillips explained: “We have to look at ‘privacy-as-quality’ carefully. Given that consumers have different privacy tastes, where the value of an aspect of competition lends itself to polar disagreement—your privacy cost is my privacy benefit—identifying a lessening of competition for privacy may be difficult.”
In addition, “[p]rivacy concerns can vary dramatically for the same individual, and for societies, over time.” Studies also show that the majority of users do not read privacy policies, and there is some evidence of a “privacy paradox” in social media use: even the people who are most concerned about privacy often fail to take privacy-protective measures.
There are also dangers in relying on survey data (“stated preference”) as opposed to the actual trade-offs consumers make (“revealed preference”). For example, survey data show that consumers care about privacy, yet their revealed preferences suggest those stated concerns may be exaggerated. Moreover, at least one study indicates that Americans’ primary privacy concern is identity theft, and “[a]ddressing [identity theft] is different from (and sometimes in conflict with), say, control over data, or embarrassment, or other dignitary harms.”
Yet even taking consumers’ privacy concerns as genuine, a firm’s use of consumer data in a manner inconsistent with a given user’s expectations does not necessarily signal that the firm has monopoly power. There are numerous dimensions of quality on which consumers judge a product. For example, just as a customer might purposely choose a restaurant that offers worse food but better ambience, a user may choose an online platform that has less-than-desirable data practices but offers superior features, like the ability to share pictures with 1,000 friends simultaneously.
Indeed, the fact that ad-supported models sit side by side with pay models suggests not only that consumers have different preferences but also that they are willing to make trade-offs between value and privacy. For example, a consumer who finds ads invasive might be willing to pay Spotify $9.99 a month to avoid them, while a consumer who values $9.99 a month more than an ad-free experience will accept targeted ads. These gains from trade are the very essence of our market-based economy.
Furthermore, advocates for the ad-supported business model might argue that the gains from trade may be quite significant. According to third-party reports, Google’s average revenue per user (ARPU) is approximately $256 per year, while a study by an MIT economist finds that consumers value search at over $17,000 per year. Even if data is thought of as value extracted from users, one might argue that paying $256 a year for something worth over $17,000 is a relatively good bargain.
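Taking those figures at face value, the implied bargain reduces to back-of-the-envelope arithmetic; the inputs are the cited estimates, not independently verified here:

\[ \frac{\$17{,}000}{\$256} \approx 66, \qquad \$17{,}000 - \$256 = \$16{,}744. \]

On these numbers, the implicit “data price” is roughly 1.5 percent of the value consumers place on search, leaving more than $16,700 per user per year in consumer surplus.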
Misrepresentation as an Exclusionary Act
The fact that privacy resembles the market for lemons does not mean that firms should be permitted to make misrepresentations to consumers without consequence.
The contention that privacy misrepresentations can result in the unlawful acquisition or maintenance of monopoly power is essentially an argument that misrepresentation or fraud can serve as an exclusionary act. Even assuming courts were to adopt such a novel theory, under existing case law the deceptive conduct must still have both an anticompetitive effect and anticompetitive intent.
In United States v. Microsoft Corp., the D.C. Circuit held that Microsoft violated Section 2 of the Sherman Act because it “deceived Java developers regarding the Windows-specific nature of [its] tools” and that conduct “served to protect its monopoly of the operating system.” The court explained that “developers who relied upon Microsoft’s public commitment to cooperate with Sun and who used Microsoft’s tools to develop what Microsoft led them to believe were cross-platform applications ended up producing applications that would run only on the Windows operating system.” Internal Microsoft documents confirmed its intent to deceive Java developers and indicated “that Microsoft’s ultimate objective was to thwart Java’s threat to Microsoft’s monopoly in the market for operating systems.”
In a subsequent decision, Rambus Inc. v. FTC, the D.C. Circuit held that a patent holder’s assertion of a patent that it intentionally failed to disclose to a standards-development organization (SDO) does not violate the antitrust laws absent evidence that, but for the nondisclosure, the SDO would have incorporated a different technology into the standard. In other words, the deception must be the but-for cause of the exclusion.
Similarly, courts have held that a patent holder’s intentionally false commitment to license on fair, reasonable, and nondiscriminatory (FRAND) terms, made to induce an SDO to select its patented technology over alternatives, can violate Section 2 of the Sherman Act. Such a claim requires proof that, but for the fraudulent commitment, the SDO would have either adopted an alternative technology or specified no technology at all. As the court in Microsoft Mobile, Inc. v. InterDigital, Inc. explained: “Since this claim sounds in fraud, it must meet the Rule 9(b) pleading standards.” Rule 9(b) provides: “In alleging fraud or mistake, a party must state with particularity the circumstances constituting fraud or mistake. Malice, intent, knowledge, and other conditions of a person’s mind may be alleged generally.”
These decisions set an extraordinarily high bar. In the context of privacy misrepresentations, a plaintiff would have to prove that the firm made intentionally false statements about its future conduct, that consumers incorrectly believed their data would be protected, and that, had consumers known otherwise, a sufficient number of users would have chosen a different product or service such that the market would not have tipped in the defendant’s favor.
Data as a Barrier to Entry
The argument that privacy violations allow firms to obtain monopoly power turns on the notion that data is a barrier to entry. According to Inge Graef:
In the online environment, personal information has become [an essential facility,] a raw material or necessary input for companies that are employing business models dependent on the acquisition and monetisation of data. . . . By keeping the vast quantities of collected data to themselves, incumbent providers . . . are able to foreclose competition from new entrants and companies that would like to develop complementary services but do not have access to the required information.
The basic argument is that data is the “new oil.” “The world’s most valuable resource is no longer oil, but data” and large quantities of “data changes the nature of competition,” acting as “protective moats” and “providing barriers” to entry.
Scholars such as Catherine Tucker argue that data is not a traditional barrier to entry. Tucker notes that data is non-rivalrous: two firms can hold the same data, and one firm’s use of it does not leave less data for the other. Similarly, Anja Lambrecht and Tucker apply a classic framework, the “resource-based view of the firm,” under which big data can provide a competitive advantage only if it is inimitable, rare, valuable, and non-substitutable. Their analysis
suggests that big data is not inimitable or rare, that substitutes exist, and that by itself big data is unlikely to be valuable. There are many alternative sources of data available to firms, reflecting the extent to which customers leave multiple digital footprints on the internet. In order to extract value from big data, firms need to have the right managerial toolkit.
In addition, as Nils-Peter Schepp and Achim Wambach explain, “[P]otential competitors do not necessarily have to build a dataset equivalent to the size of the incumbent. . . . They rather need to find ways to accumulate highly relevant data to build a competitive, not necessarily the same dataset.” “[T]he origin of many innovative start-ups illustrates that companies with smaller but possibly more specialized datasets and analytical expertise may be able to challenge established companies.” It is also important to keep in mind that “[e]ntering the market and then collecting and analyzing user data is not a theoretical approach but rather the very model followed by many of the leading online firms when they were startups or virtual unknowns, including Google, Facebook, Yelp, Amazon, eBay, Pinterest, and Twitter.” “Other, recent examples . . . include Pinterest, SnapChat, Etsy, and TikTok, all of which started without a user base or data, yet became successful despite facing incumbents with much larger user and data footprints (e.g., Facebook, YouTube, Google, Amazon etc.).”
Unintended Consequences of Regulation
The analogy to the market for lemons suggests that the answer to privacy concerns is more regulation. And certainly there should be penalties for misrepresenting stated privacy policies, much as there are penalties for misrepresenting certain unobserved qualities: for example, that paint is lead-free or that a bagel is gluten-free.
But there are some real questions as to whether privacy-specific regulation will have beneficial effects. Following the enactment of privacy regulations such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), along with users’ increased privacy expectations following events such as the Cambridge Analytica data scandal, companies have made a number of changes to their privacy policies. These include decisions not to share certain user data or to share user data in limited ways (e.g., only in the aggregate).
Passed in 2002 and amended in 2009, the European Union’s ePrivacy Directive requires websites to obtain users’ consent before setting any cookies that are not strictly necessary (and to allow subsequent withdrawal of that consent); to provide accurate and specific information about the data being tracked before obtaining consent; and to allow users to access a website’s services even if consent is not given. Two months after the GDPR became enforceable in May 2018, the number of third-party cookies used per webpage among 200 news publishers in the European Union had declined by 22 percent. Significantly, the GDPR classifies cookies that can identify an individual via her device as personal data; companies must therefore allow consumers to withdraw consent as easily as they gave it and make it possible to reject certain cookies. The proposed ePrivacy Regulation would build upon and tighten the existing rules governing the use, transparency, and disclosure of third-party cookies.
While some have argued that companies have used the GDPR and CCPA as pretextual excuses to harm their rivals in digital advertising, a problem with this argument is that some of the first companies to block third-party cookies were those without digital advertising businesses. For example, as of March 24, 2020, Apple’s Safari browser blocked all third-party cookies by default for all of its users, a move soon followed by similar commitments from Mozilla’s Firefox.
Further, it may be challenging to determine whether compliance concerns were genuine or pretextual without peeling away the attorney-client privilege. While courts have not explicitly addressed the issue of privacy, by analogy, the First Circuit recently noted: “Several circuits have identified a defense to antitrust liability where the defendant’s action was taken as part of a good faith, reasonable attempt to comply with a regulatory scheme.”
Conclusion
Regardless of which side of the debate one finds more persuasive, it is clear that many consumers care about how their data is used and that many firms have attempted to address those concerns through privacy policies and adherence to applicable law. A strong argument can be made that consumers should be able to hold firms accountable when those policies are breached, but an equally strong argument can be made that the antitrust laws are not the appropriate vehicle for that social good. Whichever law is used, care should be taken to ensure that it does not produce unintended consequences of the kind some have attributed to the GDPR.