The Antitrust Source | August 2023

Generative AI and Guidance on Abusiveness May Illuminate a New Focus on “Dark Patterns” for Enforcement and Related Consumer Research

Andrew Stivers

Summary

  • Consumer protection enforcement authorities are showing a new interest in “dark patterns,” which they describe as manipulative design practices that subvert consumer choice, particularly in digital settings.
  • This interest likely stems from changes in information technology and the introduction of generative AI that change how markets are designed and how firms interact with consumers.
  • These marketing changes have the capacity to amplify negative effects of marketing practices that have not traditionally been deemed violative but that fall within the definition of “dark patterns.” Consumer protection authorities appear to be interested in using deception, unfairness, and abusiveness frameworks in this context.
  • While the use of the dark patterns label may implicate a much wider, and more subtle, range of practices, the toolset to assess their effects on consumers, including surveys and market outcome analysis, remains the same.

Consumer protection agencies have been signaling a new focus on “dark patterns,” representatively defined by the United States Federal Trade Commission as “design practices that trick or manipulate users into making choices they would not otherwise have made and that may cause harm.” The FTC has begun to label some allegedly deceptive or unfair practices as dark patterns in recent cases and rulemakings. For example, the Commission argued that cancellation procedures that included attempts by the firm to “save” the sale, or that are not prominently featured, are both dark patterns and violations of Section 5 of the FTC Act. In their joint concurring statement in support of new rules on the use of negative options, the FTC Chair and Democratic Commissioners condemned “using ever more sophisticated dark patterns to thwart consumer efforts to cancel a product or service.” The FTC also filed a complaint against Amazon in June of this year, alleging the use of dark patterns to induce “nonconsensual” sign-up, and stymie cancellation, of its popular subscription service Amazon Prime.

While nothing in the above-referenced activity necessarily raises new interpretations of the FTC’s deception and unfairness authorities, the Commission’s report outlining its staff views on dark patterns, Bringing Dark Patterns to Light (the “Report”), emphasizes that a very wide range of design practices inherent in how products are marketed to consumers could be deemed to violate consumer protection rules. The Report suggests a more general interest in “manipulative” practices that are not clearly tied to current interpretations of deception or unfairness. In addition to the definition provided above, the Report refers to dark patterns as “design tricks and psychological tactics [omitted examples] to get consumers to part with their money or data.” This arguably could be applied to all marketing practices, including strategic design of a product’s attributes (e.g., eye-catching, if functionally irrelevant, olive oil bottle shapes that may serve to mask differences in size), sales pitch or presentation (e.g., candy in a rack by the checkout line that may induce impulse purchases by tiring parents), and pricing (e.g., prices ending in “.99” that may induce an outsized demand relative to the fractionally higher rounded price).

From a consumer awareness standpoint, flagging the fact that transactions can be designed to capture more of the gains from trade for the seller can be valuable. Savvy consumers should approach purchases with a mindset to detect and counter such efforts. However, there has been little specific guidance on enforcement standards by the FTC as to whether “dark patterns” indicate a truly new and more aggressive area of violative practices in consumer protection. As posed by an earlier article on dark patterns: “When does the use of advertising to ‘manipulate’ consumers become an unlawful dark pattern?”

The most immediately practical answer is, of course, when a practice labeled as a dark pattern is deceptive or unfair under current statute, policy statement, or past law enforcement practice. From that perspective, the use of the dark pattern epithet is simply a marketing practice used by consumer protection agencies. However, practitioners may also want to know where consumer protection agencies may want to go with dark patterns. That is, where might those agencies attempt to push future statutes, policy statements, and law enforcement practice, and why? This article examines this question through two trends that are changing some of the fundamental economics, and thus practice, of marketing.

First, changes in information technology are driving changes in the organization of the retail marketplace, most notably in the shift to online retailing and advertising. In that context, consumer protection agencies might believe that practices that had been previously deemed to be merely “sharp”—business practices that are bad for consumers but not harmful or unavoidable enough to be illegal—should be recategorized as illegal if markets are now less competitive, as some regulators and advocates seem to think. Second, and more recently, the rise of interactive Artificial Intelligence (AI) tools like ChatGPT will likely accelerate the use of mass individualized marketing. As defined by Google, “Artificial Intelligence is a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.” As discussed below, this means that AI can be used to automate at feasible cost the process of one-to-one bargaining over all aspects of a transaction, including material attributes of the product, presentation of that product, and pricing.

Background on the Report

The Report defines and categorizes the types of dark patterns seen most commonly in e-commerce. Examples in the Report’s introduction include “pre-checked boxes, hard-to-find-and-read disclosures, and confusing cancellation policies.” Other examples given include hidden fees, pre-populated shopping carts, and unnecessarily complicated cancellation processes.

The body of the Report is organized into four sections. Two of these sections—“Design Elements that Induce False Beliefs” and “Design Elements that Hide or Delay Disclosure of Material Information”—focus on “deception through design,” where the practice would be intended to manipulate consumer interpretation or understanding of content. These sections lay out practices that have long been understood to be potentially deceptive—hidden fees, burying key disclosures, false reviews, and practices intended to induce unwarranted credulity. They also include examples of practices, made easier to implement in the digital context, that are deceptive in themselves, including fake countdown timers, fake limited-supply notices, fake popularity flags, and fake reviews.

The third section is titled “Design Elements that Lead to Unauthorized Charges.” In the context of FTC enforcement, these design practices may be characterized as “unfairness through design (online),” where there is some practice that does not implicate consumer beliefs (except by omission) and meets the other requirements of unfairness—substantially harmful, unavoidable, not outweighed by benefits. The FTC in fact highlights a variety of potentially unfair practices in this section, including an in-app purchase process that does not affirmatively require the account holder to consent and obstructive cancellation processes. In general, this section flags practices that “subvert [user] autonomy or impair decision-making.” The Report does not cleanly separate out deception from unfairness (in fairness, because they cannot be cleanly separated out), as in addition to unfair practices it also discusses deception in the context of “free trials.” While the Report may intend to flag the offering of free trials as itself a “dark pattern” (one that may be unfair regardless of any overt deceptive statements, on the theory that customers may be more likely to fail to cancel at the end of a trial period), its examples are of deception about what the free trial really is.

Finally, the fourth section is subject matter specific. In this section, titled “Design Elements that Obscure or Subvert Privacy Choices,” the FTC leans into presumptions that privacy is per se material and that consumers are better off defaulting to more restrictive choices. The discussion lays out ways that design: (a) may not put privacy choices front and center (e.g., failing to provide choice, or providing choice with incomplete, confusing, or buried choice dashboards); (b) may hassle consumers if they do not make the firm’s preferred choice; or (c) may “nudge” consumers to share information through pre-selected checkboxes or the like.

The Report generally emphasizes that presentation, context, and design matter for FTC inquiries. This is not a new stance for the FTC, which noted in its 1983 policy statement on deception that the “entire advertisement, transaction or course of dealing will be considered.” The FTC’s recent complaint against Amazon Prime provides an illustration of how that longstanding policy maps onto the new dark patterns nomenclature. The complaint alleges six design practices within Prime’s sign-up and cancellation processes that it labels dark patterns: “forced action,” “interface interference,” “obstruction,” “misdirection,” “sneaking,” and “confirmshaming.” The complaint also provides granular detail (roughly 50 pages of the complaint) on the entire design of those processes. Notably, however, the Commission still frames its complaint in terms of violations of Section 5 unfairness and violations of the Restore Online Shoppers Confidence Act (ROSCA).

The question is, what does the use of the dark patterns framework do, and why elaborate on this issue now? What does that imply for future enforcement?

Why Dark Patterns Now?

At least in part, the Commission itself is likely engaged in marketing. “Dark patterns” is a simple and evocative name, relatively recently adopted by advocates and scholars in the field of digital markets, for a complex and amorphous set of contested marketing practices. Its use may be intended as a signal that the Commission is listening to those advocates. However, the underlying trends in those digital markets—intensive use of consumer data and integration of generative AI into marketing campaigns—may also motivate concrete attempts to change consumer protection enforcement.

Information Technology is Changing how Markets are Designed

As a foundational matter, the new emphasis on “patterns,” or design, is a function of the plasticity of digital marketplaces relative to their brick-and-mortar analogues. Virtual markets allow more, and cheaper, ways to present and price products, as noted by the Report. This increased salience of design practices is also shown by the inclusion of a specific prohibition against dark patterns in the European Union’s new regulation for online commercial activity, the Digital Services Act. That flexibility allows online firms to innovate quickly to meet consumer needs and increase the value of their offerings. It may also allow some firms to experiment more cheaply with aggressive, strategically sophisticated designs that increase profits at the expense of consumers. The ambiguity of which dark patterns will be deemed illegal could mean that even attempts by scrupulous sellers to optimize their marketplaces while steering clear of deceptive or unfair practices through the testing and monitoring of consumer outcomes could trigger closer investigation. Even more worrisome for experimentation, there have been cases where imperfect attempts to monitor potentially harmful outcomes during the experimentation process have been used by the FTC to justify imposing liability for alleged failures.

In the context of these digital markets, a significant focus by policymakers has been the collection and use of consumer data. Consumer behavioral data has been a driver of the growth in digital markets. Such data allow companies to design and target marketing to audiences that would be most likely to respond to it, with a variety of benefits to both firms and consumers. The collection and use of such data also creates risks for consumers, both in terms of its availability for exfiltration and use by criminals, and for use by legitimate market actors in ways that weaken consumers’ bargaining power. In this way, the increased importance of consumer data has amplified the FTC’s concerns about dark patterns. For example, the Commission, and some scholars, have noted that companies could choose to use consumer data to identify audiences susceptible to manipulation and design campaigns to exploit that susceptibility. Narrowly targeting that population then could theoretically both increase click-through rates in some harmful way and minimize detection risk from employing potentially deceptive, unfair, or socially disapproved practices.

AI is Changing how Firms Market to Individuals

Most recently, advances in machine learning and generative AI may signal to market regulators a qualitative and quantitative shift in the technology of “manipulation” that could further increase the range and magnitude of possible harms and raise the costs of detection. The use of generative AI in individualized marketing mirrors, in some ways, how retail market interactions worked pre-mass-marketing and pre-industrialization. In that era, each transaction would often be individualized, with its own price, pitch, and product characteristics, dependent on personal relationships (i.e., consumer data) between buyer and seller. Bargaining was often a given, but prices and other attributes were also typically governed by both strict market regulations and less formal, but community enforced, social norms. By contrast, for most adults in the United States today, mass-markets with uniform (at least locally) presentation, product attributes, and posted prices have been the norm. The pre-industrial prominence of price controls and strictly enforced social norms has been largely replaced by antitrust and consumer protection regulation, which focus on preserving the broad market conditions necessary for fair, functional markets. In terms of social norms about disapproved, if still legal, marketing practices, relatively uniform marketing practices can potentially be policed by informed, expert consumers and reviews. This modern marketing environment necessarily informs consumer expectations and strategies for navigating purchases.

The possibility of truly mass individualized marketing that makes every transaction potentially unique and adaptable may confound those expectations or raise significant new issues for enforcement agencies. On an administrative level, detecting violations and establishing likelihood of harm is likely to be more difficult. While the FTC has some tools for prosecuting the use of faulty algorithms—for example, in some contexts, the Fair Credit Reporting Act imposes performance standards on screening algorithms—the dynamism and endogenous variability of AI are likely to create difficulties similar to those of establishing systemic violative practices among, for example, a staff of car salespeople. Screening algorithms in credit, employment, or tenant screening have been complex, but still relatively static, uniformly applied sets of instructions. Intelligent agents with natural, or artificial, agency do not necessarily have or need such documentation, nor such uniformity in applying standards.

Beyond administrative issues and individualization, substantively the FTC may be concerned about new practices that AI could enable. For example, does AI masquerading as human—the “chat” AIs that have sparked much of the current discussion are specifically designed to pass as human—raise concerns about misplaced expectations of trust that might require disclosures in marketing contexts?

The answer is unclear, although consumer protection authorities have already raised concerns. While current expectations are changing under exposure to such technologies, research has found that consumers are more likely to trust automated systems than humans with their personal data. More generally, many people appear to somewhat blindly trust new technological solutions. On the other side, research has also found that the social aspects of a sales interaction can affect purchase intention. Trust in a social interaction, including a market interaction, is understood to be based on social and moral foundations, but such foundations need not be present in AI. This suggests both that the perceived humanity of an AI could be used to influence consumers, and that AI could be directed to exploit unearned trust. As with traditional marketing practices, the effects on consumers would need to be measured and assessed in context to understand whether consumers were being harmed in some way.

Second, in that vein, the machine learning that underlies AI could unearth and bring into play previously unexploited dimensions of a transaction. Contracts are practically, and optimally, incomplete, meaning that to complete transactions, we rely on an undefined set of assumptions over at least some aspects of the good or service at issue. That is, consumers are typically unaware not just of the quality of most of the technical attributes that define a complex modern consumer product but also of their existence at the time of purchase. Some of these will be immaterial to consumers, and merely represent a neutral efficiency from which the seller can profit. Others could be material if revealed but are not independently sought out by many consumers as salient. For these latent characteristics, standard assumptions about competitive pressure may not hold. AI may then discover and strategically bring into play variables that consumers do not realize are subject to manipulation. That is, there may be strategic bargaining chips that consumers do not realize are on the table, and might not agree to negotiate over if they did. More prosaically, AI has no shame, nor would it tire in pursuing profit opportunities. For example, AI may be more likely to impose an unrelenting stream of “would you like fries with that” upselling, at least where it believed there to be enough chance of success to offset the annoyance.

This last example again highlights the problem with regulating “manipulative” design. More aggressive marketing may be annoying for some consumers, and they may be willing to pay to avoid it—i.e., they are worse off being subject to it. But other consumers will positively respond, and be happier for it, and the company could be responding to data that shows that consumers regret failing to purchase the add-on. For example, reminders or suggestions to purchase extra batteries or charging accessories could be valuable to new phone purchasers. Again, measuring and analyzing consumer outcomes for the specific practices are likely the only way to know.

These ambiguities, although parallel to those found in traditional standards and cases for deception and unfairness, appear closer to the line. While they provide some insight into the areas and practices that the agencies may scrutinize, they do not provide any argument or path for how more expansive and aggressive applications of consumer protection rules might proceed.

The Likely Role of “Abusiveness” in Addressing Dark Patterns

There are two questions to address in considering the potential path that consumer protection agencies may take. First, what kinds of practices might the FTC view as concerning enough to want to address, if it had the right tools? Second, what would the right tools look like? The discussion on AI and consumer data suggests that practices that could be targeted may lie in the grey area of sharp practices that may have traditionally been viewed as negative but had not risen to the level of deceptive or unfair as defined by current legal standards and policy. For example, high pressure sales pitches are typically not considered illegal unless used to further some other deceptive, or unfair, practices. Another example of a common commercial practice that has become more contested and is related to the shift to digital platforms is self-preferencing—promoting a product or service because it serves the seller’s interest independent of, or in opposition to, the buyer’s interest. More generally, there have been questions raised about a seller’s elevation of their own interests versus the interests of the buyer.

In the American marketplace, which at least conditionally accepts the role of self-interest in motivating market actors, the question of which dark patterns might be illegal could be posed as: which self-interested design practices are harmful “manipulation” (“to change by artful or unfair means so as to serve one’s purpose”), versus those that are merely self-interested “curation” (“the act or process of selecting and organizing for distribution”)? Self-interested strategic design is generally presumed in market interaction because market actors are presumed to look out for their own interests, within some acceptable bounds. The outer bounds of that acceptability are determined by current law enforcement and regulation, but there is also the grey area of social norms that have typically been enforced through competition and reputation.

As noted by legal scholar and former head of centralized federal regulatory review Cass Sunstein in a paper on manipulative marketing practices, “[w]e might also think that manipulation falls in a category of actions properly promoted or discouraged by social norms, but properly unaccompanied by law or regulation.” As with any other social interaction, the bounds of acceptable practices are judged and driven by social norms. Norms are informal, but they are expected to be followed, generally come with some consequence when they are not, and exist to help people navigate their interactions, including in the marketplace.

Policing these norms through formal regulations is difficult, and has typically been left to competitive and reputational pressures because of at least two major concerns:

One reason, of course, is vagueness. Many norms are directed against conduct that is not and cannot be defined with sufficient precision for law. This problem raises serious concerns: people will not have fair notice about what they can and cannot do, and enforcement authorities will have undue discretion to pick and choose. In ordinary language, and even as elaborated by the most careful philosophers, manipulation cannot easily be the foundation for criminal law and regulation. Another reason is overbreadth. Many categories of bad conduct include a large assortment of actions, some of which are not so bad as to warrant punishment, and some of which might not be bad at all in the circumstances.

The same issues arise in the FTC’s core authority of policing against deceptive claims in language. The flexibility of language weighs against regulators’ ability to formally define what can and cannot be said in the marketplace. However, if that language use is deemed to be harmful enough, the potentially significant costs involved in articulating and demarcating the violation can become worthwhile. Similarly, if policymakers believe that the competitive safeguards against manipulative design have weakened and the potential harm stemming from such design has grown, that may justify, at least in the view of consumer protection enforcers, bringing complaints against practices that had previously been out of effective reach of deception or unfairness legal standards.

The complaint against Amazon provides a glimpse into how this might work by articulating six dark patterns that it alleges contribute to violations of ROSCA or the FTC Act.

Three of these practices appear to map directly to traditional understanding of deception or deceptive omission. The most obvious of these is “sneaking,” which the Commission defines in the complaint as “hiding or disguising relevant information, or delaying its disclosure.” The second, “interface interference”—“a design element that manipulates the user interface in ways that privilege certain specific information relative to other information”—appears to be the negative-space twin of sneaking, with an explicit reference to malleable digital user interfaces. The third dark pattern in this group—“misdirection”—appears to highlight the ability to combine a disclosure (or nondisclosure) with the option of acting on how information is disclosed. All three of these seem to refer to design practices that subvert the FTC’s guidance on providing “clear and conspicuous” disclosure of material terms, which the FTC has long articulated as deceptive.

The other three practices that the complaint labels as dark patterns may signal more clearly an effort to expand the use of FTC authorities to cover what had been considered sharp practices. These may be linked to pre-digital sharp practices and norms that the FTC has chosen to pursue in their digital form. The complaint defines “obstruction” as “a design element that involves intentionally complicating a process through unnecessary steps to dissuade consumers from an action.” As noted above, the effects of complicated processes are cheaper to test and implement in the digital space, but retail selling has long used them. “Forced action” is defined in the complaint as “a design element that requires users to perform a certain action to complete a process or to access certain functionality.” This is the digital version of “would you like fries with that?” upselling that consumers have been subjected to since long before the shift to digital markets. Finally, the complaint invokes “confirmshaming” as “a design element that uses emotive wording around the disfavored option to guilt users into selecting the favored option.” While digital versions of this practice (using different emotional levers) are typically more subtle than the trope of salesmen framing less profitable choices to men as less masculine to direct their choice, the intent is the same.

These latter practices in particular may get more attention from enforcers in a digital context because, when implemented in software, the practice and its effect are cheaper to detect than when done by individual salespeople. However, at least the final two of these practices still differ significantly from practices that have traditionally been found to be unfair or deceptive. Sellers have been generally given wide latitude to design and implement their processes as annoyingly as they like, with the presumption that competition will reward those offering preferred—or perhaps less bad—processes and experiences. While the costs of detection and enforcement may be lower when these practices are online, it is not clear that the harms to consumers are higher. For example, the FTC states in its complaint that Amazon’s cancellation could be a “Four-Page, Six-Click, Fifteen-Option” process. For online consumers used to instant gratification, this may seem difficult, but for a consumer more used to the time and cost of canceling a service pre-internet—either navigating the post or placing a potentially expensive long-distance call—such a process is likely far cheaper.

The degree of harm or risk of harm associated with a dark pattern may still fall short of what is typically understood to support a legal claim of unfair or deceptive practices. In terms of a standard for deception and unfairness often invoked by consumer protection economists, the harm may fall short of making people worse off than if they had not entered into the transaction. That is, the contested practices could reduce the consumer’s benefit from a purchase relative to some less manipulative set of practices, without having affected the decision to purchase the product or the fact that the consumer still benefits more from that purchase than from available alternatives.

That looser standard for harm may align more closely with a consumer protection concept that the FTC does not currently have but that is wielded by its sister agency, the CFPB: abusiveness. The CFPB issued a policy statement on its abusiveness authority early in 2023 that summarized abusiveness as:

(1) obscuring important features of a product or service, or (2) leveraging certain circumstances to take an unreasonable advantage. The circumstances that Congress set forth, stated generally, concern gaps in understanding, unequal bargaining power, and consumer reliance.

The use of the phrase “unreasonable advantage” suggests that practices that benefit consumers, but not as much as they “reasonably” might expect, could be abusive. The circumstances identified in the summary of the policy statement—“gaps in understanding, unequal bargaining power, and consumer reliance”—would be unfair if consumers were significantly harmed by the practices (unavoidably, with no countervailing benefits). For example, if a firm omitted crucial information that would overturn reasonable assumptions by the consumer (e.g., an undisclosed extra charge for linens in a hotel room) such that the consumer would not have purchased the room but for the omission, then the practice likely meets the unfairness criteria. Similarly, a company that represented, or knew that it relied on, incorrect presumptions of aligned interest with consumers to induce transactions that would otherwise not have occurred could also be engaged in an unfair practice.

This suggests that for abusiveness to add any value to the enforcement tool set, it must bring an attenuated standard for harm, including, seemingly, the case where consumers simply could have done better, absent some set of contested practices. To the extent that consumers have expectations that explicit or implicit social norms of market practices would be followed, abusiveness as articulated by this policy statement could be a tool used against practices that are deemed dark patterns, but do not meet current standards for unfairness or deception.

What tools are available for businesses and practitioners to mitigate risks?

While these two trends—technological and regulatory—could both drive more law enforcement activity in this area, they still fall short of providing much guidance for what firms should do to mitigate both risks to consumers and risks of enforcement action against themselves. Ultimately, like other complex marketing activities that could produce both positive and negative outcomes, the basic questions in any given case are: which is it, and how strong is the effect?

Determining how contested practices do or do not affect behavior is an empirical question that can be evaluated using surveys or the analysis of consumer market behavior. In some cases, firms may already have data on consumer activity or perceptions that can be used to evaluate the effects of the implicated practices. For example, firms sometimes use experiments (often also called “A/B testing”) in the design, development, and deployment of marketing practices that can illuminate those perceptions or outcomes.
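
To make the mechanics concrete, the following is a minimal sketch, in Python, of how such an A/B test might be structured: users are randomly assigned to one of two design variants, and outcomes are tallied for comparison. The variant labels, helper names, and completion rates are hypothetical illustrations, not details drawn from the Report or from any particular firm’s practice.

```python
# A minimal, illustrative sketch of A/B test assignment and outcome logging.
# The variant labels, helper names, and completion rates are hypothetical and
# are not drawn from the Report or from any particular firm's practice.
import random
from collections import defaultdict

VARIANTS = ["current_flow", "streamlined_flow"]  # two candidate checkout designs

def assign_variant(rng: random.Random) -> str:
    """Randomly assign an incoming user to one design variant."""
    return rng.choice(VARIANTS)

outcomes = defaultdict(lambda: {"exposed": 0, "completed": 0})

def record_outcome(variant: str, completed_purchase: bool) -> None:
    """Tally exposures and completed purchases for each variant."""
    outcomes[variant]["exposed"] += 1
    outcomes[variant]["completed"] += int(completed_purchase)

# Simulated traffic with placeholder completion probabilities.
rng = random.Random(0)
for _ in range(10_000):
    variant = assign_variant(rng)
    base_rate = 0.30 if variant == "current_flow" else 0.33
    record_outcome(variant, rng.random() < base_rate)

for variant, tally in outcomes.items():
    rate = tally["completed"] / tally["exposed"]
    print(f"{variant}: {tally['exposed']} exposed, completion rate {rate:.3f}")
```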

Much of the underlying research reported in the Report relies on such testing and there is no reason similar work could not be undertaken in cases involving allegations of dark patterns. As with other false advertising matters, survey research can be designed to assess what consumers understand about the information they are shown, why they make the choices they make (e.g., what factors are material to their purchasing decision), and how those choices would differ if they had been presented with alternative information.

Surveys with an experimental design (essentially, a test and control design) are a standard and commonly accepted way to test for causality. Specifically, an experimental design would use a “but-for” world to determine whether, how, and to what extent consumers’ purchasing behaviors would change in the absence of dark patterns. This involves randomizing a set of consumers to navigate through a purchasing process as they normally would in the real world and randomizing a separate set of consumers (from the same sampling pool) to navigate through the same purchasing process, but in which the dark pattern(s) have been removed. By duplicating this environment “but-for” the dark pattern, and comparing the survey results across these two groups, we can assess the extent to which a single factor (here, the presence or absence of a dark pattern, or set of dark patterns) contributes to any differences observed in the outcome.
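
For illustration, a minimal sketch of the statistical comparison between the control arm (the purchasing process with the contested design element removed) and the treatment arm (the process as consumers actually encounter it) appears below. The respondent counts are hypothetical placeholders, and the two-proportion z-test is offered as one common choice of test, not a prescribed methodology.

```python
# A minimal, illustrative comparison of test and control arms from a survey
# experiment of the kind described above. The counts below are hypothetical
# placeholders, not data from any actual matter or study.
from statsmodels.stats.proportion import proportions_ztest

# Arm 0 (control): purchasing flow with the contested design element removed.
# Arm 1 (treatment): the purchasing flow as consumers actually encounter it.
completed = [412, 468]    # respondents completing the purchase, by arm
exposed = [1000, 1000]    # respondents randomized into each arm

z_stat, p_value = proportions_ztest(count=completed, nobs=exposed)
rates = [c / n for c, n in zip(completed, exposed)]

print(f"control completion rate:   {rates[0]:.3f}")
print(f"treatment completion rate: {rates[1]:.3f}")
print(f"difference (treatment - control): {rates[1] - rates[0]:+.3f}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```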

Similarly, analysis of consumer outcomes with and without the contested practices from either experimental or historical data can establish whether there are causal links between those practices and harm or benefit to consumers or groups of consumers.
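
Where only historical (non-experimental) data are available, one hedged approach is a regression of the consumer outcome on an indicator for exposure to the contested practice, controlling for observable consumer characteristics. The sketch below uses simulated data with hypothetical variable names; a real analysis would also need to address how consumers came to be exposed (e.g., selection or targeting) before treating the estimate as causal.

```python
# A hedged sketch of analyzing historical (non-experimental) data: regress an
# outcome of concern on an indicator for exposure to the contested practice,
# controlling for observable consumer characteristics. Variable names and the
# simulated data are hypothetical illustrations only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, n),           # saw the contested design element
    "tenure_months": rng.integers(1, 60, n),    # example consumer characteristic
    "prior_purchases": rng.poisson(3, n),       # example consumer characteristic
})

# Simulated outcome: exposure shifts the probability of an unwanted renewal.
index = -1.0 + 0.4 * df["exposed"] + 0.01 * df["tenure_months"] + 0.05 * df["prior_purchases"]
df["renewed_without_intent"] = (rng.random(n) < 1 / (1 + np.exp(-index))).astype(int)

# Logistic regression of the outcome on exposure and controls.
model = smf.logit(
    "renewed_without_intent ~ exposed + tenure_months + prior_purchases", data=df
).fit(disp=False)
print(model.summary())
```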

Whether particular types of patterns are, on the whole, harmful to consumers is unlikely to be established with any formality given the number of additional factors (including consumer characteristics) that contribute to their level of success. Instead, it is likely the FTC and courts will continue to consider these matters on a case-by-case basis. In such instances, the use of a survey with an experimental design to establish whether consumers are likely to be misled, and survey or historical data analysis to determine whether their purchasing decisions would or did differ in the absence of particular patterns, could provide persuasive, empirical evidence regarding the potential effects attributable to dark patterns.

Where does this leave consumer protection stakeholders?

The amorphousness of the dark patterns concept—in law, economics, and marketing—will continue to generate uncertainty as to what companies should, or should not, be concerned about. On top of that, the extraordinary dynamism of technological evolution in digital marketing underscores the importance of having a superstructure of principles for evaluation. Each new case against each new practice that goes beyond current understanding of unfairness and deception will require benchmarks for what constitutes a violation to be established, and the contested practice will need to be evaluated in context against those benchmarks. The cost of that work will fall heavily on the first firms to be faced with dark patterns charges, whether they acquiesce or fight. However, we will not know whether those costs are worth it in terms of benefits to consumers and the market until we discover, through that process, what it is we are paying for.
