
The Antitrust Source

Antitrust Magazine Online | December 2021

Dark Patterns or Savvy Marketing—Where is the FTC’s Focus on Dark Patterns Taking Us?

Randal M. Shaheen and Amy Ralph Mudge

Summary

  • Discussion of the term dark pattern: tricks used in websites and apps that make you do things you didn’t mean to.
  • Dark patterns entered the academic and regulatory worlds in 2020 with the FTC’s case against ABCmouse, which allegedly made it difficult to cancel its subscription service.
  • California and New York, along with the FTC, have recognized dark patterns.
  • Dark patterns have gray areas, and it can be difficult to distinguish between lawful and unlawful practices.

For those who follow the consumer protection space even casually, the term “dark patterns” has increasingly begun popping up. Others hearing the term may wonder whether there is a new Star Wars prequel or sequel coming out with dark patterns emanating from the “dark side” of the force. Regardless of which group you find yourself in, the term dark patterns is not yet well understood, but it is one you should get used to hearing. At the same time, significant questions surround how such practices will be defined by the Federal Trade Commission (“FTC” or the “Commission”)—and perhaps the Consumer Financial Protection Bureau (“CFPB”)—for law enforcement purposes. In this article, we discuss the origin of the term dark patterns; its appearance in regulatory settings to date; how the term was defined in the recent FTC workshop on dark patterns; and significant questions that should be addressed before the agency undertakes widespread enforcement actions, including those raised by the inclusion of dark patterns concepts in the FTC’s recently announced Enforcement Policy Regarding Negative Option Marketing.

The Dark Patterns Origin Story

In 2010, the term “dark patterns,” like many good colloquialisms, was coined by an Englishman. That gentleman, Harry Brignull, holds a Ph.D. in Cognitive Science and is an independent user experience consultant. He currently runs the website darkpatterns.org and provides consultancy services to numerous companies. He was also a participant in the FTC’s recent dark patterns workshop, discussed below. In perhaps the clearest sign that dark patterns have gone mainstream, he also now advertises his work as an expert witness for “dark patterns and deceptive user experiences in digital products.”

Darkpatterns.org defines dark patterns broadly (and sinisterly) as “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something.” So what does this mean as a practical matter? As is often the case with slick marketing, the use of the term dark patterns is in part an effort at rebranding. Several of the practices defined by the website as dark patterns are ones that practitioners will recognize as practices long thought to be unlawful. For example, bait and switch and disguised ads are unlawful concepts long familiar to consumer protection practitioners. Other types of dark patterns are worded in such a way that they might describe a traditional consumer protection violation, but it is hard to know for sure. For example, the “trick questions” dark pattern might describe a misleading implied claim, depending upon the manner in which and extent to which consumers are actually “tricked.” “Sneak into Basket” might describe the practice of charging a consumer for a good or service without providing adequate notice and consent, depending upon whether disclosure language is provided. Finally, other dark patterns may be familiar as tools used by marketers, but their inclusion as practices that may also be unlawful is more surprising.
For example, “confirmshaming” is defined as “the act of guilting the user into opting into something. The option to decline is worded in such a way as to shame the user into compliance.” To date, at least, the FTC has not articulated shaming or embarrassment as a form of deception. There is also a dark pattern labeled “hidden costs” that relates to unexpected charges that appear at the last step of the checkout process. While there is currently litigation and debate over the inclusion of certain costs, such as resort fees, at the end of the checkout process, it would surprise almost everyone to see “taxes” and perhaps even “delivery charges” treated as a type of dark pattern “hidden cost.”

The concept of dark patterns has also begun to appear in academic and other types of literature. In April 2019, Purdue University’s UX Pedagogy and Practice Lab created a website to discuss dark patterns. The Financial Times published an article in May 2019 entitled “When Manipulation is the Digital Business Model,” while the New York Times published a similar article in May 2016 entitled “When Websites Won’t Take No for an Answer.” Finally, in a study first posted in 2019 and later published in 2021, Professors Luguri and Strahilevitz at the University of Chicago authored a paper entitled “Shining a Light on Dark Patterns.”

Regulatory Appearance

Given the increasing prominence of dark patterns in the academic and policy worlds, it was only a matter of time before the concept crossed over into the regulatory world. The term “dark patterns” appears to have first transitioned from the ivory towers of academia to the penal towers of the regulatory world in the FTC’s 2020 $10 million settlement with ABCmouse, a children’s online subscription-based learning site operated by Age of Learning, Inc. According to the FTC’s complaint, the company allegedly failed to adequately disclose that its twelve-month memberships and 30-day free trials would automatically renew unless cancelled. Although ABCmouse also promised “easy cancellation,” cancellation was allegedly anything but easy. Consumers who tried to cancel through email, phone, or customer support were instead directed to the company’s online cancellation mechanism. The FTC also alleged that the cancellation link was difficult to find and not clearly marked. Further, the FTC alleged that consumers wishing to cancel had to first navigate six to nine screens, many with links that would take consumers out of the cancellation path and many of which also offered consumers inadequate information regarding how to continue the cancellation process.

Then-FTC Commissioner Chopra issued a separate statement regarding the case and the alleged use of dark patterns. Similar to darkpatterns.org, he described dark patterns as “design features used to deceive, steer, or manipulate users into behavior that is profitable for an online service, but often harmful to users or contrary to their intent.” He went on to describe them as digital tricks and traps that “involve an online sleight of hand using visual misdirection, confusing language, hidden alternatives or fake urgency to steer people toward or away from certain choices.” With regard to the FTC’s settlement with ABCmouse, he highlighted the company’s practices as falling within the “roach motel” dark pattern where it is easy to get in but almost impossible to escape. At the same time, Commissioner Chopra took to his official Twitter account and urged designers to come forward and file confidential complaints with the FTC concerning the use of dark patterns.

Not to be outdone, in March of this year, California hopped onto the dark patterns bandwagon by adopting a new regulation under its infamous California Consumer Privacy Act (“CCPA”) that prohibits the use of certain dark patterns. The newly adopted regulation prohibits businesses from using an opt-out method “that is designed with the purpose or has the substantial effect of subverting or impairing a consumer’s choice to opt-out.” The regulation then provides the following illustrative examples:

  1. The business’s process for submitting a request to opt-out shall not require more steps than that business’s process for a consumer to opt-in to the sale of personal information after having previously opted out. The number of steps for submitting a request to opt-out is measured from when the consumer clicks on the “Do Not Sell My Personal Information” link to completion of the request. The number of steps for submitting a request to opt-in to the sale of personal information is measured from the first indication by the consumer to the business of their interest to opt-in to completion of the request.
  2. A business shall not use confusing language, such as double-negatives (e.g., “Don’t Not Sell My Personal Information”), when providing consumers the choice to opt-out.
  3. Except as permitted by these regulations, a business shall not require consumers to click through or listen to reasons why they should not submit a request to opt-out before confirming their request.
  4. The business’s process for submitting a request to opt-out shall not require the consumer to provide personal information that is not necessary to implement the request.
  5. Upon clicking the “Do Not Sell My Personal Information” link, the business shall not require the consumer to search or scroll through the text of a privacy policy or similar document or webpage to locate the mechanism for submitting a request to opt-out.
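The step-counting rule in the first example lends itself to a mechanical check: opting out may not take more steps than opting back in. A minimal sketch, in which the flow definitions and step names are hypothetical illustrations rather than anything drawn from the regulation itself:

```python
# Hypothetical opt-out and opt-in flows, each expressed as a list of
# discrete consumer steps, counted from the first indication of intent
# to completion of the request. The step names are illustrative only.
opt_out_steps = ["click ‘Do Not Sell My Personal Information’ link",
                 "confirm request"]
opt_in_steps = ["click opt-in link",
                "confirm request"]

def step_symmetry_ok(opt_out, opt_in):
    """True if the opt-out flow requires no more steps than the opt-in flow,
    mirroring the regulation's step-symmetry requirement."""
    return len(opt_out) <= len(opt_in)

print(step_symmetry_ok(opt_out_steps, opt_in_steps))  # two steps each: passes
```

A flow that, say, inserted a retention pitch screen into the opt-out path but not the opt-in path would fail this check.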

Although the regulation does not specifically mention dark patterns, the California Attorney General noted in a press release that the regulation bans “so-called ‘dark patterns’ that delay or obscure the process for opting out . . . .”

Additionally, Senators Warner and Fischer introduced legislation in 2019 that would have banned certain dark patterns, in particular, practices that manipulate consumers into providing personal information and online product designs that lead to compulsive usage by children. Senator Warner spoke at the recent FTC Dark Patterns workshop and specifically referred to the proposed legislation, which never passed, as an effort to combat dark patterns.

FTC Workshop

It is not surprising, therefore, that when Democrats took control of the FTC following the election of President Biden, all of these roads led to a half-day workshop, held on April 29 of this year, entitled “Bringing Dark Patterns to Light.” Then-Acting Chairwoman Slaughter opened the workshop by providing a broad definition of dark patterns, similar to that previously articulated by Mr. Brignull and Commissioner Chopra: “user interface designs that manipulate consumers into taking unintended actions that may not be in their interest.” The workshop was then divided into five panels. The first panel discussed how to define dark patterns and why they are employed and featured Mr. Brignull, as well as numerous academics. The second panel explored how dark patterns affect consumers and likewise featured numerous academics. The next two panels looked at how dark patterns impact communities of color and target kids and teens, while the final session looked at potential strategies for dealing with dark patterns.

With respect to next steps, workshop participants urged numerous courses of action, including further study by the FTC, the issuance of guidance, bringing enforcement actions, and the passage of additional legislation. While there has, as yet, been no formal follow-up from the workshop, it is clear that dark patterns are still very much on the FTC’s mind. For example, in her keynote address at the October 2021 National Advertising Division Annual Conference, Commissioner Slaughter spoke out against dark patterns with respect to opting out of data collection. Additionally, the FTC’s recent Enforcement Policy Statement was touted in a press release as fighting “illegal dark patterns that trick or trap consumers into subscriptions.” As discussed in more detail below, the Enforcement Policy Statement takes aim at the fairly common practices of using hyperlinks to make disclosures and efforts to “save the sale” or collect information from consumers before permitting them to cancel subscriptions. Practitioners in this area are also aware of numerous FTC investigations focused on practices that could be classified as dark patterns. Before the FTC proceeds too far down this path, we believe that the agency should undertake further study and clarify which uses of dark patterns in particular may be problematic.

Questions Surrounding Targeting Dark Patterns for Enforcement Actions

It is one thing to criticize the use of dark patterns as a policy matter and quite another to suggest that they are all unlawful. To date, however, the FTC has broadly condemned dark patterns without indicating whether it believes all of the types of dark patterns identified by darkpatterns.org and other academics violate Section 5. This difference between practices criticized by policymakers and those that are actually unlawful can be readily seen in the green marketing space. Numerous marketing practices are condemned as a policy matter by environmental activists as “greenwashing.” Some of them are in fact violations of Section 5 of the FTC Act and the FTC’s Green Guides. For example, one type of greenwashing is “the sin of fibbing,” which applies to environmental claims that are false, such as falsely claiming Energy Star certification. There is little doubt that making false environmental claims is a violation of Section 5. Other types of greenwashing may not actually be unlawful. For example, activists might argue that it is greenwashing to market an SUV as having the best fuel economy in its class because no matter how good an SUV’s fuel economy might be, it still guzzles gas compared to less thirsty compact and subcompact vehicles. This is commonly referred to as “the sin of the lesser of two evils.” Yet, as long as the comparative claims are true, there is nothing unlawful about advertising an SUV’s fuel economy as best in class.

When thinking about policy concerns versus legal requirements, it is important to bear in mind that the FTC, unlike some other federal agencies, typically cannot implement its preferred policy outcomes, a point it has made frequently with regard to environmental claims. Thus, as much as the FTC may dislike certain practices, its authority under Section 5 is limited to attacking practices that are “deceptive” or “unfair.” Each of these terms in turn has specific requirements. A deceptive practice, for example, must be one that misleads a meaningful number of “reasonable” consumers, while an “unfair” practice must be one that creates substantial injury that is not outweighed by offsetting consumer or competitive benefits and that consumers could not reasonably have avoided.

To date, as can be seen above, the FTC has simply repeated the broad definition of dark patterns articulated by policy advocates—user interfaces that harm consumers by manipulating them to take actions they did not intend. The difficulty with such a broad definition is that while there are certainly practices falling within the broad umbrella of dark patterns that clearly violate Section 5, there are likely others that traditionally have not been viewed as unlawful and perhaps many that fall somewhere in between. Some dark patterns may be nothing more than effective marketing while others may reflect practices that have only been condemned by the FTC in more extreme circumstances. Condemning them in their entirety, however, would likely require the FTC to rethink its existing guidance and interpretations with respect to these practices.

There was, unfortunately, very little discussion during the workshop as to how one differentiates between lawful and unlawful dark patterns, though one participant during the final panel did acknowledge that certain practices labeled as dark patterns may enjoy robust First Amendment protection and therefore may be difficult to attack legally.

As that panelist put it: “Some of the dark pattern strategies, though, do fall into grayer areas. And so there’s just not a lot of case law on whether regulating obstruction would run aground of commercial free speech protections under the Central Hudson line of cases. Or is nagging protected by the First Amendment as a sales strategy? We just don’t have a lot of precedent there. We know that under certain circumstances, nagging isn’t protected by the First Amendment. Someone who asks out a coworker on a date once, probably protected, unless there’s a power disparity there. Someone who asks out a co-worker repeatedly despite refusals clearly isn’t exercising their free speech rights. They’re engaged in sexual harassment, if the requests are pervasive. So we do have some guideposts that we can look to from other areas of law. But I do think the First Amendment issues surrounding the regulation of obstruction, of nagging, of confirm shaming, and of certain kinds of subtle visual interference, those are questions that the FTC should spend some time thinking about and consulting with First Amendment experts as they try to regulate in this area.”

If the FTC intends to move forward with enforcement actions targeting dark patterns (as it almost certainly does), it will be important for the agency to define specifically which types of dark patterns it views as unlawful and which may simply be undesirable, as well as to articulate, for particular practices that may sometimes be unlawful, how one draws the line between what is permissible and what goes too far. As the FTC recently noted in its ill-fated argument to the Supreme Court in the AMG case, companies should understand what the FTC Act requires of them: “Courts are, of course, bound by principles of constitutional due process and notice. And if the court concludes that the [company] couldn’t possibly understand what was required of him, it will find that a remedy is not available.”

Providing clear guidance to advertisers and marketers, most of whom want to conduct themselves in a lawful manner, is particularly important with respect to dark patterns because, as the workshop demonstrated, many practices condemned by panelists as dark patterns are commonly used marketing techniques heretofore understood to be lawful. If such practices are now to be condemned as unlawful, in whole or in part, it is important to provide a clear understanding of the scope of and rationale for such condemnation. It is not only the FTC that should provide such guidance. Former FTC Commissioner Chopra is the newly confirmed head of the CFPB. Similar to the FTC, the CFPB has jurisdiction over deceptive and unfair, as well as abusive, marketing practices. Given that Director Chopra in his prior role as an FTC Commissioner spoke out numerous times against the use of dark patterns, it seems virtually certain that the CFPB will regulate to some degree the use of dark patterns in the financial services industry. One would hope that the FTC and CFPB will work together to regulate dark patterns in a uniform and consistent manner.

Below we address three key areas where guidance from the FTC, CFPB, or both agencies would be particularly welcome.

When does the use of advertising to “manipulate” consumers become an unlawful dark pattern?

The various definitions of “dark patterns” all share the common idea that consumers are manipulated into making choices that they might not otherwise have made, and that this choice results in some form of harm to the consumer. At first blush, manipulating consumers into purchasing a good or service that they hadn’t intended to purchase may sound terrible. But couldn’t this definition include much of traditional advertising? Doesn’t advertising often try to persuade consumers to purchase something they hadn’t intended to? Sometimes advertisers might accomplish that merely by listing a product’s or service’s performance characteristics or other objective criteria, but isn’t this also often accomplished through emotional appeals or other psychological forms of persuasion such as the use of certain colors or images, placement of text, or the location of goods in a brick and mortar store? After all, why does advertising so often include adorable animals or attractive people or patriotic themes? Sometimes advertising makes us feel good, and then we feel good about the goods or services the advertiser sells. And what about celebrity influencers or endorsers? Is it objectively meaningful that one of the Kardashians uses a product or is that endorsement intended to appeal emotionally or psychologically to the desire of people to emulate someone famous? Certainly, these techniques are intended in part to persuade a consumer to purchase something she otherwise did not intend to purchase and that unintended purchase could be viewed as a form of “harm” since a potentially unnecessary financial transaction has occurred. But has the consumer been manipulated in an unlawful sense or just artfully persuaded?

Of course, the increased extent to which data can be gathered about individual consumers and the rise of digital advertising have likely accelerated the ability of advertisers to create personalized digital ads featuring images, colors, or content designed to appeal emotionally or psychologically to their intended targets. At what point, if at all, does the use of such techniques become unlawful? In what was viewed at the time as science fiction, the 2002 movie “Minority Report” featured Tom Cruise in a plot involving “precogs” who could so accurately predict the future behavior of people that individuals were arrested before they could commit a crime. Suppose someday soon advertisers could acquire so much data about individuals that sophisticated algorithms could accurately predict exactly what advertising techniques would persuade a consumer to purchase the advertiser’s good or service. Many of us might agree that the use of data and algorithms to that extent goes too far in overriding consumer autonomy and free will. Yet if showing the occasional cute puppy is fine but presenting a consumer with a digital advertisement so full of her personal emotional and psychological triggers that she all but loses her autonomy goes too far, where does one draw the line between these two extremes? The FTC’s focus on dark patterns raises important questions regarding consumer manipulation but also creates an incredibly difficult line-drawing exercise.

In a related vein, when, if ever, is it relevant that a particular technique will increase the rate of consumer purchase or selection? Suppose market research shows that consumers are more likely to purchase an article of clothing described online as “forest green” rather than just “green.” Is the use of forest green then a dark pattern? What about the grocery store that puts the discounted diapers at the back of the store so that sleep-deprived and financially strapped parents might pick up additional items on their way to and from the back of the store? Is this a non-digital dark pattern? What about the order in which options are presented? Consumer surveys typically rotate responses to questions because of a belief that order can bias response selection. Checkout pages for goods or services, however, that may present consumers with a variety of potential add-ons do not rotate whether the first option is to accept or reject the additional purchase. Is it an unlawful dark pattern if the advertiser presents the desired selection in an order that makes it more likely to be selected?

One of the workshop participants, Professor Strahilevitz, presented published research on the use of dark patterns and how they might impact consumer choice. In the control group respondents were told (based upon their responses to various questions) that they were identified as very concerned about their privacy and were given the option to accept or reject a privacy protection program. There were also two test groups, described as mild and aggressive dark patterns. In the mild group the “accept” choice was now in red and labeled as the “recommended” option. In addition, respondents who did not select “accept” were then asked to confirm that they “do not want to protect my data or credit history.” In the aggressive dark patterns group, respondents went through the same initial sequence but if they continued to reject the privacy protection program they were shown up to three screens with information about the prevalence of identity theft and the problems it causes and each screen could only be bypassed after ten seconds had gone by. The results are probably not surprising. In the control group, only 11% of respondents accepted the privacy protection program; that percentage more than doubled to 25% in the mild dark patterns group, and rose to at least 37% in the aggressive dark patterns group.
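The size of the effect is easy to tabulate from the acceptance rates reported above; this sketch simply restates the study’s published figures and computes each condition’s lift over the control:

```python
# Acceptance rates for the privacy protection program, as reported in
# the Luguri & Strahilevitz study and summarized in the text above.
rates = {"control": 0.11,
         "mild dark patterns": 0.25,
         "aggressive dark patterns": 0.37}

baseline = rates["control"]
for condition, rate in rates.items():
    # Lift = acceptance rate relative to the no-dark-pattern control group.
    print(f"{condition}: {rate:.0%} accepted, {rate / baseline:.1f}x the control rate")
```

Even the mild condition more than doubles the control group’s acceptance rate, which is precisely why the line-drawing question discussed below matters.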

This research demonstrates that these techniques are effective, but what, if anything, should these conclusions tell us about whether the use of these and similar practices is unlawful? As described above, advertisers have from the very beginning utilized techniques designed to impact consumer choice and cause them to buy more of the advertised product. Indeed, increased sales is the very hallmark of a successful advertising campaign. Which brings us back to the standards of “deception” and “unfairness.” Is it enough to argue that these techniques are not unlawful because they do not mislead? How much burden should consumers continue to shoulder to read advertising carefully? Darkpatterns.org faults advertisers for taking advantage of consumers who “don’t read every word on every page” but who rather “skim read and make assumptions.” Does this mean that every advertiser who takes advantage of that fact has violated Section 5 or do consumers “skim” and “make assumptions” at their own peril? Darkpatterns.org identifies the following as a form of “trick question” because the first choice is “opt out” and the second is “opt in.”

[Image: the “Trick Question” example from darkpatterns.org]

While the advertiser may have determined that the wording and order of these two choices may prompt more consumers to request additional offers, is there anything actually misleading about the options that are presented? Are not the two choices presented in a sufficiently clear manner? Even if one concedes that fact and relies instead on the “unfairness” prong of Section 5, there are still significant questions to address, including whether the third prong of the unfairness test can be satisfied (that is, whether a consumer could have readily avoided any harm, for example by reading the choices with reasonable care) and how one is to distinguish an unfair ad from a fair one, assuming one rejects the idea that simply showing an increase in consumer selection or purchase is by itself sufficient.

When does “obstructing” the ability of a consumer to make a “choice” become a dark pattern?

Dark pattern opponents also point to the practices of making cancellation more difficult or making it harder to exercise one choice over another as dark patterns. Until recently, however, the FTC had never articulated a principle that cancellation must be as simple as possible or that the selection of competing options must be equally simple. For example, the Restore Online Shoppers’ Confidence Act (“ROSCA”) requires that the cancellation mechanism for a subscription program be “simple.” “Simple,” however, is presumably not synonymous with “simplest.” Means of cancellation can differ in simplicity in two regards. First, the avenues by which cancellations can be effectuated can vary in simplicity; for the most part, online cancellation is generally simpler (and quicker) than cancelling over the phone. Second, for any given means of cancellation, the cancellation path can differ in simplicity; for example, does the merchant try to persuade you to change your mind? Some state laws have addressed these questions directly. For example, the New York law regulating automatic renewals requires that a company provide an online method of cancellation for all consumers who signed up for the subscription program online. In an analogous situation, the CCPA regulations (discussed above) do not permit companies to require consumers to view reasons why they should not opt out of data sharing before exercising their option to opt out. Similarly, the process for opting out may not require more steps than the process for opting back in to data sharing.

Unlike New York and California, the FTC is constrained by the language used by Congress when it passed ROSCA, which only mandates a “simple” method of cancellation. However, the FTC’s recent Enforcement Policy Statement suggests that the FTC is prepared to interpret the word “simple” aggressively, particularly when it comes to dark patterns that the Commission believes may make it more time consuming and cumbersome for consumers to cancel subscription services. With respect to how one cancels, the Enforcement Policy Statement states that whatever means is used to sign up for the subscription must also be available as a means of cancellation. While there may be some logic and symmetry to this approach (and a certain clarity), is there really any basis to argue that a means of cancellation different from the method used to sign up is inherently not “simple”? Second, the Enforcement Policy Statement tries to provide more guidance around what, if any, additional questions or prompts are permissible when a consumer attempts to cancel a subscription service. The ABCmouse case, discussed above, with its six to nine additional screens and hard-to-find, hard-to-understand prompts, is likely a poster child for making cancellation difficult, but it provided no real guidance on what companies can do short of the practices condemned in that case.
The Enforcement Policy Statement states that “negative option sellers should not subject consumers to new offers or similar attempts to save the negative option arrangement that impose unreasonable delays on consumers’ cancellation efforts.” In a footnote the Enforcement Policy Statement notes that “[w]hile a request to consider an offer or discount would not amount to an unreasonable delay, multiple requests for a consumer to listen to additional offers, lengthy pitches, or ignoring a consumer’s request to decline further offers could amount to an unreasonable delay.” Thus, a one-time discount offer is presumably permissible but not multiple such offers? Is offering a discount and then yet another discount no longer a “simple” means of cancellation? Is this an unlawful dark pattern or does it actually benefit consumers by permitting them to retain a good or service that they like but decided they could not afford, even if the first discount was not sufficiently attractive? What about asking consumers why they are cancelling or under what circumstances they might consider signing up for the good or service again? Are these unlawful dark practices? The former might help the company to improve its product or service or help them decide what “save” offer to present to the consumer, while the latter may facilitate the consumer taking advantage of the good or service again at a more opportune time. Yet the Enforcement Policy Statement would not seem to permit such questions so as to save the consumer a few seconds of her time. Of course, consumers could be presented with these choices after they cancel, but the reality is that most consumers are likely to quickly close the page once cancellation is complete. 
Finally, the Enforcement Policy Statement states that it is an unlawful dark pattern and a violation of ROSCA if, for example, sign-ups and cancellations are done via the telephone but the calls to cancel are lengthier or otherwise more burdensome than the telephone call the consumer used to consent to the negative option feature. Presumably this includes average time on hold as well. Does the fact that hold times may be thirty seconds longer to cancel make the cancellation process no longer “simple”? In summary, if the sign-up and cancellation processes do not treat consumers neutrally, then in the current FTC’s eyes the cancellation process is a dark pattern, not simple, and a violation of ROSCA. Time will tell whether courts will uphold this perhaps strained definition of “simple.”

The emphasis on practices that make it harder for a consumer to opt against the purchase of a good or service also raises questions regarding the use of disclaimers and hyperlinks. The FTC’s “.com Disclosures” guide permits both under some circumstances. However, disclosures must be “clear and conspicuous,” and hyperlinks must be clearly labeled. In practice, disclosures and hyperlinks often contain limitations or qualifications that make offers less attractive to consumers; for example, limitations on a money-back guarantee or on a product’s advertised performance. As a result, in some cases a consumer’s review of this information might make her or him less likely to accept the offer. It is perhaps for this reason that the Enforcement Policy Statement appears to retract the .com Disclosures guide’s acceptance of hyperlinks. It states: “a disclosure is not clear and conspicuous if a consumer needs to take any action, such as clicking on a hyperlink or hovering over an icon, to see it.”

In yet another sign that, under the guise of regulating dark patterns, the FTC is moving toward a standard that companies must act neutrally toward consumers and that language and disclosures must be not just “clear” but as “clear as possible,” the Enforcement Policy Statement also requires that for an advertisement with both audio and visual components, any disclaimers must be made in both forms even if the claim itself appears in only one. If a claim is made visually and the disclaimer also appears visually, is that not “clear”? Is there truly a legal basis under Section 5 for these requirements that consumers must be presented choices in as neutral and clear a manner as possible?

While these are two fairly broad examples, there are also examples where advertisers may unnecessarily deprive consumers of some of their time. Will the FTC’s concerns regarding consumer time extend beyond negative options? For example, when booking travel, consumers are often required to indicate whether they wish to purchase optional travel insurance. Making that choice typically requires only a few seconds (unless the consumer forgets to do it and is bounced back), but it is not necessary. The website could just as easily permit consumers to add on the insurance without also requiring them to reject it. If the FTC is going to focus increasingly on practices that cost consumers time but not money, then it ought to provide guidance on how it will determine when, and if, it is unlawful to unnecessarily impose upon a consumer’s time, lest well-intentioned companies be left in the dark.

How does the FTC intend to determine whether vulnerable groups, but not the population in general, are being misled by dark patterns?

As noted earlier, two of the FTC workshop panels discussed the disparate impact of dark patterns on communities of color and children and teens. With respect to communities of color, panelists noted, for example, the difficulty individuals for whom English is a second language may have in navigating online sites and exercising choice. Another panelist pointed to research showing that certain groups such as the less educated tend to have lower levels of privacy literacy, making them less equipped to protect their privacy online. Finally, a panelist noted the difficulty some older Americans may have understanding concepts like embedded links or seeing click boxes or icons with lower levels of contrast.

While the FTC has long considered whether an advertisement is deceptive from the standpoint of the reasonable consumer, it has also recognized that an advertisement may be deceptive even if it misleads only a subset of the general population. The FTC’s Policy Statement on Deception, for example, notes that “[w]hen representations or sales practices are targeted to a specific audience, such as children, the elderly, or the terminally ill, the Commission determines the effect of the practice on a reasonable member of that group.” Note, however, that in order for the Commission to apply this more limited analysis, the advertisement in question must be “targeted to a specific audience.” In the case of children, that limitation may not matter, as digital ads directed primarily to children seem unlikely to have broad appeal outside of that narrower group. However, while ads for certain products can be targeted to communities of color or teens, there are also numerous ads targeted to broad demographic groups that will also be viewed by significant numbers of both groups. For example, an advertisement for a car would not ordinarily be thought of as “targeting” communities of color, but there may be a significant number of people within that community who purchase—or consider purchasing—that particular car.

Of course, if there is substantial and persuasive evidence that communities of color and children and teens are more likely to be misled by the use of certain types of dark patterns, that is cause for concern—even if the advertisements in question are not specifically targeting those groups (or even intending to mislead them). However, the practical difficulties associated with compliance and enforcement are likely to be substantial. First, how will the agency provide meaningful guidance on what practices industry should avoid and under what circumstances, particularly for non-targeted ads? Second, how will advertisers reasonably determine whether a general audience advertisement that is not misleading to the population as a whole might nevertheless be misleading to one or more communities of color or to children and teens? Determining the implicit takeaway from advertising is not easy even under ideal circumstances, but at least with a general audience advertisement companies only have to wrestle with the takeaway of the population as a whole. How can advertisers readily ascertain the narrower question of whether a particular community of color, age group, gender, or other subgroup might take away an implicit misleading message? Focus groups and qualitative interviews might provide some insights, but their methodologies are not typically considered sufficiently robust to provide definitive evidence of consumer perception. Well-conducted consumer surveys are generally considered reliable means to determine consumer takeaway, but surveys are time consuming, and the cost can easily reach well into the five figures. Further, multiple subgroups would likely have to be surveyed, potentially raising the costs well into the six figures. All of this is time and money that most consumer products companies likely cannot afford every time they want to launch a digital ad campaign.

Conclusion

Dark pattern critics and the FTC should be applauded for the work they have done to highlight how digital techniques can be used to mislead and confuse consumers. However, as the conversation around dark patterns moves out of academia and into the enforcement realm, there is still much work to be done to define more clearly, as articulated by Professor Strahilevitz at the FTC’s workshop, when such techniques are unlawful and when they are simply non-deceptive marketing techniques fully protected by the First Amendment. Those who have researched dark patterns, regulators, and industry should all be brought into the conversation so that clear and well-thought-out guidance can be given, particularly with respect to dark patterns that have not traditionally been thought of as unlawful, either in whole or in part. Well-intentioned advertisers should have the opportunity to understand in advance what the rules of the road are when it comes to digital techniques of persuasion. In combating what it believes are harms resulting from dark patterns, the FTC may also find itself constrained by the fact that Section 5 restricts the agency to taking action against deceptive or unfair practices. Ultimately, legislation may be needed if the FTC and public policy advocates wish to proceed more broadly against at least certain dark patterns.
