AARON BURSTEIN: Let’s talk a little more about rulemakings. BCP has reinvigorated its rulemaking authority under Section 18 of the FTC Act, added by the Magnuson-Moss Warranty—Federal Trade Commission Improvement Act, and initiated several Magnuson-Moss and Administrative Procedure Act [APA] rulemakings. What insights have you gleaned so far from these recent consumer protection rulemaking efforts?
SAMUEL LEVINE: I should start by saying that I have heard debates: “Should the FTC be doing rulemaking or should the FTC not be doing rulemaking, especially on the consumer protection side?” I don’t think that debate makes sense. I don’t really understand it.
Rulemaking is one of a number of tools we have in our toolkit. There is no question that we have authority to issue consumer protection rules—we have had it since the early 1970s—but we also obviously have authority to bring federal cases, bring administrative cases, conduct market studies, et cetera.
I welcome debate on individual rules: Does it make sense to move forward with an “impersonator” rule, to ban impersonator fraud? Does it make sense to move forward with a fake review rule, to ban fake reviews? Let’s have those debates.
I think one of the great things about the rulemaking process is that it is participatory. We hear from a broad swath of the public and from the business community and that ends up making our work better and informing our decisions. But let’s have that debate about individual rules rather than the concept of whether the agency can use its tools.
I think two primary things drove the decision to launch more rulemakings over the last couple of years.
The biggest one, as you said and we have talked about, is AMG. We finalized the impersonator rule a couple of months ago and we have already brought a case. We are already using that rule to get money back to people who got cheated by an impersonation scam. The work we are doing to enforce these rules and get money back to people is so critical.
The second big reason we are doing it is that we are seeing persistent problems in the marketplace that enforcement is not solving. We have the best lawyers in the country. Mike Pertschuk, the former Chairman, called us the “greatest public interest law firm in the country.” I agree with that, and I am very proud of our track record in court. But when you look at problems like subscriptions that people cannot cancel, fees that people cannot avoid because they don’t see them upfront, and the proliferation we are seeing of deceptive reviews, these are problems we have been warning about for decades, we have been bringing lawsuits for decades, we have been issuing guidance for decades, and the problems are not going away and by many measures are getting worse.
So the question to me is: Congress gave us the tool to address the problem of unfair or deceptive practices in the marketplace through rulemaking; do we just leave that tool off the table, or do we inventory all of our tools and consider what rulemaking can allow in terms of remedies for consumers and what rulemaking can do in terms of making markets more fair and honest?
There is no question that we have done more of it. It has taken resources, but if you think about rules like the Telemarketing Sales Rule and now the Impersonator Rule and you think about how many cases they have allowed us to bring and how many tens of millions of dollars these rules have allowed us to return to consumers, I think it is a real investment in the future of our agency’s ability to do our job and get money back to the public.
AARON BURSTEIN: Just to ask about one particular rulemaking that is pending, in August of 2022 the Bureau launched a broad commercial surveillance and data security rulemaking, and that generated a lot of attention. What is the status of that rulemaking, and should we maybe expect to see a proposed rule in the next few months?
SAMUEL LEVINE: I am not going to make any predictions about where that is going to go, that is obviously up to the Commission, but I think it is worth pausing and looking at what we have done in the months since we launched that rulemaking in our privacy work. We have brought a series of groundbreaking cases.
One of the reasons we launched that privacy rulemaking—and we said so at the time—is that there was consensus at the Commission—certainly our Chair felt strongly about it, I feel strongly, and Commissioner Slaughter has spoken about this—that the notice-and-consent model of privacy, where we are counting on consumers to read disclosures and pretending they have a choice and can somehow opt out of using digital services, has been a fiction. That regime has been a failure, and we opened that rulemaking process to explore whether we should issue rules that provide more substantive protection for people’s data.
If you look at the cases we have brought over the time since we launched that rulemaking, we are already moving in that direction in really significant ways. We have now brought I think five cases against companies like GoodRx, BetterHelp, and Premom banning companies from sharing sensitive health data for advertising purposes. We have brought two cases banning data brokers from sharing sensitive geolocation data; we are in litigation with a third. We brought a case banning a company from sharing browsing data. We have brought the largest children’s privacy action in our history against Epic Games and other significant children’s privacy cases against Amazon Alexa and Microsoft Xbox. We have also moved forward on other rules including proposing a much-strengthened children’s privacy rule in December. And we finalized the updated [Gramm-Leach-Bliley Act] Safeguards Rule and our updated Health Breach Notification Rule.
So we are moving on a number of fronts to try to strengthen protections for people’s data and strengthen protections for people’s privacy. Rulemaking is one of them. I am not going to predict where or when that will go, or if, but we are certainly proud of the track record to date, and that is going to inform all of our work going forward.
JANIS KESTENBAUM: There is no question that the Bureau has been incredibly busy and productive on the rulemaking and enforcement fronts. What, if anything, do you see as the potential impact of the Supreme Court’s decision in the Loper Bright case and the end of Chevron deference on the Bureau’s work in either the rulemaking or enforcement arenas?
SAMUEL LEVINE: Obviously, we are paying close attention to what the Supreme Court is saying and doing, especially when it comes to administrative agencies. We saw a lot of action on that over the last few months.
At the end of the day—and we have said this publicly—we have not generally relied on Chevron deference to defend our rulemakings or enforcement. Obviously, Chevron deference no longer exists, but Chevron comes into play and now Loper Bright comes into play when statutes are ambiguous.
But I don’t think there is any debate—I certainly have never heard any lawyer make the argument, and that says something because I have heard a lot of wild arguments in this job—that the FTC doesn’t have explicit statutory authority to issue consumer protection rules upon a finding that practices are unfair or deceptive and that they are prevalent in the marketplace.
So I don’t think there is any ambiguity there, and we are fully prepared to defend our rulemakings against any standard of review because certainly all of our rules that have gotten to the proposal stage are deeply grounded in the records we have built, cases we have brought, information we have gathered, and expertise that we have. So we are confident in our position on these and confident in the positive impact they will have on people’s lives.
JANIS KESTENBAUM: Moving on to an area where the FTC has been active, so-called “junk fees,” please explain what junk fees are, what distinguishes a junk fee from a non-junk fee in your view, and what BCP has been doing in this area?
SAMUEL LEVINE: “Junk fee” is just a term to describe a phenomenon that most people understand in a very visceral way. The way we have defined it in our rulemaking is that it is a fee that you don’t know you are going to get because it is hidden—maybe it is at the end of the transaction and is not part of the advertised price—or companies are lying about what a fee is for. Let’s say a company says a fee is a “government” fee for a tax, and it’s not, it is just to line the company’s pockets. Those are junk fees.
This term I recognize is new but this concept and the notion that people are getting ripped off by these fees is not new. I think it has gotten a lot worse, but it is not new.
The FTC has been holding workshops, issuing studies, and bringing cases on, for example, “drip pricing” for many, many years, which is the same concept. We have been bringing cases on hidden fees for many, many years, for decades. We have been bringing cases when companies misrepresent their fees.
In the fall of 2023 we proposed a ban on these fees, saying prices need to be advertised upfront and you cannot misrepresent what they are for. We cited a long history of FTC enforcement in this space. So we think, even though the term is new, the concept that companies need to be upfront with their prices is not new.
It has been striking. We have gotten thousands of comments in response to our proposal. We take all of them seriously, including comments from the business community and comments from consumer advocates, but sometimes the most enjoyable comments to read for me, the most compelling comments, come from ordinary people who are getting ripped off every day by these fees.
As I said earlier, we have tried enforcement and we have tried warnings. We are going to continue doing enforcement, we are going to continue issuing warnings—for those reading this interview, don’t hide fees, don’t misrepresent fees—but at the end of the day we are now considering whether a bigger step needs to be taken to root these unlawful practices out of the economy.
It should be noted that this is an FTC effort, an effort that we decided to launch independently, but there is a government-wide effort as well to track down these fees. You see this with the Department of Transportation on the airline junk fees. You see it with the Consumer Financial Protection Bureau, which has been a real leader on this, on credit card late fees and overdraft fees. And you are seeing work by the FCC and other agencies as well. I think everyone across the government, state and federal, sees this as a problem, ordinary people see this as a problem, and all options are on the table for what the FTC is going to do to root out unlawful practices in this space.
AARON BURSTEIN: Another large part of BCP’s work of course involves privacy—and you addressed some of this already—but under your leadership and Chair Khan’s leadership there has been a shift away from notice and choice—you called it a “fiction”—and with that a shift away from standalone deception theories under Section 5. Can you say a little bit more about your diagnosis of notice and choice’s problems and the broader reasons for this move?
SAMUEL LEVINE: I have done some research on this. You go back a couple of decades, and what you heard people saying, including leaders at the FTC, was that a company should post its privacy policies and consumers should take responsibility to protect themselves by reading them. Maybe that made sense two decades ago—I am not so sure—but that is ridiculous today. It is completely ridiculous. I cannot think of another market where we are counting on consumers alone to protect themselves—it is literally impossible to hold down a full-time job and read the privacy policies for every digital service one uses.
By the way, it would not be a very good use of your time because companies have taken the position—and we are seeing this now with training artificial intelligence [AI] models—that they can change these policies whenever they want. So you read them today, and some companies will say, “Well, we might make changes and you should keep reading our privacy policies regularly if you want to know about them.” This is completely ridiculous.
What we have said is that our unfair practices authority not only prohibits deception—deception is encompassed by unfairness—but also prohibits practices that harm consumers, that are not reasonably avoidable, and whose harms are not outweighed by countervailing benefits to consumers or to competition, and that can include certain data-sharing practices. We have alleged that excessive retention is unfair. We have alleged that sharing sensitive health data or geolocation data for advertising purposes or for other purposes is unfair.
This is not about moving away from deception. We are still bringing a lot of deception cases, but I think the shift has been that we are not just looking to see what the company is representing and whether it is true. We are really trying to scrutinize the underlying practices with respect to people’s data to see if they are harmful. In instances that we find they are harmful and that we believe they meet the three-part unfairness test under the FTC Act, the remedies we are seeking are really significant.
Look at our health privacy cases. We are not telling companies, “Oh, you should just disclose better that you are going to be sharing people’s medication information or mental health treatment information for advertising purposes.” We are telling them they have to stop. We got orders against BetterHelp, GoodRx, Premom, Cerebral, InMarket, X-Mode, and other firms where we are putting bright-line prohibitions on the sharing of sensitive geolocation data, sensitive health data, sensitive browsing data in the case of Avast, for certain purposes.
Again, the notion is that we are not just trying to bury people in more disclosures. We want a recognition by companies that reckless data-sharing practices can be harmful to people—to their privacy, their dignity, their wellbeing—and that they should stop. So it is not moving away from deception; it is embracing the “U” in addition to the “D” in our unfair or deceptive acts or practices [UDAP] authority.
I will just add briefly to that. Folks who have been around the ABA for a while know that unfairness has been historically, certainly in the 1980s, a controversial topic at the FTC, but look at how much bipartisan support we have gotten for our unfairness work. Our Kochava case, which was unfairness only, was supported in a bipartisan way. We brought a case this summer against NGL, an anonymous messaging app, that was largely grounded in unfairness; that was bipartisan. Health privacy cases like GoodRx are bipartisan, and Commissioner Wilson issued a very supportive statement. There is a recognition across the political spectrum that notice and choice is failing and that there is more that government can do to protect people’s privacy, so I really see these changes we are making as durable—we can talk about this later—regardless of what happens in November.
AARON BURSTEIN: Let me ask a little bit about what is happening in the states with privacy. As you know, nearly twenty states have enacted comprehensive consumer privacy laws over the past few years. How has that development affected the FTC’s approach to privacy enforcement in terms of the agency’s enforcement focus as well as working with states?
SAMUEL LEVINE: We welcome states moving forward with privacy laws. I think they feel a sense of desperation to do it. Again, I recall what was happening in the early 2000s, when states were on the frontlines of trying to protect people from financial abuses and banking regulators not only were not helping the states but in some cases were actually working to block state action to crack down on predatory lending through preemption.
That is not the attitude the FTC is taking. We issued a report a couple of months back on our collaboration with state enforcers, and that collaboration can certainly include upon request working with them on privacy legislation, working with them on junk fee legislation, and working with them on “click to cancel” legislation—you name it. So, as you see these states move forward with strong data protection bills in many cases I think that is increasing pressure on Congress to act, which I think is a good thing.
It is also, by the way, gratifying to see that a number of states are moving forward not with the model grounded in notice and consent but with real strong data minimization standards that actually limit harmful collection and retention of people’s data.
So, I am encouraged by the momentum in the states, both red and blue, to pass privacy protections for their citizens. I am also encouraged by the way that there is now bipartisan support in Congress for comprehensive privacy legislation.
But, to be very clear, federal legislation is overdue. The FTC has been saying that for more than a decade. The fact that so many states are moving to pass these protections should send a strong message to Congress that this is what the people want, this is what our country needs, and they should move forward and get comprehensive legislation across the finish line.
JANIS KESTENBAUM: You mentioned congressional support for comprehensive privacy legislation. There has obviously been some recent activity in the Senate on online safety legislation for teens and kids in the form of the Kids Online Safety Act [KOSA] and related bills, but the FTC has been active in this area using its Section 5 authority. What has been your approach to online safety for teens and kids in this area?
SAMUEL LEVINE: It is an excellent question. If you look at KOSA, you look at the Children and Teens’ Online Privacy Protection Act (COPPA 2.0), and you look at the American Privacy Rights Act, in bill after bill, who does Congress trust to enforce these laws, who does Congress trust to administer these laws? The FTC. I think that is a bipartisan vote of confidence in the work we are doing.
The work we are doing on this is really quite considerable. First, we have been aggressive, and I think successful, enforcers of COPPA [the Children’s Online Privacy Protection Act]. COPPA protects kids under thirteen, and as I mentioned earlier, we brought the largest COPPA case in our history against Epic Games, as well as significant cases against Microsoft Xbox and Amazon Alexa.
I also want to draw your attention to what we have done to protect teens because I think this is really groundbreaking. We hear a lot of concern on the Hill, among psychologists, from the Surgeon General, and from Commissioner Bedoya who has been such a leader on this issue, about the threats to teen mental health that social media can pose and the harm this can cause for families and for communities. We have been very upfront on this.
I am particularly proud of the case I mentioned earlier, NGL. This was an anonymous messaging app that was targeting high schoolers in its marketing. It was encouraging high schoolers to ask each other questions like, “Are you straight? Who do you have a crush on?” and things like that. Sometimes the company itself was sending messages to teenagers pretending to be other people, pretending to be their friends, just to drive engagement with the service.
We brought a lawsuit alleging that they were violating COPPA, the children’s privacy law; they were violating the Restore Online Shoppers’ Confidence Act [ROSCA], the subscription law; and, I think most interestingly, that their marketing to teens was unfair under the FTC Act given the immense amount of cyberbullying and harassment we were seeing.
The order in that case prohibits NGL from having teens on the platform or from marketing the platform to teens. Again, that order got unanimous support at the Commission. I think that is a sign of how strongly people across the political spectrum feel that some of these online services are out of control when it comes to the threats they pose to teens.
I am also really proud that in the Epic Games case I mentioned, a big part of it was COPPA—kids under thirteen—but we also alleged that the company’s default settings with respect to teens, which allowed live text and voice communications among strangers, posed unique threats to teens, and we got those default settings changed in the order. We have a track record now of obviously continuing to make COPPA a major priority. We are also taking a more holistic look at what harms people who are over thirteen might face. The fact is that when a kid turns thirteen, all of the harms they can face online don’t go away. And our authority does not go away either, because in addition to COPPA we have that unfairness authority, we have that deception authority, and over the last couple of years we have used it to expand our notion of who needs protection online and what steps companies need to take to be responsible with their services.
JANIS KESTENBAUM: No discussion with the FTC in 2024 would be complete if it did not mention artificial intelligence, which is obviously an extraordinarily hot topic these days. What do you see as the main benefits and risks to consumers from this technology, and what is the Bureau’s approach to addressing the risks?
SAMUEL LEVINE: We could obviously have a whole segment—I have done a few—on the benefits and risks of artificial intelligence. It promises enormous benefits potentially to our economy, to health research, and to our productivity. At the agency we welcome innovation in this space, and on a personal level I am proud that so much of that innovation is happening here in the United States.
But we also have to be mindful of the harms this can pose. I do not think we want to repeat the mistakes of Web 2.0, when the government I think took a hands-off approach to social media, and now here we are two decades later playing catchup. All the work I just talked about trying to protect teens and kids I am very proud of, but let’s be honest: We are playing catchup. These services are deeply entrenched and a lot of harm has already been done. When it comes to AI we want to be proactive in making sure that companies that deploy these powerful tools are doing so responsibly.
I think frankly we have been a little ahead of the curve on this. We issued a major report on potential benefits and risks of artificial intelligence in the summer of 2022, before ChatGPT and other services took off, and over the last year and a half we have been quite proactive on the enforcement front.
For example, we brought a major case against Rite Aid. The company was using AI facial recognition technology in a way that was improperly tagging, in particular, women and people of color as shoplifters. Customers were being detained in its stores and asked to leave—a huge embarrassment and harm—and that was because of the reckless use of this AI technology. Our case brought a stop to that and banned Rite Aid’s use of AI facial recognition for five years. And to the extent that the company uses other biometric systems, it required an intensive monitoring program to make sure that those systems were not producing inaccurate or biased results.
We have also been really aggressive on fraud. We know that fraudsters are often early adopters of new technology. We saw that on the internet, we saw that on social media, and we are seeing it with AI. We are not just waiting around for scams to spread.
We finalized earlier this year an Impersonation Rule, which is going to be a hugely important rule. It prohibits the impersonation of government and business. We have proposed to also expand it to cover impersonations of other people. That is going to be a critical tool for us to take on AI-related fraud.
We made clear that the Telemarketing Sales Rule can cover AI robocalls. We have brought cases against companies like Automators AI, which was falsely promising that they were going to get people rich with AI.
One initiative I am particularly proud of is our voice-cloning challenge [VCC], where we solicited ideas from the public. Voice cloning is a huge risk because you post five seconds of your voice on social media and someone could grab that and use it to train an AI model that can then impersonate your voice. You can imagine all of the ways that could be used to commit fraud—say, pretending to be a grandchild in need of money calling a grandparent. We launched a challenge to invite the public to give us ideas—technological and policy ideas—for how to interrupt this fraud, how to stop this fraud, and we got some exciting submissions. We announced five winners earlier this year.
So we are using all the tools in the toolkit, including market studies, enforcement, rulemaking, challenges like the VCC, and business guidance. We have put out award-winning business guidance on how companies should think about their use of AI in the context of the FTC Act and how the law is being enforced. I feel really proud of the work we have been able to do so quickly in the face of these challenges and am confident that this work is going to continue over the years to come.
Our extraordinary staff did not learn about AI two years ago, it is something we have thought about for a long time. So when this started to be deployed more widely two years ago we were ready, and I think we have shown how ready we were in our actions to date.
AARON BURSTEIN: Sticking with AI for another minute, Chair Khan and other FTC officials have voiced some concerns about the competitiveness of inputs to develop generative AI and having access to AI inputs. Is BCP coordinating with the Bureau of Competition [BC] or otherwise taking those sorts of competition concerns into account in BCP’s work?
SAMUEL LEVINE: We are. Another thing I am proud of—if you look, for example, at what the Chair has said on open models in speeches, and at the staff blog post we put out on this—is that these are cross-agency efforts. When we did that blog post, you had staff from the Bureau of Competition, staff from our outstanding Office of Technology, who took the lead on it, and of course staff from the Bureau of Consumer Protection, who contributed to it.
I think what the Chair really wants to ensure is that we do not see a repeat of Web 2.0, with a handful of tech giants dominating the space and major barriers to entry for startups. We want to see a competitive landscape—this is not my area, but I believe it—for the development of AI tools across the stack, but we also want to make sure that these tools are not being used for fraud or being deployed recklessly, so there is some balancing there.
I think what you have seen in our work to date and what you are going to see in our work in the coming months is a real effort to balance carefully these competition goals, which are important, and also making sure that companies are deploying these tools responsibly and exercising sufficient control over them to make sure they do not get into the hands of scammers and other actors who can do us harm.
People have talked about whether there should be an AI agency—that is up to Congress—but the deep expertise our agency has across markets as well as our dual mission of having both consumer protection authority and competition authority, to say nothing of the world-class team of technologists we have been able to recruit, I think positions us uniquely to be a leader in this space, and I think we have already shown that leadership over the last two years.
AARON BURSTEIN: Looking at the relationship between BCP and BC a bit more broadly, historically their work has been fairly siloed. Has that changed under your and Chair Khan’s leadership?
SAMUEL LEVINE: It is changing, yes. Some of that is visible, some of that may not be visible yet, but we are seeing a lot more coordination between BC and BCP. I have a pretty good sense of the investigations underway in BC, I think BC leadership has a pretty good sense of our investigations, and there are also areas that we are looking at together. A lot of issues that the FTC works on do not sound strictly in consumer protection or strictly in competition.
Think about right to repair. I am particularly proud of BCP’s role in driving Americans’ ability to repair their products, but this is a cross-agency effort. BC wants to make sure that repair restrictions are not being used to monopolize industries. They want to make sure that independent repairers can compete. BCP wants to make sure consumers have a choice and can go to different repairers to get their products serviced without firms violating the Magnuson-Moss Warranty Act or other laws we enforce. We have worked with our Office of Policy Planning to support state efforts on right to repair. We have our Office of Technology, which has written about and worked with us on a lot of the specific tech issues, like those around smartphones, that we have seen in the right-to-repair space. Right to repair is just one example.
Another is franchising. We had an announcement a couple of weeks ago that was joint between the Bureau of Consumer Protection and the Office of Policy Planning, on issues that do not sound in one vertical or another but really raise concerns around both competition and consumer protection.
I will just say that where we have been most successful in doing that is forming organic relationships across the Bureau, breaking down those silos—sometimes formally, but I think it is even more important to break down those silos informally—so that a lawyer in BCP can call up a lawyer in BC, check in, know what is going on, and make sure that our work is coordinated and moving in the same direction toward making markets more fair and competitive.
JANIS KESTENBAUM: I am going to move us back for a second to AI issues. BCP has been very active in using algorithmic disgorgement as a remedy in a number of consent orders involving AI, and I have to say businesses have certainly taken notice.
To start, because some of our readers—even though many businesses have taken notice, as I just said—may not be familiar with that term, my first question to you has two parts: Can you describe what algorithmic disgorgement is; and, second, is that remedy always appropriate in cases involving AI, and, if not, where is it appropriate and what is the limiting principle?
SAMUEL LEVINE: It is a great question. Certainly, we have brought now at least half a dozen—and I think quite a few more than that—cases where we have secured this remedy. What we call it in our orders is typically “data product deletion.” What that reflects is that if companies are collecting information illegally—for example, we secured this relief in our case against Kurbo and Weight Watchers, where we alleged that the companies had been collecting information from kids in violation of COPPA for many years—we required the companies to delete any models, algorithms, or work product derived from the data they collected illegally.
Why did we do that? Well, if you are a company and you are collecting data illegally from people for years and the FTC comes along and says, “Hey, that was illegal, delete your data,” does that change the ex ante incentives? It is certainly good for protecting people going forward, but what that means potentially is that that company was able to derive enormous benefits from illegally collecting people’s data in terms of their ability to train their models and make their models more effective. At the end of the day they have to stop, but they have reaped the benefits of it.
Again, if you think about ex ante incentives, which I think all enforcers need to be thinking about, that is not the right set of incentives. We want companies to be following the law when they are collecting information from people, not breaking it, knowing that it is going to be profitable at the end of the day. Data product deletion is designed to address that. I think it is of a piece with our data deletion requirements, and we typically seek both in cases where we seek data product deletion.
You asked when do we seek it and is it appropriate in every case. No, it is not appropriate in every case. We are looking at this case by case. There are cases where—again, it is hard to generalize this—data has been collected illegally and models are being trained on it with that data, where we think deletion is an appropriate remedy. There may be other cases with other facts that make that a lot more complicated, and you see in our cases sometimes different versions of the deletion remedy based on the specific facts.
Again, this is not something that we are seeking everywhere, but we do think it is appropriate in certain cases, not only to protect the public going forward but also to change the ex ante incentives around illegal data collection. I have to say that, as we now see companies racing to hoover up data to train their AI models, and because we have been securing this remedy for years now, I am glad to hear that companies are noticing.
JANIS KESTENBAUM: Moving on to “dark patterns,” which have also gotten a lot of attention by the FTC, how is this a distinct category of deceptive or unfair practices in your view?
SAMUEL LEVINE: There is a lot of commentary on the use of dark patterns and the FTC’s I think really successful work to crack down on these.
Let me start by saying this, and this is not the first time I have said it: Dark patterns as such are not illegal. There is no ban on dark patterns in the FTC Act. Dark patterns are illegal if they deceive people, because that is deceptive under the FTC Act. They are illegal if they cause injury to people that is unavoidable, because that is unfair under the FTC Act. They are illegal if they make it impossible for people to cancel online subscriptions, because that violates ROSCA, another law we enforce. And there are other laws too where dark patterns come into play.
What makes dark patterns unique is not that there is some distinct legal category. It is that they are using sophisticated design techniques, user interface testing and A/B testing, to manipulate people. Sometimes we see this in old-fashioned ways: Companies will make a false or misleading claim in bold and then will have some disclaimer in light gray at the bottom. There you might see that the dark pattern is effectively deceptive, and we have brought cases alleging that.
Other times companies are not actually making any claims, but they are using design elements to, for example, frustrate cancellation. That has been a big theme in our litigations against Amazon and Adobe and our $100 million settlement with Vonage in 2022, so we are not looking only at deceptive claims but also design practices.
The FTC did not make up the term “dark patterns.” It was coined by Harry Brignull, and we see it used by marketers as well. What we have seen in our cases and investigations is that UX designers warn their superiors at companies, “These patterns are dark, these interfaces are dark, these interfaces are tricking people; we have to clean it up, we have to make it easier to cancel,” and often they are overruled by other managers in the company who say, “No, this particular pattern is optimizing conversions, so we are going to keep it as is.”
The notion that design elements and user experience testing are somehow something the FTC made up is absolutely false. We are seeing at very high levels in corporate America decisions being made around how user experiences should be designed, sometimes to help people but sometimes to hurt people, and when we see that done in a way that is deceptive or unfair, we are taking action.
To offer a few notes on this quickly, we issued a major report on dark patterns, “Bringing Dark Patterns to Light,” I think in the fall of 2022. It has a wonderful taxonomy of the types of dark patterns we see in the wild. We brought our first case explicitly describing the use of dark patterns, Credit Karma, I think a couple of weeks after that. That was also the first case where we actually returned money to consumers whose time was wasted, which is another consequence you see from dark patterns.
In the years since, we have brought major cases against Publishers Clearing House, Amazon, Vonage, Adobe, and others. So this has been an active enforcement area for us, and I think it reflects what we are seeing online, which is a lot of manipulative design patterns leading to a lot of harm for a lot of consumers.
AARON BURSTEIN: Let me shift gears and look beyond the United States. International enforcement cooperation is an important part of the FTC’s consumer protection work. What issues are you collaborating on with foreign enforcement officials? A second part of the question is: how do differences between U.S. and foreign consumer protection and privacy legal frameworks affect how you approach that work?
SAMUEL LEVINE: I think we are trying to learn from each other. I have visited counterparts in Brussels, London, and elsewhere and have had conversations about social media harm to teens, dark patterns, and online privacy issues. We are undoubtedly dealing with a common set of challenges, but we are also dealing with very different legal cultures and often very different agency structures across the world, and I think it is instructive for us to see what is working and what is not working overseas, and vice versa.
For example, on AI—and I would say the same for dark patterns—I have heard from other enforcers internationally how impressed they have been with the FTC’s ability to move quickly to bring cases, to issue guidance, and to make clear how our existing set of tools—UDAP, a tool we got in 1938—still has relevance and is in fact more relevant than ever in confronting emerging technology.
I think what you see in Europe is that they tend often to move in a regulatory direction—you see the AI Act, you see the General Data Protection Regulation. Sometimes that takes a little longer, but sometimes it allows them to take on bigger problems in a bigger way. So, there are tradeoffs there.
Obviously, we are looking at Europe and seeing what is working and what is not, and I think they are looking at our unique strengths, especially the adaptability of our tools, our nimbleness, and our ability to go into court pretty quickly to protect the public. We are learning from each other, we are having conversations, and I think we share the goal of making online commerce safer, giving more people privacy, and rooting out dark patterns and false claims. We have common goals across agencies across the world, and we are watching each other and seeing what is working and what is not.
AARON BURSTEIN: To wrap up, we know that under your tenure and Chair Khan’s leadership there have been a lot of changes in how the FTC approaches consumer protection issues both procedurally and substantively. Given that none of us knows what will happen with the upcoming election, which of those changes that you have helped to implement do you think will endure regardless of who leads the FTC?
SAMUEL LEVINE: I think the changes we have implemented are going to endure—not all of them; we don’t know what is going to happen after November. There is always a lot of attention, understandably, on disagreements among Commissioners; dissents and statements will fly back and forth, and that will get headlines.
But we have consensus on the vast majority of cases, certainly on the consumer protection side; and it is not just traditional cases where we have consensus, but some of the most important cases we have brought over the last few years have generated bipartisan consensus. Kochava is one of them.
I mentioned NGL, I mentioned the unfair marketing to teens, but that case also named individuals. Individual liability has been a big debate over the last few years. I said at the top of the interview why it is a big priority for me. We have Commission consensus that the individuals named in NGL were appropriately named.
I mentioned our use of unfairness and securing monetary relief in privacy cases like BetterHelp. We are getting bipartisan support for so much of the work we are doing, and I think particularly when it comes to emerging technology, when it comes to social media harms to teens, and when it comes to unchecked tracking of our geolocation data, these are not Republican or Democratic issues. These are issues where leaders and people across the political spectrum are saying that the government needs to do more, the government needs to do better, we need stronger protections in place for people’s privacy, people’s mental health, and people who are online.
So, I am confident that, whatever happens in November, the proactive approach we are taking to consumer protection, especially digital consumer protection, is going to endure. Already you have five Commissioners with very different views—and I don’t just mean Republicans versus Democrats, but differences even among the Democrats and among the Republicans—yet we are still seeing consensus on so much of our work, and that is really gratifying to see.
AARON BURSTEIN: Thank you so much.
SAMUEL LEVINE: Thank you.