Of Systems and Structures
George Loewenstein and Nick Chater are superb and exceptionally distinguished social scientists; they have made countless important contributions to behavioral science (and Loewenstein’s work in particular has had a real influence on policymakers). It is surprising but true that a lengthy article of theirs, which has attracted considerable attention, is a misfire. The misfire deserves attention not only because it is useful to correct misunderstandings and mistakes, but more fundamentally because an accurate accounting is of more general interest to those concerned with the actual operations of the modern administrative state. The value of an account of those operations, which remain poorly understood, goes well beyond any academic dispute. I engage Loewenstein and Chater with these points in mind.
Loewenstein and Chater think the most serious social problems, including climate change, are not hard to solve, but that powerful private groups have blocked the solutions by insisting that the right approach lies in individual responsibility. In their account, behavioral scientists doing policy-related work have turned out to be corporate pawns or dupes. The reason is that they have focused on individuals (what they call “the i-frame”) rather than on the systems in which individuals operate (what they call “the s-frame”). Loewenstein and Chater believe that the effects of i-frame interventions have been modest and disappointing. They scold behavioral scientists for spending their time on i-frame interventions (where they unwittingly collaborate with powerful private groups), and they want them to shift their attention to system change.
Since behavioral scientists in or near the world of actual policymaking have spent so much of their time on system change, it is fair to wonder whether that advice is necessary. Whether we are speaking of climate change, consumer protection, rail safety, road safety, or public health, behavioral scientists working with or in the administrative state principally focus on system change. (It is a little like suggesting that professors of English literature should pay more attention to William Shakespeare.) Perhaps Loewenstein and Chater are best taken to be advising certain academic researchers, far removed from the world of policy, who have indeed devoted a great deal of effort to exploring how best to change individual behavior? Or perhaps they have been influenced by commentators and media outlets, at least some of which do seem excessively or unrealistically drawn to individual-level behavior change as a complete or near-complete response to major policy issues.
To come to terms with the relevant advice, we should distinguish between (1) targets and (2) tools. Policymakers might target individuals, companies, or governments (local, state, or national). With respect to tools, policymakers might use mandates, bans, taxes, subsidies, or nudges. We could easily produce a three-by-five table, with fifteen boxes, each filled in by relevant initiatives. Loewenstein and Chater are unenthusiastic about nudges that target individuals—far less enthusiastic, I believe, than the evidence actually warrants. In any case, it is important to emphasize that targets and tools can be combined in diverse ways. As the example of the greenhouse gas inventory suggests, companies might be nudged by requiring forms of disclosure, which might result in system-wide change. Of course, mandates can also be imposed on individuals; consider mandatory seat belt usage, which has a plausible behavioral justification, as do, to take a far more controversial example, prohibitions on smoking. And we should not underrate the potential effects of i-frame interventions, which can do a great deal of good.
Various efforts to help consumers or investors, or to reduce risks to health and safety, may or may not be behaviorally informed. Those efforts may or may not be successful. Learning about individual behavior, and about what does or does not alter it, is exceedingly important. It might not be the best imaginable idea, and it is not particularly nice, to scold people who are working hard, and sometimes heroically, to understand what works and what does not—even if the relevant interventions are not likely to produce large-scale changes. It is important to underline an obvious point: to know which reforms to favor, we must ask what produces the highest welfare gains. It is possible that s-frame interventions will produce net welfare losses (many of them do) and that i-frame interventions will produce net welfare gains (same parenthetical). The fact that an intervention involves systems, rather than individuals, is hardly a guarantee that it is a good idea.
The Nonexistent Crowd-Out Effect
Is there a crowd-out effect? If regulators focus on nudges, will they be less likely to consider mandates? Are i-frame interventions drawing attention and support away from s-frame changes? Some people seem to think so. For example, Loewenstein and Chater contend that “[b]ehavioral scientists’ excessive enthusiasm for i-frame interventions has reduced the impetus for systemic reform.” That is false. Behavioral scientists, working for, with, or adjacent to the government, have been enthusiastic about system reform, and their work on i-frame interventions has not crowded out system reform. Let us put this sentence in italics: If we were making a list of 100 reasons why a desirable system reform has not happened in an important area, such as climate change, the fact that some behavioral scientists have studied i-frame interventions could not possibly make that list.
Having worked in the U.S. government for a number of years, on scores of legislative proposals and well over 2,000 regulations, I am unaware of any case in which i-frame interventions operated to deter or stop s-frame interventions, or to pull attention from them. (I have also worked with many other governments over the years, in Europe and elsewhere, and I am unaware of any such case in any nation.) To be sure, there might (must?) be some such cases (one? two? four?), but if anything, it would be more plausible to suggest that causation runs in the opposite direction: i-frame interventions alert policymakers (and others) to the existence of a problem, which spurs support for s-frame interventions. This “alerting” function of i-frame interventions, and their role in helping to shape and design effective s-frame interventions, are very important. For example, countries that have adopted nudges to discourage cigarette smoking at the individual level have increasingly also adopted more structural interventions, including restrictions on smoking in public places and cigarette taxes.
Lacking reliable evidence on behalf of their claim, Loewenstein and Chater point to unreliable non-evidence, such as the fact that the “brain represents stimuli of all kinds in only one way at a time,” and surveys finding that if you tell people about an i-frame intervention, you can reduce support for an s-frame intervention. Nothing follows from those experiments. Survey evidence of that kind hardly shows that the crowd-out effect is real or important—that in the actual world of policymaking (involving (a) legislation or (b) regulation, each of which has its own exceedingly complex processes and dynamics), fuel economy labels reduce support for fuel economy mandates or monetary incentives to buy electric vehicles, or that graphic warnings on cigarette packages reduce support for cigarette taxes or bans on smoking in public places. In government circles, those who favor, for example, fuel economy labels tend to also favor regulatory approaches designed to promote environmental goals. And if those who favor, for example, graphic warnings on cigarette packages do not also favor a ban on cigarettes, that is not because of a crowd-out effect; it is a judgment on the merits.
It might be tempting to point to opportunity costs. If we find that regulators focus on, say, energy efficiency labels, we might find it obvious that they cannot focus on energy efficiency mandates (or carbon taxes). But the category of regulators is very large, and if some people focus on labels, it does not follow that the apparatus, taken as a whole, is precluded from focusing on mandates, or even that the attention to mandates is delayed or reduced. To be sure, it is not unreasonable to suggest that a department that explores labels will have less time for mandates. But the number of reasons for and against a focus on mandates is very large, and it is reckless to suggest that if a department explores labels, it will, for that reason, decline to explore mandates.
Loewenstein and Chater offer a set of arresting stories about corporate campaigns, in which companies have drawn attention to the importance of personal responsibility and supported behavioral change at the individual level. But what lessons can be drawn from such stories? BP’s interest in carbon footprints may or may not be laudable, but would anyone argue that it is the reason the United States or the United Kingdom has not enacted carbon taxes, or what Loewenstein and Chater call “extensive regulation”? Is it plausible to think that if behavioral scientists had not supported anti‑littering campaigns, we would see more and stronger efforts to reduce plastic waste? The food industry may call for more exercise and healthier eating, not for prohibitions on the sale of fattening food. Is that amazing? It would be fanciful to suggest that some countries—Germany, Italy, France, Mexico—would have prohibited the sale of fattening food if only the food industry had not called for more exercise and healthier eating (and if behavioral scientists had not worked on how to encourage those desiderata).
If we want to offer an explanation of why some nation has not adopted a more aggressive approach to plastic waste, unhealthy eating, or climate change, surely we can do better than to point to the fact that corporations have called attention to the importance of personal responsibility (or to academic research on nudges). One more time (but without italics, so as not to overdo it): If we were making a list of 100 reasons why system reform has not happened in some important area, such as climate change, the fact that some behavioral scientists have been enthusiastic about i-frame interventions would be unlikely to make the list. When the United States adopted an aggressive set of climate change initiatives in 2022, including subsidies of multiple kinds, behavioral science played a supportive role—and i-frame interventions were evidently not a problem.
In the end, Loewenstein and Chater offer a kind of conspiracy theory (and in this they are hardly alone). In their view, policy problems are not really hard because “tried-and-tested s-frame[] solutions are available.” The main obstacles, they think, are “the active and coordinated efforts to block s‑frame reform[s] by concentrated commercial interests who benefit from the status quo.” (Their citation for that proposition is a book by the journalist Jane Mayer, ominously called Dark Money: The Hidden History of the Billionaires Behind the Rise of the Radical Right.) In their account, the problem lies in the machinations of “powerful groups” who maintain their power partly by “promoting the perspective that these problems are solvable by, and the responsibility of, individuals.” Those powerful groups, consisting of or funded by billionaires, have enlisted behavioral scientists, who turn out to be pawns or dupes, unwittingly contributing to the failure to implement the obvious solutions. We can find softer versions of this argument in other places, largely on the political left, with the suggestion that choice-preserving approaches, such as nudges, are “technocratic,” or “top-down,” or “tweaks,” or a distraction or diversion from what is needed, which is systemic change.
No one should downplay the influence, in some times and places, of powerful private groups over the processes of Congress and the administrative state. (Often such groups oppose nudges and work exceedingly hard to prevent them from going into effect.) No one should deny that in important domains, systemic change is an excellent idea. (Some behavioral scientists have spent their careers insisting on that point and working long days on behalf of systemic change.) But let us not neglect two challenges: tradeoffs and reasonable disagreement. There are no simple solutions to the problems posed by climate change, obesity, retirement policy, healthcare, privacy, educational opportunity, and plastic waste. One reason is that each of those issues involves complex tradeoffs; another is that reasonable people disagree about the appropriate response.
The good news is that behavioral science can make, and is making, significant dents in each of those problems. Incidentally, one of the ways that it can do that is by targeting individual behavior, with the laudable goal of improving people’s lives, and behavioral scientists who seek to understand that behavior, and to improve such targeting, ought to be applauded rather than scolded. As some regulators like to say: Better is good.