In his latest book, The Undoing Project, author Michael Lewis introduces us to the fathers of behavioral economics, Amos Tversky and Daniel Kahneman. Its first chapter describes how Houston Rockets General Manager Daryl Morey used behavioral economics to rebuild the team beginning in 2007. The key, Daryl Morey noticed, was that his recruiters and coaching staff invariably fell prey to specific errors in their decision making when selecting players. For example, recruiters who found a candidate they liked tended to overvalue information reinforcing their decision and ignore information suggesting that the player should not, in fact, be drafted. By eliminating these errors in judgment and replacing “judgment calls” with hard data, the Rockets could pay less for better players. This was the same approach the Oakland A’s used to achieve success during their famous 2002 baseball season, and the subject of Michael Lewis’ prior book, Moneyball.
What is behavioral economics, and how does it relate to the work we do as business lawyers? In short, behavioral economics is the science of how people make decisions. By understanding the techniques people use to make their decisions, including those that cause us to occasionally make bad decisions, we can accomplish two things. First, we can help other people make better decisions (or perhaps, instead, make the decisions we want them to make). Second, we can better understand our own decision making processes and, with a more concrete understanding, improve them. Understanding decision making can improve our performance in a large number of arenas, but certainly assists in performing such tasks as negotiating deals, structuring contracts, or building compliance systems.
Behavioral economics builds on the traditional economics concept of normative decision theory, which describes the rules by which a fully rational individual makes choices. Normative decision theory makes two basic assumptions. First, it assumes that the person making choices has complete information. In other words, the person knows all the information relevant to making the choice. Second, normative decision theory assumes that the person is rational—that is, capable of making choices that are logical and consistent based on that person’s desires. For example, if a person prefers coffee to tea, and also prefers hot chocolate to coffee, then a rational person will ask for a hot chocolate when offered a choice between that and tea (this is called the principle of transitivity). When you know the logical rules by which rational persons make decisions, the argument goes, you can build mathematical and logic models of their behavior, use those models to predict results, and also develop responsive strategies. These rules of logical choice, called utility theory, were described by mathematician John von Neumann and economist Oskar Morgenstern in their 1944 work, Theory of Games and Economic Behavior, and form the basis for modern game theory.
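To make the idea of modeling rational choice concrete, the short sketch below (a hypothetical illustration, not drawn from the sources discussed here) encodes a small set of stated preferences, derives every preference they imply, and checks that the chain contains no contradiction; in other words, that the chooser satisfies transitivity.

```python
# A minimal, hypothetical sketch of modeling rational (transitive) preferences.
# The items and the stated preferences below are invented for illustration.

def transitive_closure(prefs):
    """Derive every preference implied by chaining: a > b and b > c implies a > c."""
    closure = set(prefs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Each pair (a, b) means "a is preferred to b."
stated = {("hot chocolate", "coffee"), ("coffee", "tea")}
implied = transitive_closure(stated)

# A consistent (rational) chooser never ends up preferring an item to itself.
consistent = not any(a == b for (a, b) in implied)

print(consistent)                            # True
print(("hot chocolate", "tea") in implied)   # True: predicts hot chocolate over tea
```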
The problem with utility theory is its limited application in real-world situations. People don’t have complete information when they make decisions, and as Tversky and Kahneman proved, people do not follow the rules of rational decision making when making choices. Instead, decision making employs a variety of cognitive shortcuts, called heuristics. Behavioral scientists, through empirical studies, have identified dozens of these heuristics. They bear names such as “planning fallacy,” “anchoring,” “confirmation bias,” and “loss aversion,” but they essentially describe the rules by which human minds tend to make decisions in place of the strict logical constructs that utility theory describes. In short, behavioral economics provides a useful tool for predicting and understanding decisions where standard economics tends to fail. For example, anchoring refers to the tendency to base subjective value judgments on a recently encountered number, even when that number is unrelated to the decision at hand. When asked to guess the percentage of African countries in the UN, people consistently pick a higher number when exposed to the number 65 than when exposed to the number 10 just prior to guessing. The planning fallacy refers to the consistent tendency to underestimate the length of time a task will take, even when a person has extensive experience performing that task. Kahneman and Tversky’s work is significant—Kahneman was awarded the Nobel Prize in Economics for his work in the area.
Understanding these rules can play a significant role in negotiations. For example, an attorney who understands how anchoring works can employ the concept to set expectations, both for her client and for the opposing party, that are reasonable and conducive to obtaining a negotiated solution. In addition, the attorney can be more aware of situations where anchoring might be affecting her own decision making, resulting in a potentially poor negotiation outcome. This article explores some of the more significant heuristics and how they affect negotiations.
Planning Fallacy
In Thinking, Fast and Slow, Daniel Kahneman describes the process of planning a book for a psychology course. When he polled the group of authors about how long they thought the project would take, they estimated about two years. Kahneman then asked the most experienced member of the group how long similar projects had taken in the past. After a little thought, the expert replied, “I cannot think of any group that finished in less than seven years,” and he said that about 40 percent of the projects had failed to reach completion altogether! Still, even though none of the authors were prepared to make a seven-year investment in a project with only a 60-percent chance of success, they went ahead with designing the book. They finished it eight years later, and it was never used.
Closely related to the broader phenomenon of optimism bias, the “planning fallacy” refers to the tendency for people to consistently underestimate both the time and costs for completing projects. Although the most obvious examples come from large public works projects, any lawyer can think of the times that a lawsuit, negotiation, or business deal took longer than expected and cost more than estimated. Empirical studies show that the planning fallacy reflects an underlying psychological tendency to ignore historical evidence when estimating the time and expense for a project. In one study, students were asked to estimate the length of time needed to complete and submit their honors thesis. The average estimated time was 34 days. The average actual time was 55 days. Follow-up studies showed that formalized planning and thinking about the results of prior projects had little effect on the planning error. Studies show not only that the planning fallacy is pervasive across different activities, but that even experienced professionals fall prey to planning errors on a consistent basis.
In a negotiation, the planning fallacy can play a significant role in how each side evaluates its positions. In a litigation situation, both sides will likely underestimate not only the amount of time needed to reach a conclusion, but also the cost of the litigation process. This will make them less likely to settle, based on a mistaken belief about the costs of reaching a non-negotiated resolution. In short, parties elect to take on risks they have not anticipated because they hold unrealistically optimistic beliefs about the likely outcome. They fail to settle when they should because of the planning fallacy.
In a deal situation, the planning fallacy has a different, but equally unfortunate, effect. Parties will underestimate the time needed to work through the negotiations or even the amount of time needed to negotiate and draft the details of the relevant documents. The unpredicted delay creates frustration as tasks a client or her counsel thought would take a couple of days or maybe a week to complete remain unfinished weeks later. In some cases, this frustration, resulting from the original unrealistic expectations, can cause a deal to blow up.
Daniel Kahneman suggests that the best way of avoiding the planning fallacy is to use a technique called “reference class forecasting.” Essentially, reference class forecasting entails a four-step process. First, identify a set of similar activities. When trying to predict how much a lawsuit might cost in legal fees, for example, identify a group of similar lawsuits. This group of similar, prior lawsuits is your reference class. Second, collect data on the reference class. How long did those lawsuits last from beginning to end? How much were the total legal fees? This data provides the baseline for evaluating your own situation. So, if your firm has handled 10 similar types of lawsuits in the past, and the average legal fees incurred were $100,000, then $100,000 is your baseline. Third, evaluate the effect of concrete differences between your particular case and the reference class cases. For example, if your firm’s hourly rates have increased year over year, you will want to adjust the baseline estimate upward to reflect the increases in hourly rates. If some of the prior cases required more witnesses than your case will, you might adjust your estimate downward.
Finally, the fourth, and possibly hardest, step is to actually use the estimate and ignore your inevitable desire to use your original “prediction” about the cost in place of the hard data. By using a data-driven, objective approach to forecasting, you can reduce the planning fallacy effect and make better decisions in negotiations.
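As a concrete illustration of the four steps, the following sketch works through a hypothetical fee forecast. The prior-case figures, the 8 percent rate increase, and the witness adjustment are all invented for illustration; the point is simply that the forecast starts from the reference class average rather than from an intuitive guess.

```python
# A minimal sketch of reference class forecasting for estimating litigation fees.
# All figures and adjustment factors below are hypothetical illustrations.

# Steps 1 and 2: identify the reference class and collect data on it.
prior_case_fees = [82_000, 95_000, 110_000, 101_000, 93_000,
                   120_000, 88_000, 105_000, 99_000, 107_000]

baseline = sum(prior_case_fees) / len(prior_case_fees)  # average fees in the reference class

# Step 3: adjust the baseline for concrete, known differences from the reference class.
rate_increase = 1.08     # assumed: hourly rates are up 8% since the prior cases
fewer_witnesses = 0.95   # assumed: somewhat fewer witnesses than the typical prior case

forecast = baseline * rate_increase * fewer_witnesses

# Step 4: use this figure rather than the intuitive "prediction."
print(f"Reference class baseline: ${baseline:,.0f}")
print(f"Adjusted forecast:        ${forecast:,.0f}")
```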
Anchoring
Amos Tversky and Daniel Kahneman ran an experiment where college students spun a number wheel rigged to stop only on the numbers 10 and 65. After each student spun the wheel, he or she had to guess the percentage of African nations in the United Nations. Oddly, the number on which the wheel landed had a profound effect on the guess. Students whose spin resulted in a 10 guessed, on average, that 25 percent of African nations were in the UN, whereas students whose spin resulted in a 65 had an average guess of 45 percent. Kahneman and Tversky called this mental heuristic—the tendency for a recently experienced number to affect decision making—“anchoring.” Kahneman describes anchoring as “one of the most reliable and robust results of experimental psychology.”
Most lawyers are familiar with the concept, although they might tend to think about it in basic terms. Lawyers learn to begin negotiations with either a high number or a low number to set expectations about the final result. Anchoring does work in this context, but anchoring effects also operate in subtler ways that are harder to identify and more powerful than one might expect. First, the number used as an anchor does not have to be related to the number being anchored. In the African nations experiment, the number on the wheel was unrelated to the question of how many African nations are in the UN, but it greatly influenced the students’ decision making. In another common experiment, subjects are asked to write down the last few digits of their Social Security number and then guess the number of marbles in a jar. Subjects with higher Social Security numbers consistently guess higher. Anchoring effects are hard to shake and operate even where the subject has independent information on which to make a reasoned decision. In one experiment, real estate agents were told the listing price of a property and then were asked to appraise it. Even when they had complete information about the property, their appraisals remained anchored to the listing price (including when the listing price was clearly implausible).
According to Kahneman, two different mechanisms cause anchoring: one that operates when we consciously think about the decision, and one that operates when we do not. When we are making conscious decisions about values (what Kahneman refers to as System 2 thinking), we tend to find an initial anchor for the value and adjust from that value. We also tend to under-adjust. As a result, the starting point supplied has a very real effect on the final result. In a negotiation, making the first offer—or even opening negotiations with a discussion that includes appropriately scaled numbers—can help set that anchor point and thus affect the final negotiation results.
Anchoring also affects unconscious decision making (what Kahneman refers to as System 1) through something called the “priming” effect. In this context, the anchoring number can create mental associations that inform the final decision making. Although the mechanism is different, the final effect remains similar.
In negotiations, taking advantage of the anchoring effect means acting quickly, perhaps by making an early offer designed to anchor the final results, or perhaps by opening negotiations with a discussion designed to expose the other party to higher or lower numbers generally. Anchoring doesn’t necessarily have to target the final result. You might seek to anchor the inputs to the other party’s decision making processes, such as their view of your client’s cost of capital, litigation costs, or other factors. Also consider the setting for negotiations. Conducting a meeting in a cheap coffee shop might create mental associations that help you negotiate a lower price, whereas meeting in an expensive restaurant might have the opposite effect. In any negotiation, anchoring efforts should occur early in the process, before the other party has an opportunity to anchor based on its own decision making processes or other experiences.
Confirmation Bias
In Predictably Irrational, psychologist Dan Ariely describes an experiment in which he asked MIT students to taste-test two types of beer. One was a regular beer, and the other was the same beer with some balsamic vinegar added, which the researchers called “MIT Beer.” Predictably, when forewarned that MIT Beer contained vinegar, the students preferred the regular beer. When they were not forewarned about the secret ingredient, however, the students typically preferred MIT Beer. This and similar experiments demonstrate that people’s prior perception of something strongly affects their interpretation of future experiences. This heuristic is commonly referred to as “confirmation bias”—the idea that we tend to interpret new facts and experiences in ways that reinforce our pre-existing beliefs. When we expect a beer to taste odd because we are told in advance that it contains vinegar, we are more likely to dislike the flavor when we actually drink the beer.