

Game(show) Theory: Lessons from TV Game Shows for the Pragmatic Litigator

Steven Schulwolf

Summary

  • Humans are hard-wired to judge the probability of random events as equal.
  • Heuristics often lead to people refusing to accept realities inconsistent with their assumptions. 
  • Unfortunately, this affects decisions about the timing of settlement discussions.

I love brain teasers. On my website, you can play “Resolution Roulette” and then read about the Monty Hall Problem. The Monty Hall Problem confounds people by tapping into cognitive biases that steer them in the wrong direction. The Monty Hall Problem, and cognitive biases in general, are relevant to the practice of law because a key part of lawyering is valuing cases. When lawyers “read tea leaves,” they focus on nuances in the law and the facts of their case, but in its purest form, valuing a case is an informed prediction of outcomes. Lawyers attempt to predict numerous factors: will the court grant summary judgment, will an expert be allowed to testify, how will key witnesses perform on the stand, how will the judge rule on motions in limine, or will the jury find in favor of my client? Some assumptions are conditionally dependent on the outcomes of others. The valuation of a case, like the Monty Hall Problem, involves an understanding of conditional probability, and requires the appreciation of all information and the causal interrelationship among various factors. Unfortunately, as Daniel Kahneman’s research demonstrates, humans are hard-wired to take short-cuts in assessing these factors. And despite any stereotypes to the contrary, lawyers are still human. Even lawyers who painstakingly analyze the legal merits of their cases fall victim to cognitive biases, just like the rest of the population.

In presenting several CLEs, I have found that while most lawyers appreciate discussions about behavioral psychology, many express skepticism as to its relevance to the practice of law. This article will discuss the biases at play in the Monty Hall Problem. Building on those lessons, it also addresses how biases affect another aspect of litigation: why cases do not settle earlier. Not to bury the lede, but biases often lead to cases being litigated longer than necessary.

Game(show) Theory

Most people are familiar with Let’s Make a Deal. Monty Hall shows a contestant three doors. Two of the doors conceal goats and the other a brand new car. The rules are simple. Monty asks you to choose a door. Then he reveals a goat behind one of the two remaining doors. Afterwards, Monty asks whether you would like to switch. Study after study shows that about 90 percent of people reject the offer and stick with their initial selection. However, switching doubles your chances of winning the car.

If your reaction is that there cannot be any advantage to switching, you are not alone. The Monty Hall Problem has confounded people for years. In 1990, when Marilyn vos Savant (who purportedly had one of the highest IQs in the world) noted in Parade Magazine that there was an advantage to switching doors, thousands of people – including prominent mathematicians – insisted that she was wrong.

Numerous books and articles have analyzed why the Monty Hall Problem is so difficult for people to grasp. The findings from these studies and the techniques employed by math teachers to explain cognitive illusions have practical applications for lawyers. As an initial matter, let’s try to understand why you should always switch. The Monty Hall Problem presents a finite set of possibilities for the distribution of the two goats and one car, as follows:

Door #1   Door #2   Door #3
Goat      Car       Goat
Goat      Goat      Car
Car       Goat      Goat

After you select a door, Monty must reveal a goat. Most people process this by incorrectly assuming that their initial selection has a 50 percent chance of winning once there are only two doors left. But no new facts support the conclusion that the odds the initially selected door conceals the car have increased from 33 percent to 50 percent. We need to focus on whether we obtained new information about the other doors.

Let’s assume that the initially selected door concealed a goat. That means there is a car and a goat behind the other two doors. Because Monty cannot reveal the car, he has to reveal the goat and the car is behind the other door. Thus, if you initially selected a goat, you will always win by switching. In two out of three scenarios, when you switch, you will win the car. Switching doubles the likelihood of winning.
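
If the math still feels suspicious, it is easy to check by brute force. The short Python sketch below is my own illustration (it does not appear in the original article); it simulates the game many times and estimates the win rate for staying versus switching, which lands near one-third and two-thirds respectively.

```python
# Monte Carlo sketch of the Monty Hall game (illustrative only).
import random

def play(switch: bool) -> bool:
    """Play one round and return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that hides a goat and was not picked.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # The contestant moves to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"Win rate if you stay:   {stay:.3f}")  # roughly 0.33
print(f"Win rate if you switch: {swap:.3f}")  # roughly 0.67
```

The simulation learns nothing from intuition; it simply tallies what happens, and the tally favors switching two to one.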

The Monty Hall Problem is a difficult math problem that masquerades as something simpler. Humans, including lawyers, take the bait and fall prey to the equiprobability bias. Most people quickly note that there are only two outcomes (a car or a goat) and only two remaining doors (the one initially chosen and the one left unopened) and incorrectly conclude that the door they selected now has a 50 percent chance of hiding the car. The equiprobability bias (or the uniformity assumption) demonstrates that humans are hard-wired to judge the probability of random events as equal and make decisions using this heuristic. Interestingly, the older we get, the more affected we are by this bias. A Belgian study of the Monty Hall Problem shows that primary school students (about 10 years old) are more likely to act rationally (i.e., switch) than secondary school students (about 15 years old), who in turn are more likely to switch than university students (about 19 years old).

One reason we get worse at the Monty Hall game as we age is that heuristics often lead people to refuse to accept realities inconsistent with their assumptions. Adding insult to injury, another study found that pigeons learned and adapted their behavior more quickly than humans when playing the game multiple times. The pigeons learn from experience, but humans are hard-wired to reject data inconsistent with their assumptions. We just “know” that switching cannot help and rationalize results inconsistent with our beliefs as bad luck. Ruma Falk writes that people “rarely display any shred of doubt when they instantaneously rely on the uniformity assumption, as if there is intrinsic certainty to that belief.”

Numerous creative studies have been devised to help us understand why the problem is so difficult for humans. One study suggests that by employing the equiprobability bias, people fail to understand the causal structure inherent in the game’s rules. Imagine you had a car and there were only two potential reasons the car would not start: no gas or a dead battery. The battery and gas tank are completely independent. Yet, if you know the car won’t start and that you have a full tank of gas, you have learned something about the battery. Burns and Wieth refer to this as the Collider Principle and suggest that what we know about each door is causally dependent (albeit conditionally) on the information we learn from Monty after he reveals the door with a goat. At its core, the Monty Hall Problem is difficult because technically it requires relatively complex math like Bayes’ Theorem (see note 6). Because most people—especially lawyers—panic when they look at math formulas, hoping a short-cut (i.e., the equiprobability bias) saves the day is a basic human instinct.
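
For readers who want to see the math behind the reference to Bayes’ Theorem, the calculation can be written out in a few lines. This is a sketch in my own notation, not the author’s. Suppose you pick Door 1 and Monty then opens Door 3 to reveal a goat:

\[
P(\text{car behind Door 2} \mid \text{Monty opens Door 3})
= \frac{P(\text{Monty opens Door 3} \mid \text{car behind Door 2}) \cdot P(\text{car behind Door 2})}{P(\text{Monty opens Door 3})}
= \frac{1 \cdot \tfrac{1}{3}}{\tfrac{1}{2}} = \frac{2}{3}.
\]

The denominator is one-half because, averaged over the three equally likely car positions, Monty opens Door 3 half the time (he can never open your door or the door hiding the car). The two-thirds result is exactly the advantage from switching described above.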

Multiple Biases at Play

If the Monty Hall Problem only showed that people were bad at math, it wouldn’t be very interesting. Sure, most people fail to appreciate that there is a mathematical benefit to switching, but that is only the tip of the iceberg. The vast majority of people who refuse to switch do so because they (incorrectly) believe that each of the remaining doors has a 50 percent chance of hiding the car. As we already noted, 90 percent of people stay with their initial door. But if people think it is just a coin-flip, why don’t more people switch? The Monty Hall problem is difficult because it taps into many cognitive illusions. Three biases further influence the decision to stay with the initially selected door: (1) status quo bias; (2) the endowment effect; and (3) confirmation bias.

People feel worse about a bad result when they affirmatively took action than when they passively wound up with the identical bad result. Jason Rosenhouse provides the following example of the status quo bias: You have an investment account holding $10 of stock in Company A, and you are thinking of selling that stock to buy stock in Company B. There are two scenarios: (1) you sell Stock A, buy Stock B, and then Stock B becomes worthless and you lose $10; or (2) you never sell Stock A, but then Company A goes bankrupt. Either way you lose $10, but studies show you are more upset if you affirmatively took the action of selling Stock A. These emotions help explain why most people are fine with keeping the door they initially selected. They know they will feel worse if they switch away from a winning selection.

The endowment effect describes the phenomenon in which people demand more to give up something they already possess than they would be willing to pay to acquire the same item. In the Monty Hall game, once people select a door, it is no longer “door number one”; it becomes “their door.” The endowment effect is powerful, and it helps explain why, if people believe there is no benefit to switching, they won’t. Studies show that if someone else initially selects the door and a new person is offered the opportunity to switch, the new person does so at a much higher rate. Why? They lack the same attachment to a choice made by the initial selector. It was never their door.

Finally, people want their choices to be vindicated. However, we must be careful not to view new information through the prism of our initial assumptions. We have already demonstrated that a key to understanding the Monty Hall Problem is to focus on the information learned about the other doors. This is difficult because people prefer information that proves they were correct. Thus, confirmation bias focuses people on the “fact” that they survived the opening of one door and now only two doors remain, even though they learned nothing new about their selected door.

Medical journals warn doctors about how cognitive biases can impair their judgment. For example, radiologists often unduly fixate on their first-sight diagnosis despite inconsistent subsequent data. They also actively search for supportive data rather than for an alternative explanation. To combat confirmation bias, radiologists are now advised to analyze images before reviewing notes from other physicians and not to form a conclusion until all the data have been reviewed. Lawyers would be wise to adopt a similar approach. As discussed below, confirmation bias can influence how we process information obtained in discovery.

Why Tom Petty Was Wrong About Discovery

Tom Petty advises that the waiting is the hardest part. With all due respect, waiting is easy. The status quo bias encourages waiting. Unfortunately, this affects decisions about the timing of settlement discussions because parties often refuse to negotiate until completing expensive discovery.

To demonstrate this, let’s play another game. Peter Wason devised a study in which people are shown four cards, each bearing one of the following symbols: “E,” “K,” “4,” and “7.” They are told that each card displays a letter on one side and a number on the other, and are then asked which cards need to be flipped over to determine whether the following statement is true: “[i]f a card has a vowel on one side, then it has an even number on the other side.” So, what would you choose? Fewer than 5 percent of people answer correctly. Most people say E and 4, but the correct answer is E and 7. The 4 taps into our confirmation bias because we want it to support the statement, but, in reality, whatever is on the other side does nothing to disprove the statement. However, if a vowel is on the other side of the 7, the statement is untrue.
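
The logic behind the correct answer can be spelled out in a few lines of code. The Python sketch below is my own illustration (the function name is hypothetical, not from any study); it checks, for each visible face, whether flipping that card could ever falsify the rule.

```python
# Which of Wason's four cards could falsify the rule
# "if a card has a vowel on one side, then it has an even number on the other"?
VOWELS = set("AEIOU")

def could_falsify(visible_face: str) -> bool:
    """Return True if the hidden side of this card could disprove the rule."""
    if visible_face.isalpha():
        # A card showing a vowel can falsify the rule if an odd number is hidden;
        # a card showing a consonant can never falsify it.
        return visible_face.upper() in VOWELS
    # A card showing an even number can never falsify the rule (the rule says
    # nothing about what may sit behind an even number); a card showing an odd
    # number can falsify it if a vowel is hidden on the other side.
    return int(visible_face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([card for card in cards if could_falsify(card)])  # prints ['E', '7']
```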

The Wason test potentially suffers from the same legal skepticism as the Monty Hall Problem: interesting, but what does that have to do with the practice of law? Thankfully, researchers converted the Wason study into something more legal. They asked lawyers to select the files necessary to demonstrate that “male managers never promote female employees to the position of software engineer.” Importantly, they were told that the judge admonished them to select only the files that were absolutely necessary. They could select any combination of the following four files:

(A) the file of an employee whose gender is unknown, who was recently promoted by a male supervisor;

(B) the file of an employee whose gender is unknown, who was recently promoted by a female supervisor;

(C) the file of a male employee, who was recently promoted by a supervisor whose gender is unknown; or

(D) the file of a female employee, who was recently promoted by a supervisor whose gender is unknown.

The correct answers are A and D. Seventy-five percent of lawyers answered incorrectly, and judges did even worse (only 14.2 percent were correct). So while lawyers did better than participants in the pure Wason test, they still performed poorly. Selecting unnecessary files was a more common error than failing to select needed files. More than half of the lawyers selected file C. However, C (like the 4 in the Wason test) is selected by people seeking to confirm their suspicions. Reading a file about a male employee does not prove or disprove the statement about the treatment of female employees. In other words, because of confirmation bias, lawyers often insist on more discovery than they need.

The vast majority of cases settle. Litigation is expensive. Thus, in the abstract it is economically rational to settle early to avoid litigation fees. However, parties who settle early must be comfortable with making decisions in the absence of perfect information. Deciding when to stop the discovery process is difficult. Early settlement offers, even reasonable ones, are often viewed suspiciously (“what are they hiding?”). Typically, parties spend money to confirm that they cannot do better than the initial offer. Moreover, the adversarial process glorifies finding the “smoking gun” to force the other side to capitulate. But how often does that really happen?
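
One way to make that intuition concrete is a back-of-the-envelope comparison. The notation below is mine, not the author’s, and it is deliberately simplified; from a plaintiff’s perspective:

\[
\text{accept an early offer } S \quad \text{if} \quad S \;\ge\; p \cdot V - C,
\]

where \(p\) is the estimated probability of prevailing, \(V\) is the expected recovery if the case is tried, and \(C\) is the remaining cost of litigating. The studies discussed below suggest that biased estimates of \(p\) and \(V\), together with the pull of discovery yet to be taken, lead parties to reject offers that this simple comparison would favor.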

One author’s study suggests that not only do lawyers insist on taking unnecessary discovery, but once it is obtained they feel the need to rely on it, such that “waiting increases weighting.” We are victims of nonconsequentialist reasoning: we have difficulty assessing today how we will react to new information, and once that information is acquired, we rely on it too heavily.

Demonstrating this again involves converting more general studies into legal ones. For example, a famous study asked nurses whether they would be willing to donate a kidney. One group was immediately told they were eligible. A second group was tested for eligibility first, found out they were a match, and was then asked whether they would donate. Only 44 percent of the nurses in the first group agreed to donate, but 65 percent of the nurses in the second group decided to do so. Having decided to test for eligibility, they wanted their ultimate decision to rely on the new information.

A legal variation of the study attempts to show how this phenomenon affects the timing of settlement. Attorneys were asked to assess a $400,000 settlement offer in a personal injury case. Half of the lawyers were told immediately that a governmental report implicated the defendant’s product. The other half were told that the report would come out in the future and were given the option of responding to the settlement offer or waiting for the report. Most of those attorneys waited, and once they received the report, they gave it more weight for having waited. Of the lawyers who knew about the report when the offer was made, 66 percent agreed to settle; only 44 percent of the attorneys who waited for the report wanted to settle. In other words, just like the nurses, many lawyers waited for what should have been irrelevant information and then, once they received it, relied on it in a way that altered their decisions. The additional discovery encouraged more litigation.

It would be ridiculous to argue that discovery is always counterproductive. However, these studies suggest that lawyers need to proactively attempt to quantify not only the costs of additional discovery but also the likelihood that such discovery will materially alter their valuation of the case. In many cases, discovery is unlikely to move the needle materially. Those cases are good candidates for early settlement. Unfortunately, cognitive biases often get in the way. If playing the Monty Hall game teaches lawyers anything, it is to be wary of whether biases are influencing their valuation of a case.

*Footnotes in this article have been removed. Please contact the author directly for a copy of the article which includes the footnote references.
