
Antitrust Magazine

Volume 35, Issue 2 | Spring 2021

Antitrust Cancel Culture: Do Economic Experts Really Cancel Each Other Out in Merger Litigation?

Brian E Rafkin and Blair Matthews

Summary

  • In over half of the 17 decisions studied, the courts relied heavily on experts, and in all but 2 cases relied on experts to some extent. 
  • Still, documents and fact witness testimony tended to matter more than the economic testimony. 
  • Examples of less successful approaches to economic testimony included economic models that did not comport with business realities or common sense; nitpicking of opponents' experts; and surveys.

Several prominent antitrust lawyers have observed that economic experts in litigated merger challenges tend to cancel each other out. Even the most recent Assistant Attorney General for the Department of Justice Antitrust Division adopted this view. Moreover, some judges who presided over merger trials have admitted that they found the economic testimony difficult to understand, which can be interpreted as further evidence that the experts cancel each other out.

In this article, we aim to test these observations against a systematic review of every court opinion issued in a government merger challenge since 2005 to determine whether economic experts do, in fact, tend to cancel each other out.

We draw two main conclusions from our review of these decisions. First, the notion that expert economists cancel each other out in merger trials is not borne out by the data. Instances in which judges simply threw up their hands and disregarded the economic experts are the exception, not the rule. The reality is that most of these courts relied on economic testimony to support their conclusions. That is not to say that economic testimony is the most important evidence, but that it matters. Economic testimony is one piece of the puzzle that “takes its place along with the other evidence.”

Second, we conclude that any concern that economists will cancel each other out is irrelevant. Because one cannot know in advance how a judge will treat economic experts and because both the government and the merging parties inevitably will proffer economic expert testimony, approaching a case from the perspective that the experts will cancel each other out is rather pointless. Instead, it is more constructive for litigants to consider how they can most effectively use their economic experts. Thus, the second part of this article draws upon the body of reported merger decisions to identify strategies for presenting persuasive expert testimony.

Do Economic Experts Really Cancel Each Other Out? A Data Analysis

To test the hypothesis that “economic experts tend to cancel each other out,” we reviewed the district court decisions in all DOJ, FTC, and state attorney general merger challenges that have been litigated to decision in federal court since 2005. The sample set consisted of 18 cases, spanning from FTC v. Foster (Western Refining) (decided in May 2007) to FTC v. Peabody Energy (decided in September 2020).

Overview of the Analysis

We reviewed each written opinion to determine the following: (1) whether the court relied on economic expert testimony, (2) the extent to which the court relied on the economic experts, and (3) the extent to which the court discussed the economic experts in its opinion. The table below summarizes the results of our review.
 

| Case | Date Decided | Prevailing Side | Relied on Expert? | Extent Relied On | Extent Discussed |
| --- | --- | --- | --- | --- | --- |
| FTC v. Peabody Energy | 9/29/2020 | Government | Yes | High | High |
| United States v. Sabre | 4/7/2020 | Defense | Yes | Medium | Medium |
| New York v. Deutsche Telekom | 2/10/2020 | Defense | No | Low | Low |
| FTC v. RAG-Stiftung | 1/24/2020 | Defense | Yes | Medium | Medium |
| FTC v. Wilh. Wilhelmsen Holding | 10/1/2018 | Government | Yes | High | High |
| FTC v. Tronox | 9/12/2018 | Government | Yes | Medium | High |
| United States v. Energy Solutions | 7/13/2017 | Government | Yes | Medium | Medium |
| United States v. Anthem | 2/8/2017 | Government | Yes | High | High |
| United States v. Aetna | 1/23/2017 | Government | Yes | High | High |
| FTC v. Staples | 5/10/2016 | Government | Yes | High | Medium |
| FTC v. Steris | 9/24/2015 | Defense | N/A | None | None |
| FTC v. Sysco | 6/23/2015 | Government | Yes | High | High |
| United States v. Bazaarvoice | 1/8/2014 | Government | Yes | High | High |
| United States v. H&R Block | 11/10/2011 | Government | Yes | Medium | High |
| FTC v. Lab. Corp. of America | 2/22/2011 | Defense | No | Low | Low |
| FTC v. CCC Holdings | 3/18/2009 | Government | Yes | Medium | High |
| FTC v. Whole Foods Market | 8/16/2007 | Defense | Yes | High | High |
| FTC v. Foster (Western Refining) | 5/29/2007 | Defense | Yes | High | High |

  • Relied on Expert? This category identifies whether the court stated in its decision that it relied on an expert in deciding the case. We exclude FTC v. Steris from the discussion below because that case was limited to a single factual question regarding entry that did not call for economic analysis.
  • Extent Relied On. This category consists of “low,” “medium,” or “high” reliance designations. These distinctions gauge the extent to which economic analysis mattered to the court’s stated conclusions in its decision. This assessment includes not only cases where a party won because the court relied on its expert but also cases where a party lost because the court rejected its expert. An indication that reliance is low means that the economic analysis mattered little or that a court chose to rely on other evidence to reach its decision. The best example is New York v. Deutsche Telekom, in which the court rejected the economic experts because they “essentially cancel each other out” and instead favored fact witness testimony and documentary evidence. Conversely, an example of a high-reliance case is FTC v. Whole Foods Market, in which the court leveraged expert testimony to draw conclusions about market definition, competitive effects, and entry. Medium-reliance cases are those in which courts relied on economic testimony directionally.
  • Extent Discussed. This category consists of “low,” “medium,” or “high” discussion designations. Deutsche Telekom is a good example of a low-discussion case, as is FTC v. Laboratory Corp. of America, in which the court mentioned the expert testimony only in passing, to support some factual assertions about market structure. United States v. Aetna is a good example of a high-discussion case. There, the court extensively discussed both the government and defense experts’ testimonies at each step of its reasoning, weighing both experts’ analyses of market definition, competitive effects, and entry. An example of a medium-discussion case is United States v. Sabre, in which the court addressed the economic experts’ testimony but focused on the other evidence to draw conclusions.

Classifying the cases in this way yielded some interesting insights.

Economic Experts Rarely Cancel Each Other Out

In only 2 of the 17 cases analyzed did courts demonstrate a low reliance on economic experts. And in only one of those did a court actually throw up its hands and find that the experts canceled each other out. That case was Deutsche Telekom, in which the court found that the “conflicting” economic experts “cancel[ed] each other out as helpful evidence the Court could comfortably endorse as decidedly affirming one side rather than the other.” The other case, Lab Corp., does not unambiguously support the cancel-each-other-out view. There, the court seemingly gave little weight to the economic testimony but did not provide a rationale for de-emphasizing it.

Economic Expert Testimony Matters

In the large majority (88 percent) of the decisions we evaluated, the courts relied on experts in their opinions to some extent. The most common category was high reliance, with over half (9 of 17) of the cases receiving that designation. Six more cases fell into the medium-reliance category. Only two were low-reliance cases. To be clear, to say that the economic testimony matters is not to say that other types of evidence—documents and fact witness testimony—are not important, only that “[n]o one analysis, no one item of evidence makes or breaks the case; it is the evidence and the economic analysis together from which an impression or image emerges—or does not emerge—and leads to an outcome.” For example, in FTC v. Sysco, the court relied on economic experts for the geographic market analysis and in calculating market concentration “[b]ecause there [were] no industry-recognized market shares,” but incorporated expert economic testimony as one factor in its product market and competitive effects analyses, in which documents and fact witness testimony also played an important role. The economic analysis certainly mattered in cases like CCC Holdings, in which the court rejected the government’s unilateral effects theory because its expert proffered flawed models, and Sabre, in which the court allowed the transaction to proceed in part because the government expert’s “explanation and defense” of the alleged product market “was simply unpersuasive.”

Courts Spill Substantial Ink on Economic Experts

Nearly every court in our data set—regardless of whether it relied on economic experts or not—spent a lot of time discussing the economic expert testimony. Eleven of the 17 cases (or 65 percent) were high-discussion cases and four more (or 24 percent) were medium-discussion cases. The Aetna court, for example, devoted more than eight pages to discussing econometric modeling alone. Only two of the cases analyzed were low discussion. The amount of ink spilled on economic experts certainly suggests that they influence the outcome.

The Government’s Case Often Rises or Falls with the Economic Expert

Numerous underlying factors contributed to the wins and losses, but the data show that the government almost always wins when a court relies extensively on economic experts. The government won 7 of 9 cases (or 78 percent) in which the court relied heavily on economic experts but only 4 of 9 cases (or 44 percent) in which the court did not heavily rely on them. This difference makes sense because the government has the burden of proof, but it also explains why both sides devote so many resources to economic experts.
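For readers who want to check the arithmetic, the figures cited in this section can be reproduced directly from the table above. The short Python sketch below (case names abbreviated; the classifications are ours) tallies the reliance and discussion categories:

```python
# Tally the case table above. Tuple values: (prevailing side,
# relied on expert?, extent relied on, extent discussed).
cases = {
    "Peabody":          ("Government", "Yes", "High",   "High"),
    "Sabre":            ("Defense",    "Yes", "Medium", "Medium"),
    "Deutsche Telekom": ("Defense",    "No",  "Low",    "Low"),
    "RAG-Stiftung":     ("Defense",    "Yes", "Medium", "Medium"),
    "Wilhelmsen":       ("Government", "Yes", "High",   "High"),
    "Tronox":           ("Government", "Yes", "Medium", "High"),
    "Energy Solutions": ("Government", "Yes", "Medium", "Medium"),
    "Anthem":           ("Government", "Yes", "High",   "High"),
    "Aetna":            ("Government", "Yes", "High",   "High"),
    "Staples":          ("Government", "Yes", "High",   "Medium"),
    "Steris":           ("Defense",    "N/A", "None",   "None"),
    "Sysco":            ("Government", "Yes", "High",   "High"),
    "Bazaarvoice":      ("Government", "Yes", "High",   "High"),
    "H&R Block":        ("Government", "Yes", "Medium", "High"),
    "Lab Corp.":        ("Defense",    "No",  "Low",    "Low"),
    "CCC Holdings":     ("Government", "Yes", "Medium", "High"),
    "Whole Foods":      ("Defense",    "Yes", "High",   "High"),
    "Foster":           ("Defense",    "Yes", "High",   "High"),
}

analyzed = {k: v for k, v in cases.items() if v[1] != "N/A"}   # Steris excluded
n = len(analyzed)                                              # 17

relied = sum(v[1] == "Yes" for v in analyzed.values())         # 15 -> 88%
high_rel = [k for k, v in cases.items() if v[2] == "High"]     # 9 cases
gov_high = sum(cases[k][0] == "Government" for k in high_rel)  # 7 of 9
gov_other = sum(v[0] == "Government"
                for v in cases.values() if v[2] != "High")     # 4 of 9
high_disc = sum(v[3] == "High" for v in analyzed.values())     # 11 -> 65%

print(f"Relied on expert to some extent: {relied}/{n} ({relied/n:.0%})")
print(f"Government won {gov_high} of {len(high_rel)} high-reliance cases")
print(f"High-discussion cases: {high_disc}/{n} ({high_disc/n:.0%})")
```

Note that the denominators differ by design: Steris is excluded from the reliance and discussion percentages but is counted among the nine cases in which the court did not rely heavily on an expert.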

But . . . Economic Experts Are Unlikely To Be the Most Important Source of Evidence

Notwithstanding the above conclusions, the cases indicate that documents and fact witness testimony tend to matter more than economic testimony. That is not to say that the economic experts cancel each other out, but that their testimony tends to play second fiddle to the other evidence. No court in our data set found that economic testimony trumps the other evidence, nor did any court decide a merger case solely on the economics. Rather, these courts looked at all three types of evidence holistically, with economic testimony playing a greater or lesser role depending on the quality of the economic analyses, the key factual and legal issues to be decided, and the strength of the other evidence.

In sum, a systematic review of reported federal merger decisions does not support the notion that expert economists tend to cancel each other out. To the contrary, most courts carefully consider and rely on the economic testimony to draw conclusions about the likely effects of a proposed merger. Economic expert testimony matters.

What Can Litigants Learn from the Prior Cases To Make the Best Use of Economic Experts?

Our review of the reported merger decisions demonstrates that expert economists rarely cancel each other out. But the more important question is what litigants can learn from this body of decisions to use their economic experts most effectively. After all, both sides inevitably will hire economic experts and invest significant time, money, and effort in the experts’ reports and trial testimony.

Economic Models Must Comport with the Real World

The clearest lesson from these reported decisions is that economic models must be more than theoretical. They must accurately describe and be consistent with the real world. In the decisions we studied, the judges favored experts whose models closely approximated the real world. These courts credited economic testimony that “is more consistent with how the industry actually operates,” that is “corroborated by other evidence in the record,” that is “reasonable given the nature of the . . . industry,” and that is “sensitiv[e] to market reality.” In United States v. Anthem, for example, the court closely reviewed the economic testimony, noting areas where it was consistent with or undercut by the documentary and testimonial evidence.

Courts have credited economic models that conform to “common sense” even when they are not underpinned by specific market facts. They also credited models that were directionally correct despite well-grounded criticisms. For example, in FTC v. Tronox, the court explained that the government expert’s “overall conclusions are more consistent with the business realities of the TiO2 industry than those proffered by [the defense expert], even if the . . . models are subject to valid criticisms.” Similarly, in United States v. Bazaarvoice, the court wrote that “[w]hile the data available . . . may not have been perfect, it sufficiently reflected the state of the market as shown by other evidence in this case.”

Conversely, several courts rejected economic experts’ models as untethered from reality: models that did “not begin with a reasonable specification of the underlying economics of the marketplace,” that ignored industry realities, that failed to capture numerous aspects of the market, that relied on inaccurate assumptions, that were contradicted by real-world evidence, or that rested on assumptions that did not reflect how the products were sold in the real world. The Anthem court, for example, dug into the facts to test the defense expert’s position that customers would disaggregate their purchases in response to a price increase, explaining, “But even if this is sensible as a matter of economic theory, it ignores the practical impediments involved in slicing and cannot be reconciled with the persuasive testimony that the current trend in the industry is to avoid this kind of fragmentation.”

Courts also have rejected economic testimony based on faulty theoretical underpinnings. This includes testimony that was counter-intuitive, that predicted a present market state that did not exist, and that was inconsistent with “basic economics.” Moreover, several courts found that merely quantifying potential anticompetitive effects is not sufficient; the expert must first explain why those effects are likely to occur.

Thus, economic experts (in conjunction with counsel) must devote time to studying the documents and testimony so they can be prepared to describe the economic intuitions behind their analyses and explain why their conclusions are consistent with and underpinned by the real-world evidence.

The decisions also illustrate that a promising path to victory for defense counsel is to marshal real-world facts that undercut the government’s economic expert. Indeed, this is perhaps the most productive strategy for the defense to attack the government’s expert. Courts rejected the government expert’s testimony as not grounded in reality in four of the seven defense wins in our set of merger decisions.

Nitpicking the Opposing Expert’s Model Will Not Get You Far.

Experts expend considerable effort attacking opposing experts’ models. But it is important that they focus on issues that will move the needle, as judges may not require perfection from economic models.

Criticisms that would not produce a different conclusion (even if accepted) have been ineffective. For example, in Peabody, the court rejected defense criticisms of the government’s diversion ratios because “[d]efendants never argued that a different set of margins would have led to a different outcome,” so choosing between them “would be an academic exercise.”

Likewise, criticisms that result in only minor changes have been ineffective. In Sysco, the court found that even when it accepted the defense’s criticisms of the government’s market share and HHI calculations, they “would still have a high combined local market share.” In Wilhelmsen, the court allowed for “some imprecision inherent in estimating revenue shares” when the government’s expert excluded one small supplier that failed to produce data.
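The Sysco point has an arithmetic explanation: because the HHI squares each firm’s market share, modest adjustments to the share estimates rarely move a highly concentrated market below the legal thresholds. A short illustration with hypothetical shares (the firms and numbers are invented, not drawn from any case):

```python
# HHI (Herfindahl-Hirschman Index) = sum of squared market shares,
# expressed in percentage points. Shares here are hypothetical.
def hhi(shares):
    return sum(s ** 2 for s in shares.values())

shares = {"A": 40.0, "B": 25.0, "C": 20.0, "D": 15.0}  # A and B merge
pre = hhi(shares)
post = hhi({"A+B": shares["A"] + shares["B"],
            "C": shares["C"], "D": shares["D"]})
delta = post - pre  # equals 2 * 40 * 25 = 2000

print(f"Pre-merger HHI: {pre:.0f}, post-merger: {post:.0f}, delta: {delta:.0f}")
# Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above
# 2,500 combined with an increase above 200 points triggers a
# presumption that the merger is likely to enhance market power.
```

This squaring effect helps explain why, in Sysco, accepting the defense’s adjustments still left “a high combined local market share.”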

Several courts accepted an opposing expert’s criticisms but nevertheless found a model persuasive for other reasons, such as when the model gave a rough—even if inexact—picture of the market or when all the economic evidence pointed in the same direction.

Bidding Data Analyses Have Not Performed as Well as One Would Expect.

In theory, a systematic analysis of bidding, win/loss, switching, or similar data should provide superior evidence of unilateral effects (or lack thereof) over anecdotal evidence of competition presented through documents and testimony. However, at least in the merger decisions we reviewed, the judges were not as receptive to these data analyses, and instead tended to favor the anecdotal documentary and testimonial evidence.
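As background for the discussion that follows: a diversion ratio can be estimated from switching or win/loss data as the fraction of one firm’s lost customers that a particular rival recaptures. A stylized sketch with hypothetical counts (not drawn from any of the cases):

```python
# Diversion ratio from firm A to firm B: the share of A's lost
# customers that B recaptures. Counts below are hypothetical.
switches_from_a = {"B": 60, "C": 30, "other": 10}  # 100 customers left A

total_lost = sum(switches_from_a.values())
diversion = {rival: n / total_lost for rival, n in switches_from_a.items()}

print(diversion["B"])  # 0.6: B recaptures 60% of A's lost sales
```

Estimates like these feed into unilateral effects models, which is why the courts in Anthem, CCC Holdings, and Deutsche Telekom scrutinized the representativeness and reliability of the underlying data so closely.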

Several of the courts outright rejected bidding data analyses. In Anthem, both sides presented diversion analyses that relied on internal company bidding data. Seemingly frustrated that the data could generate conflicting results, the court instead focused on the merging parties’ ordinary course internal communications, which showed that “Anthem unquestionably competes directly and aggressively against Cigna for national accounts.” In CCC Holdings, the court rejected a bidding analysis in which the underlying dataset represented less than 5 percent of all bidding events that occurred during the previous four years. The court explained, “This fraction of auctions is not large enough to rely on as a representative sample of the entire insurance market.” Finally, in Deutsche Telekom, the court rejected a switching analysis proffered by the government because of concerns about the reliability of the data and because it was backward looking and did not shed light on “a merged company’s likely future behavior.”

Not every court we studied resisted bidding analyses; several relied heavily on them. In Bazaarvoice, the court relied on the government expert’s analyses of two datasets, a Salesforce.com database and data compiled from “how the deal was done” emails created by Bazaarvoice sales personnel, to determine that the transaction would “lead to substantially higher prices.” In Aetna, the court relied on the government expert’s use of switching data to conclude that the market for one insurance product was separate from the market for another insurance product. The court called this “the most persuasive evidence” supporting the government’s alleged product market. The court also relied on switching data in its unilateral effects analysis, finding that it “reveal[ed] close (and increasing) head-to-head competition between Aetna and Humana.” The court in FTC v. Staples likewise credited a bidding data analysis to support a conclusion that the proposed merger was anticompetitive.

In other cases, courts took a middle-of-the-road path, crediting bidding analyses as directionally consistent with the other evidence while also expressing concerns about the data.

These decisions show that bidding data analyses are only as good as the underlying data. Courts are more likely to credit analyses based on robust data (as in Aetna) and to reject analyses based on suspect data (as in CCC Holdings). Experts who intend to rely on bidding data should understand the ins and outs of the data, candidly acknowledge its flaws, and be prepared to explain why the data support their conclusions. Litigants attempting to discredit the opposing expert should understand exactly what data were used, how they were used, and how they may be flawed or used to produce inaccurate results.

The Survey Says “No.”

Surveys have a high failure rate in merger litigation. This may be because they have a large attack surface. The decisions we studied are littered with criticisms of the methodology (e.g., what questions were asked, how the questions were phrased, what response options were presented) and procedures (e.g., where respondents were surveyed, how many respondents were surveyed, how respondents were identified). This holds true both for surveys conducted in the ordinary course of business and those conducted for the litigation.

Surveys risk a house-of-cards scenario. Economic experts often use surveys (when they use them) as inputs, meaning that a flawed survey can bring down the entire analysis, as happened in H&R Block. There, the defense’s economic expert relied on two surveys—an ordinary-course pricing simulator survey and a defense-commissioned email survey—to measure diversion between different tax preparation alternatives, which in turn fed into the expert’s product market and competitive effects analyses. The pricing simulator survey was fatally flawed because it did not present prices for each of the tax preparation alternatives. The email survey likewise was flawed for a number of reasons, including that it “appears to ask a hypothetical question about switching, not diversion based solely on a price change.” As a result of the “severe shortcomings” in the underlying data, the court completely disregarded the defense expert’s testimony.

In another example, the government expert in CCC Holdings derived diversion ratios for his unilateral effects analysis from “a two-year old [ordinary-course] survey of thirty-one former CCC customers which notes that the results ‘cannot be projected to the population as a whole due to the limited number of completes.’” The unreliable survey evidence made the government expert’s diversion ratios unreliable, which, in turn, made his unilateral effects analysis unreliable and unpersuasive.

There are other examples of failed surveys. In the AT&T vertical merger case (which is outside the scope of the case review but is nevertheless instructive), the court rejected two flawed surveys, and in Whole Foods, the court gave no “weight or consideration” to a customer survey prepared for the defense by Kellyanne Conway (well before her time as an advisor in the Trump White House).

Bazaarvoice is one example of a successful survey. While the third-party survey data used had “deficiencies, to be sure,” the court credited it because the merging parties themselves relied on the survey data to inform ordinary-course business decisions and because it was consistent with the other economic analyses.

The Expert’s Analysis Is Only as Good as the Underlying Data or Documents

We have seen above that expert testimony can be undermined by flawed bidding and survey data. This is true for other data and documents that may contribute to economic models as well.

Experts, of course, rely on ordinary-course documents to support factual assertions in their reports, but they also can incorporate documents into their economic analyses. Such documents may include internal analyses of the proposed transaction, pricing analyses or models, business or strategic plans, emails documenting competition, customer surveys, or third-party consultant reports. As with data, experts’ analyses based on these business documents are only as good as the documents themselves. An expert who intends to rely on an ordinary-course business document as part of the economic analysis must understand the circumstances of the document’s preparation: who prepared it, why it was prepared, what information went into it, what it was used for, whether the company relied on it, whether it is in draft or final form, and whether it is biased in any way.

Indeed, several courts have rejected expert analyses because they relied on flawed documents. In Western Refining, for example, the government’s expert based his merger simulation on a single ordinary-course pricing analysis document prepared by the seller. The expert’s entire merger simulation collapsed when the judge identified a litany of issues with the document: (1) it was only a first draft, (2) the drafters did not review any data or perform any backup calculations to create the numbers on the document, (3) the drafters spent less than a day working on the document, (4) the document “embodied an approach that was deemed unworkable and unfixable,” and (5) the company did not rely on the document or its calculations to make any business decisions “because the various numbers contained in the draft could not be validated.”

There are other examples. In AT&T (which, again, is out of scope but instructive), the government’s expert relied on a document prepared by a third party, which he called “the single best document and analysis” that he used. Yet the document had been altered without explanation, and the expert “was entirely unaware of those changes when he ‘first relied on the document’ to perform his analysis.”

Successful Experts Simplify Difficult Economic Concepts

One area where experts can offer value to the court is to provide plain-English explanations of the economic intuitions that underpin their conclusions. This testimony allows the court to connect the underlying theory with the real-world evidence. It is particularly valuable because it cannot be proffered through fact witness testimony or the documents.

One successful method for providing this connection is the use of analogies. In RAG-Stiftung, the defense expert used the example of a “Fourdrinier paper machine,” which switches between producing two paper products “at the touch of a button,” to explain why supply-side substitution did not occur in the alleged hydrogen peroxide market. In another case, the expert analogized two industrial chemicals to hamburger buns and hot dog buns.

Non-econometric Data Analyses Can Bolster the Expert’s Testimony

Another way experts can simplify and strengthen their testimony is to provide more accessible data analyses in addition to hardcore econometric work like merger simulations, models, and regressions. In RAG-Stiftung, the court credited a number of the defense expert’s non-econometric data analyses, including analyses showing that prices had decreased each of the last three years, that there was wide variation in pricing across products that made coordination difficult, and that the two merging parties “largely sell hydrogen peroxide intended for different end uses.” Other courts have credited similar analyses.

Whereas econometric analyses can seem theoretical or complicated, these straightforward data analyses can appeal to courts because they simply quantify the data at hand.

Experts Can Use Their Opponents’ Data Against Them

Another approach that has proven effective in court is for one economic expert to take the opposing expert’s data and use it to support their own testimony. For example, in Peabody, the government’s expert used data from the defense expert’s own report to show that the price relationship between two products was “not as tight as Defendants have characterized it.” Experts have successfully employed this strategy in other cases as well.

In Sum: Economic Testimony Matters

The relative importance of economic evidence vis-à-vis documentary evidence and fact witness testimony will vary from case to case. But our analysis of 15 years of reported merger decisions demonstrates that economic expert testimony plays an important role and that economic experts do not tend to cancel each other out. With that in mind, the best path is for litigants to focus on making their economic expert testimony as persuasive as possible.

Mr. Rafkin and Ms. Kuykendall represented the seller PeroxyChem in FTC v. RAG-Stiftung. Mr. Rafkin represented third-party trial witness DTE Energy in FTC v. Peabody Energy and his law firm, Dechert LLP, represented Whole Foods in FTC v. Whole Foods. The views expressed in this article are the authors’ own and do not reflect the views of their employers.
