

The Antitrust Source | February 2025

Antitrust Injury & Damages in Algorithmic Collusion Cases: Another “New Frontier”?

Michael Kheyfets and David C. Kully

Summary

  • Despite a rapidly growing academic literature on algorithmic pricing and collusion, there has been notably little work done to address issues of antitrust injury and damages.
  • In this article, the authors analyze the evolution of economic theories in algorithmic collusion litigation and describe a framework for the antitrust injury and damages issues that will be relevant as these cases proceed.

Introduction

In 2011, two sellers of an esoteric textbook about flies each set their prices by using computer code to collect the other’s publicly posted price and multiply it by a specific number. This early example of “algorithmic pricing”—the practice of firms setting prices based on analyses performed by computer programs—led to the infamous price tag of $23,698,655.93 for the book “The Making of a Fly.”
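The runaway dynamic behind that price can be sketched in a few lines of code. This is an illustration, not the sellers' actual scripts; the two multipliers are those reported in popular accounts of the incident, and the starting price and number of repricing cycles are assumptions.

```python
# Illustrative sketch (not the sellers' actual code): two automated
# repricing rules that each key off the rival's posted price. The
# multipliers are those reported in accounts of the incident.
UNDERCUT = 0.9983      # seller A prices just below seller B
MARKUP = 1.270589      # seller B prices about 27% above seller A

price_a = price_b = 50.0   # assumed starting price for the textbook
for day in range(45):      # one repricing cycle per day
    price_a = UNDERCUT * price_b
    price_b = MARKUP * price_a

# Because UNDERCUT * MARKUP > 1, each cycle compounds both prices
# upward without bound until a human intervenes.
print(f"${price_b:,.2f}")
```

Because the product of the two multipliers exceeds one, each cycle ratchets both prices upward, which is how an unremarkable biology text reached an eight-figure price before anyone noticed.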

About a decade ago, “algorithmic collusion”—the idea of reaching an anticompetitive agreement through algorithmic pricing—entered the collective consciousness of antitrust and competition practitioners when the Department of Justice (DOJ) filed a lawsuit accusing online framed art sellers of using algorithm-based software to fix prices. As the DOJ stated, it “will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms.”

Today, firms in many industries—such as “advertising, e-commerce, entertainment, insurance, sports, travel, and utilities”—rely on computer algorithms in one way or another when determining what prices to charge. As these practices have proliferated, the focus on how they may affect competition has intensified. In November 2023, the DOJ labeled algorithms “the new frontier” as a medium for facilitating firms’ collusive behavior. In the last several years, an increasing number of lawsuits alleging violation of antitrust laws through “algorithmic collusion” have been filed both by the DOJ and private plaintiffs.

Despite a rapidly growing academic literature on algorithmic pricing and collusion, and extensive litigation on the subject (including major litigation that has advanced past the motion to dismiss stage and into discovery), there has been notably little work done to address issues of antitrust injury and damages. In this article, we trace the evolution of economic theories in “algorithmic collusion” litigation and describe a framework for the antitrust injury and damages issues that will be relevant as these cases proceed.

Current State of “Algorithmic Collusion” Litigation

First Allegations of Collusion Through Adoption of Common Pricing Software: The RealPage Cases.

On October 15, 2022, ProPublica published an article that attributed increased apartment rental rates in the United States to the widespread adoption of apartment pricing software offered by a company called RealPage. Only three days after the ProPublica publication, private plaintiffs initiated the first of what ultimately became 34 separate class action lawsuits against RealPage and multi-family apartment owners that used its software. The cases were centralized for pre-trial proceedings in a multi-district litigation in the U.S. District Court for the Middle District of Tennessee.

Although some of the cases alleged markets only for student housing and others for multi-family apartment rentals (with varying geographic market allegations), the core allegations in these cases were essentially identical:

  • RealPage offers “revenue management” software that employs a pricing algorithm that recommends apartment rental rates to multi-family apartment owners that use RealPage’s software.
  • Defendant multi-family apartment owners conspired to inflate apartment rental rates by adopting RealPage’s software.
  • RealPage promoted its software to multi-family apartment owners in part by referring to other satisfied users of the software and the increased rents they achieved, and each apartment owner that adopted the software did so with knowledge that its competitors were also using the software.
  • RealPage’s pricing algorithm bases its pricing recommendations on non-public information supplied by competitor users of its software.
  • RealPage encourages users of its software to accept its pricing recommendations and users of its software almost always adopt the algorithm’s recommendations.

Plaintiffs framed their allegations around “hub-and-spoke” conspiracy caselaw, under which a “hub” entity aligned vertically with customers or suppliers had been found to have orchestrated a horizontal conspiracy among those customers or suppliers at the “rim” of the wheel. Under this caselaw, RealPage’s alleged outreach to multi-family apartment owners and promises of higher rents if they used RealPage’s software might (according to plaintiffs) constitute “an invitation to participate in a plan” that the apartment owners “gave their adherence to . . . and participated in.” Based on these cases, plaintiffs assert that RealPage facilitated formation of the conspiracy by broadcasting the involvement of other competitors, each equally committed to the scheme.

Expansion of RealPage Conspiracy Concepts to Other Contexts

After initiating cases against RealPage and multi-family apartment owner users of its pricing software, plaintiffs followed with further cases alleging hub-and-spoke conspiracies among users of pricing software in other contexts. Plaintiffs are currently pursuing claims against providers and users of pricing algorithms in the following areas:

  • Casino hotels
  • Extended-stay hotels
  • Luxury hotels
  • Out-of-network health insurance providers
  • Automobile tires

Plaintiffs are also pursuing claims against RealPage competitor Yardi Systems, Inc. and multi-family apartment owner users of its revenue management software.

Potentially Emerging Legal Clarity

As these cases have progressed, some courts have ruled on motions to dismiss that argued (among other things) that plaintiffs failed to allege the existence of a conspiracy among users of the “hub” pricing algorithm. These rulings bring some clarity to the features of claims that may be more likely to survive motions to dismiss and allow plaintiffs to proceed to discovery. Motions remain pending in several cases, but each successive decision has relied on principles articulated in prior decisions, and that likely will remain the case as judges rule on the motions before them.

On December 28, 2023, the judge overseeing the RealPage case denied motions to dismiss claims against multi-family apartment owners, rejecting their argument that plaintiffs alleged “at most, a hub and spoke conspiracy with no rim.” The “most persuasive evidence of horizontal agreement,” the court found, was the “simple undisputed fact that each [apartment owner] provided RealPage its proprietary commercial data, knowing that RealPage would require the same from its horizontal competitors and use all of that data to recommend rental prices to its competitors.” The court believed that individual partners would share their non-public information with RealPage for use in its pricing algorithm “if and only if [they] know they are receiving in return the benefit of their competitors’ data in pricing their own units.”

The RealPage court distinguished facts alleged against RealPage and multi-family apartment owners from allegations found insufficient to state a claim against Las Vegas hotel owners in Gibson v. MGM Resorts International. In Gibson, the court rejected plaintiffs’ alleged hub-and-spoke conspiracy because it was unclear based on plaintiffs’ allegations whether pricing recommendations provided to one hotel owner relied on confidential information submitted by other hotel users of the pricing software. The RealPage court, in contrast, found that plaintiffs alleged “unequivocally” that “RealPage’s revenue management software inputs a melting pot of confidential competitor information through its algorithm and spits out price recommendations based on that private competitor data.”

The Gibson court later considered amended claims against Las Vegas hotel owners but ultimately reached the same conclusion, dismissing plaintiffs’ amended complaint on May 8, 2024, with prejudice. The court described the plaintiffs’ case as “a relatively novel theory premised on algorithmic pricing going in search of factual allegations that could support it.” Among the factual allegations the court found lacking were—in contrast to the allegations in RealPage—any assertions that the hotels “share confidential information with each other by using” the pricing software. As the court found, “[t]here is nothing unreasonable about consulting public sources to determine how to price your product,” and that was all that plaintiffs’ factual allegations could support. The Gibson court also found, as an independent basis for dismissal, that plaintiffs “do not allege” that the Las Vegas hotels “are required to accept the pricing recommendations” provided by the algorithm. Even allegations that the hotels accept the recommendations “90% of the time” were insufficient when the hotel owners retain discretion to reject the pricing recommendations and set their prices at whatever level they choose.

The court overseeing claims against Atlantic City hotels for use of the same pricing software relied heavily on the Gibson decision and also dismissed plaintiffs’ claims. Despite the “rather extraordinary lengths” the plaintiffs went to in their complaint “to dance around [the] allegation with linguistic equivocation in an obvious attempt to imply” that the algorithm’s pricing recommendations are based on competitively sensitive information provided by all users of the software, the court found plaintiffs’ complaint “never unambiguously alleges as much.” As the court found, the complaint “does not allege that the [hotels’] proprietary data are pooled or otherwise commingled into a common dataset against which the algorithm runs.”

Most recently, the court overseeing the Duffy litigation against Yardi and multi-family apartment owners and developers that use its pricing software denied motions to dismiss plaintiffs’ claims. The decision again focused on allegations that users of Yardi’s software contributed competitively sensitive information for use by the pricing algorithm. The court observed that Yardi “advertised its revenue management software to lessors as a means of increasing rates” and that its software would work “as advertised . . . only if each lessor client divulges its confidential and commercially sensitive pricing, inventory, and market data” for use by Yardi in making pricing recommendations. The court rejected defendants’ arguments that adoption of Yardi’s software reflected only each apartment owner’s independent business decision, finding that Yardi had invited apartment owners to use its software based on the promise of higher prices, and that users “accepted [that] invitation to trade their commercially sensitive information for the ability to charge increased rental rates without fear of being undercut by their competitors.”

As additional courts confront similar allegations on pending motions to dismiss, the question of whether the plaintiffs allege the exchange of competitively sensitive information or use by the algorithm of pooled confidential information from all users in making pricing recommendations to a single user will likely be central to their analyses. Based on the Gibson decision, another important factor may be the degree to which recipients of pricing recommendations retain full discretion to reject recommendations and set prices as they see fit.

Government Involvement

The Antitrust Division of the DOJ has also engaged on issues concerning what it referred to as “the new frontier” in price fixing. Most notably, on August 23, 2024, the DOJ initiated its own lawsuit against RealPage, which the DOJ alleges is an “algorithmic intermediary” that “has built a business out of frustrating the natural forces of competition” and increasing apartment rental rates. As in the private class action cases against RealPage, the DOJ alleges specifically that the multi-family apartment owners that use RealPage’s software “submit on a daily basis their competitively sensitive information to RealPage,” which RealPage uses to provide “near real-time pricing ‘recommendations’ back to competing landlords . . . based on the sensitive information of their rivals.” The DOJ’s allegations diverge from those of the private plaintiffs, however, in alleging only anticompetitive information sharing rather than per se illegal price fixing. The DOJ did not explain its decision not to pursue price-fixing allegations, but alleges based on its pre-complaint investigation that multi-family apartment owners adopted RealPage’s pricing recommendations only 40 to 50 percent of the time, a frequency that supports the exercise of significant pricing discretion on the part of users of RealPage’s software and not users’ delegation of their pricing authority to RealPage’s algorithm.

The DOJ has also participated in the private class-action litigation by submitting three statements of interest urging the courts to interpret the plaintiffs’ conspiracy allegations expansively. In the RealPage class action litigation, the DOJ stated that plaintiffs properly allege concerted action subject to Section 1 of the Sherman Act when competitors “[accept], through conduct, . . . an invitation to act together,” which the DOJ asserted was a method of proving a conspiracy that was independent of asking a court to infer the existence of a conspiracy through allegations of parallel conduct and plus factors. The DOJ argued that plaintiffs in RealPage properly alleged the existence of an agreement among users of RealPage’s software through its allegations that RealPage invited concerted action by touting in marketing materials that its use of non-public information allowed users to “raise rents in concert,” and that users of the software accepted that invitation when they supplied competitively sensitive information to RealPage and “delegated aspects of decisionmaking on prices to RealPage.”

In Duffy, the DOJ asserted that defendants’ arguments that they did not reach an agreement with one another because they retain pricing discretion and frequently decline to adhere to Yardi’s recommended prices were “wrong on the law. . . . Although full adherence to a price-fixing scheme might render it more effective, . . . the violation is the agreement, and unsuccessful price-fixing agreements also are per se illegal.” The DOJ also asserted that, “just as competitors cannot agree to fix their final prices, competitors cannot agree to fix the starting point of their prices,” regardless of whether participants to the agreement ultimately deviate from the starting price.

Finally, the DOJ submitted a statement of interest in the Atlantic City hotels litigation, urging the court to reject defendants’ arguments that plaintiffs need to allege direct competitor-to-competitor communications to survive a motion to dismiss and, as in Duffy, that agreeing on the starting point of prices constitutes price fixing, even if participants ultimately deviate from the agreed prices.

The DOJ also recently submitted an amicus brief in support of plaintiffs’ appeal of the dismissal of their claims in Gibson, focusing principally on the ability of hotel users of the pricing software to reject the algorithm’s pricing recommendations and (as in its statements of interest) arguing that an agreement to fix starting prices is still price fixing, even if participants later deviate from the agreement.

Framework for Antitrust Injury and Damages

As the RealPage litigation (along with possibly other cases in which motions to dismiss are pending) advances past the pleading stage, plaintiffs will need to present facts supporting their claims that competitors’ use of a third-party pricing algorithm inflates prices. The challenges plaintiffs might face in showing harm to competition therefore deserve more attention.

Determining whether antitrust injury occurred—and the damages associated with that injury—involves a comparison of the actual prices paid to the prices that would have been paid in a “but-for world” without the alleged conduct. However, it is not clear that frameworks applied in traditional price fixing cases to evaluate antitrust injury and estimate damages will necessarily apply, or even be feasible, when allegations of “algorithmic collusion” are involved. There are novel questions related to the but-for worlds—e.g., whether such a world entails no pricing via a third-party algorithm at all, or an amended algorithm that does not include certain elements that have been challenged as anticompetitive. There are also questions about the practical functionality of any particular algorithm and whether it would strictly increase prices—or whether some consumers may have actually been made better off by algorithmic price reductions.
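The actual-versus-but-for comparison can be made concrete with a toy calculation. All prices below are invented for illustration; the point is only that per-transaction overcharges computed against a but-for price series need not all be positive.

```python
# Hypothetical transaction prices (all numbers assumed for illustration).
actual_prices = [100.0, 105.0, 98.0, 110.0]
but_for_prices = [97.0, 104.0, 101.0, 106.0]  # assumed counterfactual

# Per-transaction overcharge: actual price minus but-for price.
overcharges = [a - b for a, b in zip(actual_prices, but_for_prices)]
# The third consumer paid less than the but-for price (a negative
# "overcharge"), illustrating that algorithmic pricing can leave some
# purchasers better off even where aggregate damages are positive.
total_damages = sum(overcharges)
```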

Notably, the academic literature does not provide a consensus on, or even a consistent framework for, the appropriate way to develop a but-for world given a particular set of facts and allegations in a litigation involving claims of “algorithmic collusion.” For example, a paper by Assad et al. (2024)—one of only two empirical academic studies of algorithmic pricing—found that increased adoption of a pricing software by gasoline retailers in Germany resulted in increased profit margins in certain markets. A paper by Calder-Wang & Kim (2024), the other such empirical study, found evidence that “algorithmic pricing helps . . . set prices that are more responsive to market conditions”—i.e., that algorithms may allow firms to more quickly adjust prices both up and down based on the relevant supply and demand conditions.

These papers, however, sought to study the overall effects of pricing algorithm adoption, and did not seek to isolate any part of those effects that may have been caused by specific potentially anticompetitive features of the algorithm—which will likely be plaintiffs’ litigation burden. That is, the “but-for worlds” suggested by those papers are ones where no pricing algorithms are used at all, whereas the allegations in a particular litigation may require assessment of alternative versions of the existing algorithms at issue. Additionally, those papers focus on aggregate effects of algorithm adoption, whereas assessing antitrust injury and damages may require analysis of individual pricing decisions—and/or effects on individual consumers—particularly when users deviate from the algorithm’s recommendations.

As we discuss below, a threshold issue is whether studying the effects of “an algorithm” on prices is even the relevant question—or whether it would be necessary to identify specific elements that may be anticompetitive and test those (while leaving the rest of the algorithm intact). Additionally, it is not clear that recommendations from pricing algorithms would necessarily be higher than prices that would prevail in the but-for world—or that any price increase recommended by an algorithm would necessarily be anticompetitive. Finally, algorithm users’ ability to reject the recommendations they receive—and charge prices that are either above or below what was recommended—raises further questions about the prices that would have prevailed in the but-for world.

Economic Considerations For But-For Worlds in “Algorithmic Collusion” Litigation

What Is The Relevant But-For World?

An important threshold issue is that “the effects of an algorithm on prices” may not be the relevant question for assessing antitrust injury and damages. For example, the terms of the proposed settlement between the DOJ and Cortland Management LLC (a landlord defendant in the RealPage matter) included a provision that Cortland would be “barred from [u]sing competitors’ competitively sensitive data to train or run any pricing model.” This suggests a relevant but-for world in which third-party algorithmic pricing models still exist and are used, but are neither “trained” nor “run” on what is determined to be “competitors’ competitively sensitive data.”

The but-for world contemplated by the Cortland consent decree is complex because it suggests a need to assess historical pricing recommendations from a version of the algorithm that did not exist—and potentially would not have existed. In order to assess whether a consumer has suffered antitrust injury in this but-for world, it may be relevant to assess not what a manually-set price would have been, but what price the third-party algorithm would have recommended for that consumer’s transaction had it not incorporated “competitors’ competitively sensitive data”—but did incorporate other data (such as publicly available information and the algorithm user’s own internal data). Would that version of the algorithm have necessarily recommended lower prices in this but-for world?

Assessing this question would require an understanding of how the particular algorithm at issue works, how it incorporates various sources of data, and how it would perform if the sets of input data were changed. As a practical matter, it may be necessary to re-create the algorithm as it existed at the time of a particular transaction with a consumer, adjust the input data sets on which the algorithm relied, and assess what prices it would have recommended if only the adjusted data were used. However, a relevant consideration is that the algorithm itself—which is, ultimately, a set of computational rules for translating input data about supply and demand conditions into pricing recommendations—may have been written differently if the software developers knew the input data would be different.
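To make the re-creation problem concrete, consider a deliberately stylized recommendation rule. The data sources, weights, and function name below are assumptions for illustration; no real vendor's algorithm is this simple, and the but-for weights are themselves a modeling choice an expert would have to defend.

```python
# Stylized pricing rule with three data sources (weights are assumed).
def recommend(own_avg, public_index, competitor_avg=None):
    if competitor_avg is None:
        # But-for variant: competitor data excluded, with weight shifted
        # to the remaining inputs (itself an assumption about how the
        # but-for algorithm would have been designed).
        return 0.6 * own_avg + 0.4 * public_index
    return 0.5 * own_avg + 0.2 * public_index + 0.3 * competitor_avg

# Actual world: competitor average rent is high relative to other inputs.
actual_rec = recommend(own_avg=1500.0, public_index=1450.0,
                       competitor_avg=1600.0)
but_for_rec = recommend(own_avg=1500.0, public_index=1450.0)

# With a lower competitor average, the ordering flips: the but-for
# recommendation would exceed the actual one.
actual_rec_low = recommend(own_avg=1500.0, public_index=1450.0,
                           competitor_avg=1400.0)
```

Even in this toy setting, whether excluding competitor data lowers the recommendation depends entirely on how competitor prices compared to the other inputs at the time of the transaction, which is the empirical question the text describes.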

In addition to the possibility that the pricing algorithm may have been developed differently in a but-for world where use of certain input data sets was not allowed, this issue could be further complicated by algorithms based on machine learning, which “gives computers the ability to learn without explicitly being programmed.” The recommendation-generating process (including how competitors’ competitively sensitive data may have been used) in these models could itself change over time as the model ingests more information and evolves. Thus, in order to determine whether the price in a particular transaction with a consumer was elevated by an allegedly anticompetitive practice, the relevant recommendation-generating and decision-making processes would need to be from the time of the transaction, not from the ex-post period of the litigation. Recreating earlier iterations of these processes, and applying them to alternative sets of inputs, after the algorithm has evolved may prove to be challenging.

It is also important to note that the issues with respect to the actual and but-for algorithms and data inputs would not necessarily be the same for all users of a particular pricing software. Just as competing firms may pursue distinct strategies when setting their prices manually, they may also do so when using a third-party pricing software. For example, by using a third-party algorithm strategically—e.g., by customizing the parameters of how the algorithm incorporates and analyzes various sources of data to generate recommendations, or by strategically overriding an algorithm’s recommendations—a particular user may alter how (if at all) its competitors’ sensitive data contributes to the recommendations the algorithm generates. As one meta-analysis of the academic literature put it, “using the same third-party algorithm does not mean using an identical pricing algorithm.” This means determining but-for prices may involve not only the need to understand algorithm mechanics but also each individual user’s strategy with respect to using the tool.

Are Algorithmic “Starting Point Price” Recommendations Necessarily Supracompetitive?

The DOJ’s argument in Duffy—that “agreeing” to use a third-party algorithm to generate “starting points” of prices constitutes price fixing, even if user firms ultimately deviate from those starting points—is generally consistent with private plaintiffs’ claims of per se illegal price fixing. However, the questions of antitrust injury and damages ultimately depend on what consumers actually paid and what they would have paid in a reliably constructed but-for world.

There is no consensus in the economic literature on even certain key theoretical points, such as whether third-party algorithms necessarily produce supracompetitive price recommendations on average (much less to all consumers). As one paper succinctly put it, “economic theory provides ambiguous and conflicting predictions about the association between algorithmic pricing and competition”—e.g., whether “collusive” outcomes (like elevated prices) from the use of algorithmic pricing strategies are likely (or even inevitable), simply in the range of possibilities, or actually unlikely. Thus, there would be no basis to start from the assumption that use of a third-party pricing algorithm—even if that algorithm was trained or run on what is determined to be “competitors’ competitively sensitive data”—necessarily generated price recommendations that were supracompetitive.

Additionally, the economic literature discusses pro-competitive benefits of pricing algorithms such as the ability to set prices that more quickly and precisely adapt to changing economic conditions. That is, an algorithm may change how supply and demand conditions affect prices, e.g., to more quickly and efficiently identify a need to lower prices when demand declines, or an opportunity to unilaterally increase prices as demand rises. This suggests that even higher prices may be consistent with enhanced economic efficiency, so a simple observation of higher “starting point” prices would not necessarily establish antitrust injury.

Assessing the but-for world may also require reconstructing the price-setting process, as well as the supply and demand relationships, that would have existed if the firm(s) at issue did not subscribe to the third-party algorithm (or used an alternative version of the algorithm). It would not necessarily be the case that all algorithm users’ but-for price-setting processes would be the same, or that they would all necessarily yield lower prices. For example, in a situation where manual pricing (or pricing based on a more poorly trained algorithm) is less responsive to economic conditions, those prices may decline more slowly when demand declines. In that situation, a better-trained algorithm would have reduced prices faster and made consumers better off.
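The responsiveness point can be illustrated with a small simulation. The adjustment speeds, the demand shock, and the partial-adjustment rule below are all assumptions; the sketch shows only that a faster-adjusting price path can leave consumers paying less after demand falls.

```python
# Partial-adjustment pricing after a negative demand shock (all numbers
# are assumptions for illustration).
def adjust(price, target, speed):
    """Move a fraction `speed` of the way toward the target each period."""
    return price + speed * (target - price)

TARGET = 80.0                 # post-shock market-clearing price
manual_p = algo_p = 100.0     # pre-shock price
manual_paid = algo_paid = 0.0
for _ in range(6):            # six pricing periods after the shock
    manual_p = adjust(manual_p, TARGET, speed=0.2)  # slow manual repricing
    algo_p = adjust(algo_p, TARGET, speed=0.8)      # responsive algorithm
    manual_paid += manual_p   # cumulative per-unit amount paid
    algo_paid += algo_p

# The faster-adjusting path tracks the lower market-clearing price
# sooner, so cumulative payments are lower under the algorithm.
```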

If an algorithm facilitates a user firm’s unilateral goal of pricing more closely in line with underlying supply and demand conditions—by, e.g., minimizing the effect of error and bias involved in human decision making—then assessing antitrust injury and damages may also involve distinguishing the effects of unilateral closer alignment with the underlying economic drivers from the anticompetitive effects of the alleged “collusion.” Constructing a reliable but-for world may also involve determining what relationships would have existed between prices and supply and demand conditions. Without making this determination, it would be difficult to causally isolate the price effects attributable specifically to the alleged anticompetitive conduct from all the other factors affecting the algorithmic recommendation.

It is also important to be careful about analogizing “starting point prices” as recommended by pricing software to “list price collusion” theories often offered in price fixing cases. The idea behind “list price collusion” theories is that even if individual customers were able to negotiate different discounts off the list price, an agreement to elevate a list price from which the negotiations begin can nonetheless lead to elevation in final prices. However, it is not clear that a “list price elevation” theory reflects the relevant analysis of antitrust injury and damages in “algorithmic collusion” litigation. For example, to the extent an algorithm more efficiently reflects underlying supply and demand conditions, it could lower “starting point prices” relative to a world where pricing was done manually (or based on a more poorly trained algorithm)—not just increase them.

Algorithmic price recommendations are also not necessarily a starting point for negotiations with a buyer. As we discuss in more detail below, a seller may adjust an algorithmic price recommendation up or down before presenting that adjusted price to prospective buyers. For example, a firm may decide that the marketplace can support a higher price than the algorithm recommended—not by virtue of negotiation, but as a result of ad hoc assessment of supply and demand conditions. In this case, the algorithmic “starting point” would be below what the seller unilaterally believes it can charge—and the consumer would not have been presented with the algorithmic recommendation as a starting point for negotiation. Thus, an “agreement to adopt an algorithm” is not necessarily analogous to an “agreement to elevate a list price.”

The notion that deviations of actual prices from “list prices” or “starting point prices” require rigorous analysis, and the finding that “list prices could not be used to measure antitrust impact on a basis common to the class,” date back to Hydrogen Peroxide. When assessing what prices would have been paid in the but-for world, it is important to consider whether any particular algorithmic recommendation was adopted, changed, or overridden by the user. It is also important to assess how (if at all) a particular algorithmic recommendation affected actual prices in instances where it was not directly adopted.

How Does Users’ Behavior With Respect To Adopting or Rejecting Algorithmic Recommendations Affect Prices?

Collusive agreements are characterized by an incentive for each member to cheat, as “each firm would like to lower its price, increase its output and market share, and thereby increase its profits. But if each one did so, collusion would immediately dissolve into competition.” Thus, the success of a collusive agreement relies (in part) on participants having a way to detect and punish deviations from the agreement. It is not clear that “algorithmic collusion” necessarily provides such an ability, or that even “adopting” an algorithm’s recommendation would result in antitrust injury or damages.

As a practical matter, a firm using a third-party pricing software may have little (or no) visibility into what the software recommends to other users, whether those other users adopt or reject the recommendations, or whether a higher or lower price is offered in place of a rejected recommendation. It is also not necessarily the case that a firm would deviate from (or “cheat on”) an algorithmic recommendation by offering a lower price (as would be the case in a standard cartel framework). For example, the algorithm could recommend prices lower than what the seller believes the marketplace can sustain (e.g., because there is a real-life “demand shock” that the algorithm does not know about or has not accounted for). In this scenario, if the algorithmic recommendation is accepted, then the price offered to consumers is lower than the but-for price that would have been determined manually. If the recommendation is rejected, then the “deviation” could be to a higher-than-“starting point” price—the opposite of typical “cheating” under a collusive arrangement.

As we discussed above, the DOJ and private plaintiffs have argued that algorithm users address the issue of cheating by “delegat[ing] aspects of decisionmaking on prices” to the software. These arguments are often presented in tandem with “compliance rates”—statistics meant to suggest that the “delegation” occurs frequently and thus the mechanism for individual users to deviate from supracompetitive cartel prices is limited or removed. For example, the DOJ alleged that multi-family apartment owners adopted RealPage’s pricing recommendations 40 to 50 percent of the time, and Gibson plaintiffs alleged that Defendant hotels accepted the recommendations “90% of the time.”

The issue of compliance with an alleged “agreement to adopt an algorithm” is important because it is not clear how an algorithm would sustain supracompetitive price levels if its recommendations could be ignored and overridden. The academic literature has thus far provided little insight into approaches for assessing whether a particular algorithmic pricing recommendation (or a manually set price charged by a firm that subscribes to a third-party algorithm) is at a supracompetitive level. One recent paper proposed a “test” to assess how, in general, adoption of a particular algorithm affects prices:

If a third-party developer is part of an agreement with competitors, it will then design the pricing algorithm differently than if there is no agreement. More specifically, if adoptions are coordinated then adopters’ prices will be increasing in the adoption rate (i.e., the fraction of firms who adopt) and, on average, adopters will price higher than non-adopters. In contrast, if firms’ adoption decisions are independent then adopters’ prices do not change with the adoption rate and, on average, adopters and non-adopters price the same.

Put differently, the test proposed by this paper suggests inferring a “collusive agreement” from a finding that average prices increased as more firms subscribed to a particular third-party algorithm.
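The logic of the proposed test can be illustrated with a simple simulation. The sketch below is purely hypothetical: the price levels, noise, markup schedule, and number of firms are all assumptions chosen for illustration, not estimates drawn from any actual case or dataset. It shows how, under coordinated adoption, the gap between adopters’ and non-adopters’ average prices grows with the adoption rate, while under independent adoption it does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_market(adoption_rate, coordinated, n_firms=200):
    """Simulate one market: each firm posts a price; 'adopters' subscribe
    to the algorithm. Under coordination, adopters' markup rises with the
    fraction of firms that adopt; under independent adoption it does not.
    All parameter values are illustrative assumptions."""
    adopter = rng.random(n_firms) < adoption_rate
    base = 100 + rng.normal(0, 2, n_firms)        # competitive benchmark price
    markup = 10 * adoption_rate if coordinated else 0.0
    return base + markup * adopter, adopter

# Compare the adopter vs. non-adopter price gap at low and high adoption rates
for coordinated in (False, True):
    gaps = []
    for rate in (0.2, 0.8):
        prices, adopter = simulate_market(rate, coordinated)
        gaps.append(prices[adopter].mean() - prices[~adopter].mean())
    print(f"coordinated={coordinated}: gap at 20% adoption = {gaps[0]:.1f}, "
          f"at 80% adoption = {gaps[1]:.1f}")
```

In this stylized setup, only the coordinated scenario produces a price gap that increases with the adoption rate, which is the pattern the test would treat as evidence of a collusive agreement.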

The test described above has some parallels to “dummy variable” regression models that are often proposed in price fixing litigation to estimate overcharge damages. These models, when properly implemented, attempt to capture the price elevation (if any) on sales during a “conspiracy period,” as compared to sales during a “benchmark period”—accounting for other relevant economic factors. However, this test does not distinguish a user “adopting an algorithm”—in the sense of a firm subscribing to a software product—from “adopting the particular pricing recommendation produced by the algorithm.” A firm may subscribe to a software product but not use at least some of its recommendations. Determining which sales took place at algorithmically recommended prices and which were the result of manual overrides or processes (or, algorithmic processes that did not rely on competitors’ competitively sensitive data) is likely to be complex.
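For readers unfamiliar with the dummy-variable approach, the following is a minimal sketch of how such a model works, using entirely synthetic data. The built-in overcharge of $5, the cost driver, and the period lengths are assumptions made for illustration; real overcharge models involve many more covariates and careful benchmark-period selection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly price data: a benchmark period followed by an alleged
# "conspiracy" period. All values are illustrative assumptions.
n = 120                                            # 10 years of months
conspiracy = (np.arange(n) >= 60).astype(float)    # dummy: 1 in alleged period
cost = 50 + rng.normal(0, 3, n)                    # an observable cost driver
true_overcharge = 5.0                              # built-in price elevation
price = 20 + 1.2 * cost + true_overcharge * conspiracy + rng.normal(0, 2, n)

# OLS of price on an intercept, the cost driver, and the conspiracy dummy.
# The dummy's coefficient is the estimated overcharge per unit.
X = np.column_stack([np.ones(n), cost, conspiracy])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"estimated overcharge per unit: {coef[2]:.2f}")  # near 5, up to noise
```

The dummy coefficient recovers the price elevation attributable to the conspiracy period after controlling for the cost driver—which is precisely why such models break down if sales at recommended and non-recommended prices cannot be separated.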

Consider a situation where Firm A adopts the price recommendation proposed to it by the third-party algorithm 90 percent of the time and Firm B adopts the recommendation proposed to it 50 percent of the time. Both firms have “adopted the algorithm,” in that they have subscribed to a particular software. However, even assuming for the sake of argument that every algorithmic recommendation is the result of a “collusive agreement,” the “agreement” in this hypothetical has been rejected (or “cheated on”) 10 percent of the time by one user and 50 percent of the time by the other. Moreover, calculating aggregate “compliance rates” may obscure the complexity of determining which specific transactions fall into which category, if adoption and override decisions (much less what data contributed to generating the recommendation) are not recorded for each individual transaction.
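The arithmetic of this hypothetical can be made concrete. In the sketch below, the recommendation counts are invented for illustration; the point is only that a pooled “compliance rate” can mask sharply different firm-level behavior.

```python
# Hypothetical: two subscribers with very different per-recommendation
# adoption rates (counts are assumptions for illustration).
firms = {"A": {"recs": 1000, "adopted": 900},   # 90% adoption
         "B": {"recs": 1000, "adopted": 500}}   # 50% adoption

for name, f in firms.items():
    print(f"Firm {name}: adopted {f['adopted'] / f['recs']:.0%} of recommendations")

# An aggregate "compliance rate" pools both firms and masks the difference.
total_recs = sum(f["recs"] for f in firms.values())
total_adopted = sum(f["adopted"] for f in firms.values())
print(f"aggregate compliance rate: {total_adopted / total_recs:.0%}")  # 70%
```

A single pooled figure of 70 percent says nothing about whether any individual firm’s pricing discretion was meaningfully constrained.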

Importantly, “adoption” of an algorithm-recommended price also does not necessarily mean that price is supracompetitive or that a user has “delegated” its ability to make unilateral pricing decisions. It is also not necessarily the case that the rate at which pricing recommendations produced by the algorithm are adopted is a meaningful metric for assessing whether a user’s incentive and ability to cheat on the alleged “agreement” was constrained. This could be the case, for example, in a situation where no sales actually took place at some of the “adopted” algorithmically recommended prices.

Consider a situation where an algorithm recommends a price of $100 for four straight days. Each day, the user adopts the recommendation but is unable to make a sale at that price. On the fifth day, the algorithm again recommends a price of $100 but the user overrides the recommendation and manually sets the price at $80. At this price, the algorithm user is able to find a customer and a sale is made. In this hypothetical, the user’s “adoption” rate is 80 percent if measured by prices posted, but zero if measured by actual sales. In addition to being set manually, the actual price at which the sale was ultimately made—a key input into the analysis of antitrust injury and damages—was below the algorithm’s recommendation.
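This five-day hypothetical can be expressed as a small calculation showing how the measurement basis drives the “adoption” rate. The log below simply encodes the numbers from the hypothetical above; no real transaction data is involved.

```python
# Hypothetical five-day log for one seller: the algorithm's daily
# recommendation, the price actually posted, and whether a sale occurred.
days = [
    {"recommended": 100, "posted": 100, "sold": False},
    {"recommended": 100, "posted": 100, "sold": False},
    {"recommended": 100, "posted": 100, "sold": False},
    {"recommended": 100, "posted": 100, "sold": False},
    {"recommended": 100, "posted": 80,  "sold": True},   # manual override
]

adopted = [d["posted"] == d["recommended"] for d in days]
rate_by_posting = sum(adopted) / len(days)

sales = [d for d in days if d["sold"]]
rate_by_sales = sum(d["posted"] == d["recommended"] for d in sales) / len(sales)

print(f"adoption rate by posted prices: {rate_by_posting:.0%}")  # 80%
print(f"adoption rate by actual sales:  {rate_by_sales:.0%}")    # 0%
```

The same conduct yields an 80 percent adoption rate measured by posted prices and a zero percent rate measured by consummated transactions—the prices at which injury and damages would actually be assessed.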

This example suggests that an important part of assessing the but-for world relates to what is referred to in cartel theory as the “enforcement mechanism.” Software users having the option to reject the algorithm’s recommendation (with their competitors having no ability to “punish” or even observe this decision) appears to be contrary to the notion that the user’s incentive and ability to cheat on the alleged “agreement” was constrained—irrespective of the frequency with which the option to reject is exercised.

Conclusion

As clarity emerges about which allegations concerning the adoption and use of pricing algorithms might state a claim for violation of Section 1 of the Sherman Act, attention in the multi-front class action litigation will increasingly turn to the more difficult, as-yet unexplored questions of whether plaintiffs can prove that competitors’ adoption of a third-party pricing algorithm actually results in supracompetitive prices. To date, there are more questions than answers, and those questions may not be susceptible to easy answers.