
Antitrust Magazine

Volume 39, Issue 1 | Fall 2024

Can the Use of Pricing Algorithms Lead to Collusive Outcomes? Insights and Practical Approaches from the Economic Literature

Chunying Xie, Gabriella Monahova, and Kate Foreman

Summary

  • Review of the economic research on how the use of pricing algorithms may lead to collusive outcomes or supracompetitive prices
  • Description and explanation of the mechanisms through which algorithms can set elevated prices and the assumptions underlying them
  • Practical advice based on this literature for investigators trying to determine whether pricing levels are indeed elevated and whether that is the result of tacit coordination between algorithms


Pricing algorithms, especially those based on AI technology, have gained popularity among firms and attention from governments around the world. One concern for government authorities is that pricing algorithms can help achieve and sustain collusive prices more easily than humans can. For example, in a recent joint statement on generative AI models, the US, UK, and European competition authorities highlighted “the risk that algorithms can allow competitors to share competitively sensitive information, fix prices, or collude on other terms or business strategies in violation of our competition laws.” The debate on whether explicit, intentional collusion is more easily achieved or harder to detect when firms use pricing algorithms is ongoing. However, there is another question that enforcers have to grapple with: can pricing algorithms “collude” without being specifically instructed to do so?

In economics, the term “collusion” implies firms’ setting prices in a coordinated manner, often through a scheme that involves rewards for compliance and punishment for defection. Because collusive prices are often the result of maximizing the combined profits of the colluders—rather than their individual profits, as in a competitive situation—the resulting prices are supracompetitive. But supracompetitive prices can also be the result of market frictions such as high search costs. This fact complicates the assessment of coordinated behavior if the possibility exists that pricing algorithms can achieve supracompetitive prices without explicit collusive agreements.

Recent economic research on the potential for tacit algorithmic collusion demonstrates that algorithms can learn to sustain prices above competitive levels without being instructed to collude—even when specifically instructed to act competitively.

The Recent Economics Literature

A recent strand of the academic literature in economics asks: Can pricing algorithms reach and sustain collusion without human instruction to do so? This literature focuses on the theoretical possibility of algorithmic collusion without human intervention and shows that supracompetitive prices can be sustained by algorithms that have not been explicitly asked to set prices in a coordinated manner. As we explain below, it demonstrates that algorithms can reach and sustain prices above the competitive level without firms (or indeed the algorithms themselves) intending to set prices jointly or otherwise collude in the traditional sense.

In their 2020 article “Artificial Intelligence, Algorithmic Pricing, and Collusion,” Calvano et al. explore whether pricing algorithms can “autonomously” learn to collude without explicit communication. The authors’ baseline (and highly stylized) model assumes two firms facing symmetric cost and demand conditions and selling differentiated products. Under these conditions, the equilibrium prices that emerge under a competitive equilibrium, under collusion, and under monopoly in a single-period (i.e., non-repeated) interaction between the firms can be easily calculated. The authors then program algorithms that choose a price in each period of a repeated game and observe the resulting profits. After a learning period, the algorithms may “converge” to a stable price for each firm. The authors find that the algorithms consistently learn to charge prices higher than the competitive Nash equilibrium of the single-period game but “rarely as high as” the monopoly price.

To understand this concept of “convergence,” it is helpful to explain how the pricing algorithms decide what price to set in each period. Calvano and his co-authors assume that firms use Q-learning algorithms, which are widely used by computer scientists and which can learn optimal actions through trial and error. To simplify the process, Calvano et al. assume that in each period, a firm’s algorithm can choose one of a finite set of prices within a pre-determined interval, ranging from below the competitive price in a single-period game to above the monopoly price. The authors also assume that the algorithms “remember” only the previous period’s prices of all suppliers.

In each period, before setting a price, each algorithm chooses either to “exploit” or to “explore.” If it exploits, the algorithm selects the price with the highest estimated long-run payoff given what it has learned so far. If it instead explores, the algorithm sets a random price within the pre-determined range. Whether the algorithm exploits or explores in a given period is governed by a pre-determined probability. The authors assume that the algorithm initially explores more, but as it learns, it increasingly exploits the best-known strategies. Intuitively, this assumption means that the algorithm initially gathers information about the market environment by trying various prices to learn the “optimal” pricing strategies. In later periods, once the algorithm has learned about the market environment and the pricing strategies that increase its profit, it is more likely to exploit those strategies. The declining rate of exploration also means that the algorithms may ultimately converge to stable behavior.
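The explore/exploit mechanics described above can be sketched in a few lines. The following is a minimal illustration of an epsilon-greedy Q-learning pricer, assuming a small discrete price grid and a time-decaying exploration rate; the grid, parameter values, and decay schedule are our own simplifications, not the specification used by Calvano et al.

```python
import math
import random

# Illustrative sketch of the explore/exploit choice in an epsilon-greedy
# Q-learning pricer. The price grid, learning rate, discount factor, and
# decay schedule below are invented for exposition.

PRICES = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]  # finite grid of feasible prices
ALPHA, GAMMA = 0.1, 0.95                 # learning rate, discount factor

def epsilon(t, beta=1e-4):
    """Exploration probability, decaying over time: explore a lot early,
    exploit learned strategies later."""
    return math.exp(-beta * t)

def choose_price(q_row, t, rng=random):
    """With probability epsilon(t), explore (set a random price);
    otherwise exploit the price with the highest estimated value."""
    if rng.random() < epsilon(t):
        return rng.randrange(len(PRICES))
    return max(range(len(PRICES)), key=q_row.__getitem__)

def q_update(q, state, action, reward, next_state):
    """Standard Q-learning update: move the value of (state, action)
    toward the realized profit plus the discounted value of the best
    action in the next state."""
    best_next = max(q[next_state])
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])
```

In the paper’s setup, the state would encode the previous period’s prices of all sellers, consistent with the one-period memory assumption described above.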

There are two important takeaways regarding this convergence process. First, due to the complexity of the simulation and the stochastic (that is, randomly determined) nature of its elements, convergence is not always guaranteed. Second, even if prices remain stable for an extended period, one-off price deviations remain possible as the algorithms continue to explore, though at a much lower rate. According to Calvano and his coauthors, the algorithms are considered to have converged when the optimal strategy for each player remains unchanged for 100,000 consecutive periods. These two lessons have significant implications for analyzing concerns about elevated prices caused by pricing algorithms in practice. For example, there may be a need to differentiate whether price elevations are caused by “exploration,” as algorithms try to learn market conditions, or by “exploitation,” as algorithms apply the optimal pricing strategy they have discovered. It is also of practical importance that while nearly all the simulation sessions the authors conducted converged, convergence took a long time—anywhere between 400,000 and several million periods. To put this in context, if the algorithms update prices every minute, 400,000 periods is equivalent to 278 days to convergence. This raises the question of whether such “convergence” can be practically achieved in the real world or is only a theoretical possibility.
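The convergence criterion described above (the optimal strategy staying unchanged for 100,000 consecutive periods) can be checked mechanically, as can the periods-to-days arithmetic. The helpers below are hypothetical illustrations of those two checks, not code from the paper.

```python
def has_converged(strategy_history, window=100_000):
    """Convergence in the sense used by Calvano et al.: the greedy
    strategy (the argmax price for each state) has remained unchanged
    for `window` consecutive periods."""
    if len(strategy_history) < window:
        return False
    tail = strategy_history[-window:]
    return all(s == tail[0] for s in tail)

def periods_to_days(periods, updates_per_minute=1):
    """Convert simulation periods to calendar days under an assumed
    repricing frequency: 400,000 one-minute periods is roughly 278 days."""
    return periods / (updates_per_minute * 60 * 24)
```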

The Calvano group finds that in the baseline model, upon convergence, the algorithms often learn to set prices above the competitive benchmark. The authors explore various market conditions to assess how the result may differ. For example, they find that increasing the number of firms from two to three or four reduces the likelihood of the algorithms’ achieving collusion but does not eliminate it. The authors also find that random entry and exit of an outsider firm significantly reduces the likelihood of collusion. This changing market condition also makes it harder for algorithms to find a stable strategy—i.e., to achieve convergence. As another example, the authors find that allowing uncertainty in demand could prolong the convergence process but would not eliminate the possibility of collusion.

Like the Calvano article, Timo Klein’s 2021 article “Autonomous Algorithmic Collusion: Q-learning under Sequential Pricing” explores whether Q-learning algorithms can autonomously learn to collude. Unlike Calvano et al., who assume that algorithms choose prices simultaneously, Klein (2021) assumes that algorithms interact sequentially: they take turns setting prices and cannot change their price when it is not their turn. Consistent with the finding in Calvano, Klein finds that there is no guarantee that Q-learning algorithms will converge to a specific outcome, but when they do converge, the resulting price level can be above the competitive benchmark.

A third recent article, by Brown and MacKay, demonstrates that supracompetitive prices can arise even when firms—and their algorithms—have been instructed to act competitively, i.e., each algorithm chooses a price that maximizes the firm’s own profits. Using data on the prices of popular allergy drugs scraped from retailers’ websites, the authors document three facts: (1) sellers update prices at regular intervals, and these intervals differ across sellers; (2) sellers react to price changes by competitors in a manner consistent with the use of automated pricing algorithms; and (3) sellers with faster algorithms have lower prices than those with slower technology. Motivated by these empirical findings, the authors build a model in which sellers have asymmetric pricing algorithms, allowing some sellers to update prices faster than others in response to price changes by competitors. The use of pricing algorithms also enables sellers to commit to a pricing strategy (e.g., undercut the rival by $X when the rival changes price), because the algorithms change prices more frequently than humans update the algorithms.

This commitment and the asymmetry between the pricing algorithms lead to prices that are higher than those that would obtain in a competitive equilibrium. Specifically, the authors find prices that are 5% higher than the competitive level, leading to a 4% decrease in consumer surplus and close to a 10% increase in firm profits. These higher prices come about because the firm that is slower to update its prices knows that the faster firm can undercut it through more frequent repricing, and so refrains from decreasing price as it would if pricing were not automated or if the firms’ technologies were equally fast. Firms with the most advanced pricing algorithms (in terms of frequency of price changes) and the largest market shares realize the biggest gains. The authors also expand the model to allow firms to choose their pricing technology, with a bigger investment obtaining an algorithm that updates prices faster. Interestingly, in equilibrium, firms choose asymmetric algorithms, which in turn lead to elevated prices. The authors note that “if policymakers are concerned that algorithms will raise prices, then the concern is broader than that of collusion.” They point out that to address this issue, policymakers would have to prohibit firms from instructing their algorithms to condition their pricing rules directly on rivals’ prices, which may be difficult or undesirable to implement. The authors propose that a similar result can be achieved by limiting the frequency or scope of scraping of rivals’ prices or the storage of those prices. But they note that these measures do not fit well under current antitrust and regulatory regimes, so further consideration is needed to develop appropriate solutions.
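The commitment effect can be illustrated with a toy sequential-pricing exercise: a “slow” firm posts a price knowing that the “fast” firm’s algorithm will best-respond to it, and the anticipation of that reaction keeps prices above the simultaneous benchmark. The demand system, cost, price-sensitivity, and grid below are invented for illustration and are not Brown and MacKay’s calibration; the sketch shows only the qualitative pattern.

```python
import math

# Toy logit duopoly. The "fast" firm's automated rule best-responds to
# whatever the "slow" firm posts; the slow firm anticipates that
# reaction. All parameter values are illustrative assumptions.

COST, A = 1.0, 2.0
GRID = [1.0 + 0.01 * k for k in range(201)]  # candidate prices 1.00-3.00

def profit(p_own, p_rival):
    """Per-period profit with logit market shares (shares sum to one)."""
    share = 1.0 / (1.0 + math.exp(A * (p_own - p_rival)))
    return (p_own - COST) * share

def best_response(p_rival):
    """The fast firm's rule: the own-profit-maximizing price against
    the rival's posted price."""
    return max(GRID, key=lambda p: profit(p, p_rival))

def simultaneous_nash():
    """Benchmark: iterate best responses toward the symmetric
    Bertrand-Nash price (analytically COST + 2 / A = 2.0 here)."""
    p = GRID[0]
    for _ in range(100):
        p = best_response(p)
    return p

def slow_firm_price():
    """The slow firm commits first, choosing the price that is best
    *given* the fast firm's anticipated reaction."""
    return max(GRID, key=lambda p: profit(p, best_response(p)))
```

In this sketch the slow firm’s committed price, and the fast firm’s response to it, both settle above the simultaneous Nash price, echoing (qualitatively, not in magnitude) the mechanism described above.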

It is worth noting that while the economics literature discussed above focuses on algorithms that directly set prices, another strand of the literature studies algorithms that allow firms to better predict demand and evaluates whether and how the adoption of those algorithms affects the sustainability of collusion. For example, in a 2021 article, O’Connor and Wilson assume an environment in which firms face demand uncertainty (e.g., a rainy day may increase the demand for umbrellas, but umbrella sellers may not observe the weather forecast before they choose prices). The authors allow firms to adopt algorithms that better predict demand—i.e., that eliminate some but not all of the uncertainty. In this setup, even after the adoption of algorithms, firms still choose their prices to maximize their own profits. O’Connor and Wilson find that the adoption of such algorithms has two potential opposing effects on collusion. First, when firms have better knowledge of the underlying demand conditions, collusion may be easier to sustain because firms can better discern rivals’ cheating from changes in demand and because the economic payoff from collusion increases as demand uncertainty falls. Second, the reduced demand uncertainty allows firms to better time their decisions to deviate from a collusive agreement, thereby making it harder to sustain collusion. O’Connor and Wilson find that, on net, the impact of the use of algorithms on the probability of collusive behavior depends on specific market characteristics (e.g., whether substantial demand uncertainty remains).

Practical Considerations for Assessing Autonomous Algorithmic Collusion

The academic literature discussed above demonstrates that pricing algorithms can sustain supracompetitive prices, but not that they necessarily or always do so. What are the practical implications of this possibility when investigators are concerned with detecting supracompetitive pricing? If there is a concern that prices in a given industry are above competitive levels and firms in the industry are believed to be using automated pricing, investigators would need to determine whether prices are indeed elevated and, if so, whether this is due to firms colluding intentionally, their pricing algorithms “accidentally” converging on such an outcome, or some other market friction.

To start, investigators would need to assess whether prices generated by algorithms are indeed supracompetitive. Using lessons from traditional cartel cases and the economic literature, we can determine whether the market(s) in question exhibit characteristics that are more likely to lead to collusion. These include stable supply and demand conditions, higher levels of concentration, barriers to entry, symmetric firms, cross-ownership and other links between competitors, lack of buyer power, multi-market contacts, and use of common data. In addition, we can apply the lessons from the recent academic literature reviewed above that focuses specifically on collusion between pricing algorithms. Following Calvano, we can look at the frequency with which prices are updated by the algorithms in question and determine whether the algorithms would plausibly have had time to converge to a stable price. Similarly, following Brown and MacKay, we can examine whether the pricing algorithms update prices at similar or different frequencies, since asymmetric algorithms drive the ability to achieve elevated prices in their model.

If these characteristics suggest a higher risk of collusion, investigators can turn to examining the pricing algorithms involved. This is often referred to as a technical audit. Following Calvano, the algorithms may be used to simulate prices to see whether they would reach “convergence” and how long it would take. Investigators may also adjust the algorithms to instruct them specifically to collude, i.e., to maximize joint rather than own profits, and compare the simulated prices to the observed pricing data. Using the insights from Brown and MacKay, investigators can examine whether the pricing algorithms are programmed to react to competitors’ price changes and whether they do so at different frequencies.

In addition, the observed pricing data can be used for traditional analyses of collusion, which examine prices and margins to determine whether they are higher than competitive levels. Particular difficulty arises when investigators do not know which firms have adopted algorithms and when, making it unclear how to establish the necessary competitive benchmarks. One approach to tackling this issue is to examine the price data for patterns that may indicate pricing by algorithms and patterns that may suggest collusion. Assad and his coauthors use this approach to analyze rich data on prices and product characteristics for every retail gas station in Germany. The authors know that pricing algorithms became widely available in 2017 but do not observe the adoption decision of each particular gas station. To determine which gas stations were most likely to have adopted algorithmic pricing and when, the authors conduct tests on the number of price changes made in a day, the size of those price changes, and how fast price updates happen in response to a change by a rival. Similarly, in a 2023 article, Hanspach, Sapi, and Wieting collect data on prices and product characteristics from the largest online retailer in Belgium and the Netherlands and define algorithmic sellers as those with a very high number of price changes over a particular period. They note that another indication of algorithmic pricing could be prices that correlate with other benchmarks, such as the lowest price in the market, the second-lowest price, or other sellers’ prices. Similar “structural break” testing can be performed when investigating possible algorithmic collusion in other markets. It is worth noting that such approaches may necessitate obtaining and analyzing very large volumes of data, which may be impractical in certain investigations.
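Screens of this kind can be prototyped simply. The sketch below counts daily price changes per seller and computes a crude before/after mean shift around a candidate adoption date; the flagging threshold and the break statistic are illustrative placeholders, not the tests used by Assad et al. or by Hanspach, Sapi, and Wieting.

```python
from collections import Counter
from statistics import mean

# Two illustrative screening heuristics: (1) unusually frequent
# repricing as a marker of automated pricing, and (2) a simple
# before/after comparison around a candidate adoption date.

def daily_price_changes(observations):
    """observations: time-ordered list of (day, price) tuples for one
    seller. Returns a dict mapping day -> number of price changes."""
    changes = Counter()
    prev = None
    for day, price in observations:
        if prev is not None and price != prev:
            changes[day] += 1
        prev = price
    return dict(changes)

def flag_algorithmic(observations, threshold=10):
    """Flag a seller whose average number of daily price changes
    exceeds an (illustrative) threshold."""
    changes = daily_price_changes(observations)
    return bool(changes) and mean(changes.values()) > threshold

def mean_shift(series, break_index):
    """Difference in mean daily change counts after vs. before a
    candidate adoption date: a crude structural-break screen."""
    before, after = series[:break_index], series[break_index:]
    return mean(after) - mean(before)
```

In practice these screens would run over millions of price observations per seller, which is why the data volumes mentioned above can become a binding constraint.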

Economists analyzing claims of collusion commonly focus on causation, which involves accurately linking observed price increases to alleged anticompetitive behavior. This focus is essential for determining whether price changes stem from collusion or other factors. This consideration also applies to pricing algorithms, which can be affected by various market frictions, such as transaction costs. These frictions can lead to higher prices independently of collusion. Therefore, it is crucial to separate the impacts of these frictions from the effects of the algorithms themselves or their specific applications.

One key question about causation arises: What is the “but-for” world, and what would competitive prices look like in it? One possibility is a world without any pricing algorithms, where the analysis would explore how prices would behave absent algorithmic influences. Assad and his coauthors use the prices before the adoption of pricing algorithms as the competitive benchmark, implicitly assuming that the but-for world is the scenario absent any pricing algorithm. Another possible but-for world could involve altering certain features of the pricing algorithm and examining how these changes might affect pricing outcomes. In this case, the simulation tools developed in the recent academic studies discussed above (e.g., Calvano et al.; Brown and MacKay) could help assess how changes in pricing algorithms affect pricing outcomes.

Conclusion

Recent economic research finds that pricing algorithms can theoretically sustain supracompetitive prices under some circumstances. However, this nascent strand of the literature does not determine conclusively that the use of algorithms always leads to elevated prices, so determining whether prices are indeed higher than they should be (based on a competitive benchmark) still requires a case-by-case analysis. Economic indicators and tools used in traditional cartel cases may still be applicable in studying potential collusion by pricing algorithms. Beyond those standard approaches, factors specific to pricing algorithms—e.g., the frequency at which algorithms update prices—may inform the likelihood of algorithms’ achieving supracompetitive prices. A proper analysis may need to consider what the but-for world is—e.g., no pricing algorithms or a different pricing algorithm. Lastly, the economic literature finds that algorithms that do not directly set prices but better predict demand conditions can have mixed effects on the likelihood of coordinated firm behavior.
