Third-party facilitation of collusion
In recent years, the role of third parties in facilitating collusion has received more attention. While most antitrust activity and economic analysis focuses on agreements involving only the actual members of the cartel, economists have also considered the idea that an outside party can facilitate collusion in certain contexts by helping the cartel overcome the challenges associated with coordination and information sharing. The focus has mostly been on the role of trade associations. For example, a 2020 empirical study found evidence suggesting that a trade association of physicians in Chile engaged in facilitating practices that significantly raised the prices charged by affiliated physicians. Earlier work draws on historical records to characterize a repeated interaction between a medieval city and its merchants, describing how city authorities helped previously competitive merchants organize into monopolistic guilds.
Some recent cartel cases have also involved hub-and-spoke arrangements, operating along the supply chain and featuring firms at one end helping firms at the other collude. The Department of Justice successfully prosecuted a hub-and-spoke case against Apple for raising ebook prices and for exclusionary conduct (resulting from a most-favored-nation clause). Although there have been few such cases, the earliest US case involving hub-and-spoke collusion, Interstate Circuit v. United States, dates back to 1939. In order to limit competition from subsequent-run movie theaters, an operator of first-run movie theaters acted as the hub, coordinating the behavior of motion picture distributors.
Despite the occurrence of such cases, hub-and-spoke cartels are difficult to rationalize from a theoretical perspective, and detecting collusion along the supply chain is challenging empirically. Theory suggests that manufacturers are motivated to restrict market power within the retail sector, and, conversely, that retailers aim to limit manufacturers’ dominance to avoid problems such as double marginalization and increased costs. Consequently, it remains unclear how this type of arrangement affects prices, and why this specific form of cartel emerges instead of suppliers or retailers colluding independently. From an empirical standpoint, the challenge for antitrust authorities lies in assessing the participation of firms at various supply chain levels in sustaining elevated markups.
Economists have only recently begun delving into the study of hub-and-spoke cartels. A number of explanations for such arrangements have been proposed. Sahuguet and Walckiers (2017) point out that when demand is volatile, profits within the vertical chain can potentially be boosted through the exchange of information between the supplier and retailers. Meanwhile, Van Cayseele and Miegielsen (2013), Giardino-Karlinger (2014), and Gilo and Yehezkel (2020) argue that incentives for maintaining a hub-and-spoke structure stem from rewards or the threat of refusal to supply imposed by the producer.
Clark et al. (2023) offer the first comprehensive analysis in the economics literature of an actual hub-and-spoke collusive arrangement. Their focus is the alleged cartel uncovered in Canada’s bread market. The allegations imply that collusion began at the end of 2001 and carried on for approximately 15 years. The authors find that hub-and-spoke collusion was effective in this case, raising prices significantly. They also present economic evidence suggesting that both suppliers and retailers were involved in the arrangement. Finally, they characterize a collusive arrangement in which suppliers facilitated retail-price coordination, while at the same time retailers facilitated supplier coordination. Clark et al. (2023) demonstrate that two features of many retail markets generate asymmetries that pose challenges for separate collusion by suppliers and retailers: large retail stores stock the products of competing wholesalers using a main-supplier/secondary-suppliers allocation of shelf space, with the main supplier responsible for providing certain services to the retailer in exchange for greater shelf space; and competition at the supplier level is imperfect. Clark et al. (2023) identify three vertical spillovers between the retail and wholesale segments of the supply chain that negatively impact the stability of a horizontal cartel in either segment. First, the disparity in retail shelf space allocated to the primary and secondary suppliers hampers independent collusion by the suppliers, as the secondary supplier is incentivized to deviate significantly. Second, the authors illustrate that manufacturers respond to retailer-only collusion by competing for shelf space in such a way as to augment the dispersion of offers.
Consequently, this competition for shelf-space leads to significant asymmetries in retailer costs, making retail collusion more arduous. Lastly, a supplier might gain from fostering competition in the retail market, redirecting customers away from a retailer where it serves as a secondary supplier and toward a retailer where it acts as the primary supplier. These spillover effects erode the capability of firms to engage in strictly horizontal collusion. A potential solution involves establishing a vertical collusive ring that includes wholesalers and retailers in a collective collusion agreement, aiming to mitigate asymmetries between firms.
In sum, third parties may be able to facilitate collusion in some contexts by helping cartels overcome the coordination and information-sharing challenges inherent in operating a cartel. Pricing algorithms are often developed by third-party software providers and are potentially sold to multiple competitors within the same market. These algorithms collect data from each competitor and use them to optimize prices. In this context, it is often alleged that algorithms may play a role analogous to that of the “hub,” sharing data across competitors and potentially coordinating their pricing jointly. We discuss whether this is a plausible argument, and the role of pricing algorithms in competition more generally, in the following section.
Algorithmic Pricing Software as a Third-Party Facilitator of Collusion
The development of new autonomous algorithmic pricing software that removes pricing decision-making from individuals and firms and allocates it to machine-learning algorithms has increased concerns about anti-competitive behaviour in general, and price coordination in particular. While firms have been using algorithms for pricing assistance for decades, recently pricing software has undergone an evolution from hard-coded rule-based systems to more autonomous and flexible data-driven machine-learning / artificial intelligence models. These algorithms possess increased ability to handle large volumes of data, and to accurately and quickly perform price calculations. They are also able to learn from the past, and model the future, more accurately than ever before.
Real-world information about algorithmic pricing software is sparse, since software developers and providers do not want to disclose specifics about their proprietary algorithms’ functions and specifications. Nonetheless, their promotional materials convey some sense of the functionalities of the software. Descriptions of the algorithms portray these systems as leveraging artificial intelligence and machine learning. These descriptions highlight the algorithms’ capacity to assimilate data concerning market conditions, both their own and rival companies’ pricing, sales volumes, and expenses. The algorithm undergoes training using historical data on these variables and integrates real-time information for current decision-making processes. Subsequently, the resulting outcomes derived from the chosen prices feed back into the algorithm as supplementary inputs, initiating the cycle anew. Employing reinforcement learning methods, these algorithms favor strategies (i.e., selected prices) that previously demonstrated success in increasing profits, increasing the likelihood of their future use. Ultimately, the promotional materials describe higher profits for the adopting firm.
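As a rough illustration of the reinforcement-learning loop these materials describe (a minimal sketch with hypothetical numbers, not any vendor’s actual system), consider an agent that gradually favors the price level that has earned it the most profit:

```python
import random

# Illustrative sketch only (not any vendor's actual system): a pricing
# agent learns which price level earns the most profit, using a simple
# reinforcement-learning (bandit) update with epsilon-greedy exploration.
# All numbers are hypothetical.

PRICES = [1.0, 1.5, 2.0, 2.5]   # discrete grid of candidate prices
EPSILON = 0.1                    # exploration rate
ALPHA = 0.2                      # learning rate

def simulated_profit(price, rival_price=2.0, cost=0.5):
    """Toy linear demand: sales fall with own price, rise with rival's."""
    quantity = max(0.0, 10 - 4 * price + 2 * rival_price)
    return (price - cost) * quantity

q = {p: 0.0 for p in PRICES}     # estimated profit for each price

random.seed(0)
for _ in range(5000):
    # explore occasionally; otherwise exploit the best-known price
    if random.random() < EPSILON:
        price = random.choice(PRICES)
    else:
        price = max(q, key=q.get)
    reward = simulated_profit(price)
    # move the estimate for the chosen price toward the realized profit,
    # making previously successful prices more likely to be chosen again
    q[price] += ALPHA * (reward - q[price])

best_price = max(q, key=q.get)   # settles on the most profitable price
```

Real systems add state (market conditions, rivals’ prices) and far richer models, but the feedback cycle is the same: chosen prices generate outcomes that feed back into the algorithm as inputs for the next decision.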
Coordination Across Algorithmic Software
The worry is that competing algorithms, tasked with maximizing profits while tracking competitor prices and modelling consumer responses, will “understand” that higher prices and profits are mutually beneficial, and will therefore engage in collusive strategies to maintain high prices and profits: for example, by penalizing competitors who lower prices. The end results are high prices and profits, and lower consumer welfare. Put otherwise, algorithms would work to facilitate implicit / tacit collusion. This has been the general view expressed by multiple antitrust agencies and experts in competition law over the last decade.
The economics literature has been notably slow to embrace the idea that algorithms can increase prices by softening competition. In theory, tacit collusion arises without explicit communication between competitors, based simply on competitors’ expectations that their rivals will punish them for deviating from a high price. It is not clear that algorithms are capable of forming such expectations in the absence of explicit communication. More recently, however, the literature has proposed a number of mechanisms through which sophisticated autonomous pricing algorithms can raise prices.
First, new types of algorithms may be better at understanding consumer preferences and predicting demand fluctuations. They do this by absorbing information from the environment and incorporating it into their models. Collusive arrangements often collapse when cartel members are incapable of distinguishing (i) purposeful deviations from high collusive prices by members of the cartel (undercutting) from (ii) price changes that arise from shocks to market and demand conditions. In that sense, improvements in demand-prediction algorithms help firms better distinguish between the two cases and sustain collusive arrangements. Economic theory suggests that, in many cases, improvements in demand prediction could increase prices and assist anti-competitive behaviour.
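One way to see the screening role of better demand prediction is the following toy sketch (the pricing rule and thresholds are entirely hypothetical): a cartel member’s price drop is flagged as a deviation only if it is larger than what observed demand conditions can explain.

```python
def expected_price(demand_index, cost):
    """Hypothetical pricing rule the cartel expects members to follow:
    price moves one-for-one with cost and half-for-one with demand."""
    return cost + 0.5 * demand_index

def classify_price_drop(observed_price, demand_index, cost, tolerance=0.25):
    """Attribute a low price either to weak demand (no punishment needed)
    or to deliberate undercutting (grounds for punishment)."""
    benchmark = expected_price(demand_index, cost)
    if observed_price >= benchmark - tolerance:
        return "demand shock"      # drop explained by market conditions
    return "possible deviation"    # drop too large to blame on demand

# A price of 2.0 when demand is weak (index 2, cost 1) is consistent with
# conditions; the same price when demand is strong (index 6) is not.
weak = classify_price_drop(2.0, demand_index=2, cost=1.0)
strong = classify_price_drop(2.0, demand_index=6, cost=1.0)
```

The better the demand model, the tighter the tolerance can be set without misclassifying ordinary demand shocks as cheating, which is precisely what makes the collusive arrangement easier to sustain.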
Second, new types of algorithms are able to significantly speed up firms’ pricing response times. Intuitively, this means that the algorithms can both detect and punish deviations from high-price tacitly collusive equilibria much faster than in the pre-algorithmic world. More concretely, algorithms may simplify the process for companies to monitor their rivals’ pricing choices, swiftly identify undercutting, and penalize those who deviate from the established tacitly collusive pricing norms. If undercutting yields the deviating company only a brief surge in profits before prices collapse, it diminishes the likelihood of companies defecting and disrupting the tacitly collusive agreement. As such, prices should increase relative to non-algorithmic price setting.
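The intuition can be summarized with a standard repeated-game condition (textbook logic; the notation here is ours, not any particular paper’s). Suppose a deviation earns per-period profit $\pi_D$ for the $k$ periods it takes rivals to detect and react, after which profits fall to a punishment level $\pi_P$ forever, while sticking to the collusive price earns $\pi_C$ each period, with $\pi_D > \pi_C > \pi_P$. With discount factor $\delta$, collusion is sustainable when

```latex
\frac{\pi_C}{1-\delta} \;\ge\; \frac{1-\delta^{k}}{1-\delta}\,\pi_D \;+\; \frac{\delta^{k}}{1-\delta}\,\pi_P .
```

Faster algorithmic detection lowers $k$, shifting weight on the right-hand side from the deviation profit $\pi_D$ to the punishment profit $\pi_P$, so the condition holds for a wider range of discount factors.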
Third, even without explicit communication between algorithms, a class of “reinforcement learning algorithms” experiments with random actions in order to explore what outcomes the algorithm can achieve. This means that sometimes the algorithm experiments with actions that are unprofitable for it in the short term, but that may be very profitable in the long run. Through this process of experimentation, the algorithms may learn to simulate punishment strategies (and expectations about rivals’ punishment strategies in response to their own deviations), which are precisely this form of short-run unprofitable action. Punishing a competitor for deviating in a previous period may itself be unprofitable, because punishment profits are low for both firms. However, a sophisticated forward-looking algorithm may understand the long-run importance of such strategies. A recent study using computer simulations of pricing algorithms in a simple environment found evidence for this mechanism: forward-looking reinforcement-learning algorithms set supracompetitive prices, and deviations from such high prices are punished by competitors in such a way as to make the original deviation unprofitable.
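The sense in which punishment makes the original deviation unprofitable can be seen in a stylized discounted-payoff comparison (hypothetical profit numbers, grim-trigger punishment; not the study’s actual parameters):

```python
# Illustrative per-period profits in a symmetric duopoly under
# collusion, deviation, and punishment (hypothetical numbers).
PI_COLLUDE = 5.0    # each firm's profit at the high collusive price
PI_DEVIATE = 8.0    # one-period gain from undercutting the rival
PI_PUNISH = 2.0     # profit during the price-war punishment phase
DELTA = 0.9         # per-period discount factor

def discounted(stream, delta=DELTA):
    """Present value of a stream of per-period profits."""
    return sum(p * delta**t for t, p in enumerate(stream))

HORIZON = 200  # long finite horizon approximating an infinite game

# Always cooperate: collusive profit every period.
cooperate = discounted([PI_COLLUDE] * HORIZON)

# Deviate once, then suffer grim-trigger punishment ever after.
deviate = discounted([PI_DEVIATE] + [PI_PUNISH] * (HORIZON - 1))
```

With these numbers the one-period deviation gain is swamped by the discounted punishment, so a sufficiently forward-looking algorithm that has learned this payoff structure will stick to the high price.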
Coordination Using Same Algorithmic Software
These three explanations are all based on the notion that human actors choose to employ algorithms that manage to learn to collude more efficiently than could humans. It is important to distinguish these accounts from two other ways in which algorithmic pricing software might facilitate collusion: (i) humans may deliberately employ algorithmic software to coordinate a collusive arrangement, and (ii) humans may ‘unintentionally’ employ the same algorithm that combines their pricing info in a way that leads to collusion. We discuss each in turn below.
The first method is referred to as the Messenger scenario. In this scenario, participants communicate to each other that they will deliberately delegate pricing to the same algorithm. This is exactly equivalent to traditional forms of collusion, but with the algorithm employed as a tool to help with price setting by cartel participants, who have all agreed that this is the form the collusive arrangement will take.
The first US case involving AI-driven algorithmic pricing concerned an instance in which humans deliberately employed algorithmic pricing software to coordinate a collusive arrangement. In United States v. Topkins, the defendant, David Topkins, was accused of conspiring with other poster sellers to fix and maintain prices on Amazon.com Inc’s Amazon Marketplace website for third-party sellers from September 2013 to January 2014. The companies involved directly discussed the price of posters and, more importantly, agreed to employ an algorithm to coordinate their activity. Perhaps not surprisingly, Topkins agreed to plead guilty to conspiring to fix the prices of posters sold online and to pay a fine. This is consistent with the fact that, when establishing liability in cases of non-algorithmic (human) collusion, most antitrust authorities have focused on explicit communication between rival companies.
The second way is more complex than the first, since there is no explicit communication between the competitors with respect to the algorithmic adoption decision or collusion. Instead, competitors in the market implicitly understand that relegating their pricing to, and pooling their information with, a common algorithm is going to increase their prices and profits above the competitive equilibrium. It is not difficult to imagine such a case. In many ways, the scenario where a centralized algorithm sets prices for multiple competitors in a market relaxes the constraints economic theory places on the plausibility of algorithmic collusion. For instance, the central coordinated algorithm does not need to be forward looking, since in many cases there is relatively low risk that the companies relegating their pricing to the algorithm are going to unilaterally deviate from the pricing set by the algorithm. The algorithm also does not need to be particularly sophisticated—all it needs is to perform some form of static joint profit maximization.
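The point about static joint profit maximization can be illustrated with a stylized linear-demand duopoly (our own illustration with hypothetical parameters, not drawn from any case):

```python
# Stylized linear-demand duopoly: demand for firm i is
# q_i = a - b*p_i + c*p_j, with zero marginal cost.
a, b, c = 10.0, 2.0, 1.0   # hypothetical demand parameters, b > c > 0

# Competitive (Nash) benchmark: each firm best-responds to its rival.
# The first-order condition gives p_i = (a + c*p_j) / (2b); imposing
# symmetry p_i = p_j and solving yields the Nash price below.
p_nash = a / (2 * b - c)

# A central algorithm pricing for both firms maximizes JOINT profit
# 2 * p * (a - (b - c) * p), a purely static problem with solution:
p_joint = a / (2 * (b - c))

markup = p_joint / p_nash   # joint pricing marks prices up over Nash
```

No forward-looking punishment logic is needed: simply internalizing the cross-price effect (the `c` term) pushes the common price above the competitive benchmark.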
In RealPage and Rainmaker, the allegations do not imply that the adopting firms deliberately set out to use the algorithms to facilitate collusion. That is, firms are not alleged to have communicated to each other their decisions to delegate decision-making to a third-party algorithm. Nonetheless, the outcome of rival firms adopting the same algorithm is similar to a hub-and-spoke arrangement, where there is no communication between spokes, but the hub intermediates.
Many industries feature their own set of pricing algorithms, produced by a relatively small number of suppliers. It is easy to imagine a case where multiple companies that compete in the same market would adopt the same algorithm. This is especially likely, since many third-party developers discuss previous adoption cases on their websites. Sometimes these cases refer to particular companies that potential adopters may be competing with. In some cases, software providers seemed to market their products in ways that let firms know that their data may be merged with those of other users / rivals, raising potential questions about the intent of adoption.
Although there is an intuitive argument for the widespread usage of common algorithms as implicit coordinating devices that raise prices for consumers, economic theory suggests that incentives to do so may not be so clear cut. A recent paper develops a theoretical model in which the designer of an algorithm takes into account that the algorithm may be adopted by multiple firms in the same market, and effectively “compete against itself.” The algorithm’s designer chooses to maximize their own payoffs rather than the payoffs of each of the adopters. This could create incentives for the algorithm to coordinate prices across adopters and increase adopter prices and profits, which would then be appropriated by the developer through higher fees. The paper shows that prices do not actually increase in such a setting, since the algorithm’s designer would like more firms to adopt the algorithm, and higher market prices also increase payoffs to non-adopters and therefore provide incentives to avoid adoption. Rather, compared to a non-algorithmic individual price setting equilibrium, prices become more sensitive to demand fluctuations, which allows algorithmic adopters to better exploit strong demand and extract more consumer surplus. Profits are therefore higher and consumer surplus is lower, even though average market prices do not change.
A follow-up paper further investigates price setting using a common algorithm, showing that there are distinct features of coordinating pricing algorithms that increase consumer prices and are akin to traditional “hubs.” The paper proposes a plus factor for identifying unlawful agreements between third-party pricing-software developers and the firms that adopt their software: examine whether the average price charged by adopters increases with the adoption rate. The test is centered on the idea that a company designing its own pricing algorithm will do so to maximize its profits from employing the algorithm, while a third-party developer will design the algorithm to maximize its profits from selling it. If adoption decisions are independent, a third party’s optimal pricing algorithm sets the average price at a competitive level. Conversely, when adoption decisions are coordinated, the third party maximizes the profit from adoption, leading it to set a higher average price rather than maximizing the contrast between adopting and not adopting. This coordinated-adoption scenario mirrors a partial cartel in which only a fraction of firms are members. The third party resembles a cartel manager, selecting the collusive price. The higher the fraction of firms that are part of the cartel, the more inclusive it is, and the higher the collusive price, since there are fewer non-cartel members to undercut the elevated average price set by adopters. Hence, with coordinated adoption, the collusive price implies an average price that rises with the adoption rate, whereas with independent adoption the pricing algorithm maintains an average price at the competitive level, irrespective of the adoption share.
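The proposed test can be sketched as a simple screen on market-level data (the numbers below are hypothetical, and the paper’s formal test is model-based; this only conveys the direction of the check):

```python
# Hypothetical market-level observations: (adoption rate of the common
# algorithm, average price charged by adopters in that market).
markets = [
    (0.2, 10.1),
    (0.4, 10.0),
    (0.6, 10.9),
    (0.8, 11.4),
    (1.0, 12.2),
]

def ols_slope(xys):
    """Least-squares slope of y on x (with an intercept)."""
    n = len(xys)
    mx = sum(x for x, _ in xys) / n
    my = sum(y for _, y in xys) / n
    cov = sum((x - mx) * (y - my) for x, y in xys)
    var = sum((x - mx) ** 2 for x, _ in xys)
    return cov / var

slope = ols_slope(markets)

# Under the proposed plus factor, a positive slope (average adopter
# price rising with the adoption rate) is consistent with coordinated
# adoption; a flat slope is consistent with independent adoption.
flag_coordinated = slope > 0
```
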
Empirical Research on the Impact of Algorithmic Pricing
Given the conflicting theories regarding the impact of algorithmic pricing, economists have turned to empirical research to examine its actual effects. The best available empirical evidence on the impact of algorithms on prices, and on whether they facilitate coordination, can be found in two recent papers. The first investigates the consequences of widespread implementation of algorithmic-pricing software using data from the German retail gasoline market, where industry reports indicate that AI-powered algorithmic pricing was introduced in 2017. Evaluating the impact of station-level adoption is challenging because companies’ decisions to adopt such software are not observable. Given the general reluctance of companies and stations to disclose their adoption of algorithmic pricing technology, the study developed a novel method to pinpoint stations that became algorithmic users, relying on price data accessible through Germany’s transparency regulations: it was possible to observe stations making sharp changes in their pricing behaviour (e.g., the number of price changes per day) around the period when AI-powered pricing became available. After identifying adopters, the study presents evidence on the impact of adoption on competition. The results indicate a substantial increase in profit margins, which is not visible for monopolist adopter stations (i.e., stations without nearby competitors). The increase in prices and margins after adoption is particularly noticeable in oligopolistic markets where all competitors had adopted AI pricing. Although there is no evidence that the stations adopted the same algorithm, the data do pass the main test for unlawful agreement described above: the average market price rises as the share of adopters increases, e.g., as the market moves from incomplete to complete adoption.
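The identification idea can be sketched as follows (a simplification with hypothetical numbers; the study’s actual classification method is more involved): flag a station as a likely adopter if its daily repricing frequency jumps sharply around the time the software became available.

```python
# Sketch of the identification idea: a station is a likely adopter if
# its mean number of daily price changes jumps sharply after the
# software became available. The threshold is a hypothetical choice.

def is_likely_adopter(daily_changes_before, daily_changes_after,
                      jump_threshold=1.5):
    """Compare mean daily price changes before vs after availability."""
    before = sum(daily_changes_before) / len(daily_changes_before)
    after = sum(daily_changes_after) / len(daily_changes_after)
    return after > jump_threshold * before

# Hypothetical stations: one roughly doubles its repricing frequency
# after the software's release; the other keeps the same behaviour.
adopter = is_likely_adopter([4, 5, 4, 6], [9, 11, 10, 12])
non_adopter = is_likely_adopter([4, 5, 4, 6], [5, 4, 5, 5])
```

This works only because Germany’s transparency rules make every station’s prices, and hence repricing frequency, publicly observable.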
In that sense, if there were any additional evidence of competitors using the same algorithm in the German gasoline retail market, it would indicate some likelihood that algorithmic pricing technology facilitated coordinated actions among market players.
The second paper empirically investigates the influence of algorithmic pricing on the US multi-family housing rental market. The authors consider two possible roles for algorithms: (i) that they help landlords better price as a function of demand conditions, and (ii) that they facilitate collusion. They hand-collected information on the dates on which landlords adopted algorithmic pricing software, and then investigated the impact of such adoptions on rental prices. They found that by 2019 almost all of the large property-management companies in the US, and roughly a quarter of the buildings in their sample, had adopted pricing algorithms. Their findings suggest that such adoption allows landlords to set prices that are more responsive to demand conditions. Looking across markets, they found that cities with greater levels of adoption featured higher prices and lower occupancy. These results did not allow the authors to conclude whether the price increases that followed adoption were due to coordination or to better pricing. In an attempt to get at this issue, the authors conducted tests for changes in competitive conduct that have been proposed in the literature. The results provide support for models of some / imperfect coordination across landlords over own-profit maximization.
Takeaways for competition policy
Automated pricing tools have the potential to learn to synchronize prices, suggesting that if extensively adopted, they could facilitate collusion and lead to increased prices. This will be particularly problematic in settings where pricing data for the entire market are easily accessible (e.g., under price transparency regulations, or for online markets with high degrees of price visibility).
The challenge for policy makers is to balance encouraging companies to make full use of pricing algorithms (e.g., using all accessible data in their pricing strategies) against the risk that this practice might foster implicit agreements or collusion, leading to extended periods of elevated prices and profits. Pricing is hard, and algorithms help businesses consider a broader spectrum of data for more effective price setting, increasing market efficiency. It is pivotal to discern which components (rules) within a pricing algorithm go beyond improved data aggregation and information extraction and verge into aiding collusion. This distinction between efficiency gains and facilitation of collusion is vital to harnessing the advantages of this innovative technology while safeguarding competition from adverse effects.
In the realm of algorithmic pricing, companies may operate without direct communication or even a tacit agreement, yet consumers might still encounter elevated (collusive) prices. Consequently, the independent adoption of algorithmic pricing presently may not breach antitrust laws targeting competitor agreements in most jurisdictions. In determining culpability regarding non-algorithmic collusion, most antitrust authorities have prioritized evidence of explicit communications among rival companies, as compared to information about the rules constituting the collusive arrangement, or the consequences of cartel arrangements, such as increased prices. This prioritization arises from both explicit communication being a direct proof of collusion, and from the challenge of definitively proving that heightened prices stem from collusive behavior. Additionally, when firms or individuals coordinate, collusive pricing rules are rarely explicit, and so are difficult to build a case around. Altogether, in the absence of explicit communication, mounting successful collusion cases is challenging.
The recent case against Rainmaker was dismissed by the judge precisely for reasons related to the lack of explicit communication between competitors in algorithmic adoption. The judge stated that plaintiffs failed to “plausibly allege Defendants entered into an agreement.” Further, the judge’s opinion also states that it is “impossible to infer that all Hotel Operators agreed to use the same” algorithm.
However, as discussed above, the marketing of pricing algorithms by software developers often includes cases and examples of previous adoption experiences and the effects of these adoptions on firm profits. In a concentrated market, it may be clear to a firm that its competitors have already adopted a particular algorithm. Even without any explicit discussion, this would allow the firms to coordinate on the same algorithm. While unilateral adoption of the same algorithm as a competitor does not violate section 1 of the US Sherman Act or equivalents in other jurisdictions, in some circumstances unilateral adoptions of algorithms could be challenged under abuse of dominance provisions in antitrust regulations.
Some economists propose a shift in focus for antitrust laws and the regulatory environment: rather than targeting explicit communication, attention should center on scrutinizing the collusive pricing rules embedded within automated/autonomous algorithms. Unlike collusion driven by humans, pricing strategies in autonomous algorithms are not concealed but explicitly outlined in the code. If specific rules are identified that heighten the likelihood of collusion, regulatory bodies could theoretically mandate their exclusion from algorithms. This could hold particular significance for firms utilizing the same algorithmic pricing software. For example, regulations might restrict how an algorithm may set prices based on rival prices. While allowing real-time adjustments in response to demand and supply factors, the pricing rule may restrict how an algorithm can respond to competitor prices, or information pooling across different competitors could be banned. Of course, this involves a delicate balance, since it is important to allow firms to integrate relevant additional information to enhance their pricing strategies without being punished. One way to do this might be to forbid rules that reward or penalize rivals based on their observed price setting, but allow firms to react to them in more ‘reasonable’ ways. We should note that the costs associated with monitoring and enforcing such a pricing mechanism may be extensive, both for the regulator and for the regulated firms. In addition, the idea of ‘reasonable’ competitive reactions is challenging to operationalize and get ‘right’ in practice, since these may vary with the circumstances the algorithm faces. Nonetheless, restricting the hard-coding of ‘trigger’ strategies into autonomous algorithms is one restriction regulators can straightforwardly set. Moreover, restricting information pooling across competitors in a given algorithm/third-party provider may be easier to implement and monitor.
If moving in the direction of regulating the operation of pricing algorithms, it would also be important to make clear to companies that they are responsible for their pricing, even if they delegate authority to an algorithm. This implies that companies must understand the actions of the algorithms they employ and the contents of their pricing rules. Specifically, they should discern whether the pricing rule solely factors in demand and supply dynamics or also includes competitor prices. Moreover, if competitor prices are indeed used by the algorithm, they should be aware of whether the algorithm’s rules incorporate a reward-punishment mechanism.
Furthermore, regulations could mandate risk assessment and harm identification evaluations before and after, respectively, the deployment of algorithms. This proposition aligns with suggestions put forth by members of the French competition authority, John Moore, Etienne Pfister and Henri Piffaut, who suggest that firms might be required to test their algorithms before deployment against actual market conditions and to evaluate the impact of algorithms after they have been deployed.
Of course, if regulators or antitrust agencies are to determine whether problematic pricing rules are being employed and what their degree of harm will be ex ante or ex post, they need to overcome three challenges: (i) first, they have to be able to identify adopters of algorithmic-pricing software, (ii) second, they must be able to determine the pricing rules being employed that relate to rivals’ strategic variables, and (iii) third, they must be able to distinguish between rules that react to competitors’ prices in legitimate, competitive ways and rules that employ reward and punishment mechanisms for pricing deviations. Overcoming the first challenge requires performing a census of software adoption. Currently we have no idea which (if any) firms have adopted pricing software, or what software they selected. Overcoming the other two challenges will require employing agents trained in economics and algorithmic coding. Naturally, artificial intelligence may be part of the solution and help authorities with these challenges.