Notwithstanding these observed efficiencies, the use of sophisticated pricing algorithms has led commentators to voice two primary concerns: (i) that price fixing and collusion—particularly tacit collusion—may be more prone to occur and more difficult to detect; and (ii) that firms may use these algorithms to price discriminate in ways that may be considered inequitable and invasive.
Price Fixing and Collusion. Economic theory suggests that the use of pricing algorithms may lead to supracompetitive pricing across firms without an explicit agreement and even without human input or knowledge (i.e., that pricing algorithms may facilitate “tacit” collusion). Following this literature, antitrust authorities have raised concerns that:
- The use of the same third-party pricing algorithm tool by horizontal rivals may facilitate “hub and spoke” arrangements where, under certain conditions, the third-party pricing algorithm (the “hub”) may be in a position to enable competitors (the “spokes”) to coordinate pricing; and
- The use of different pricing algorithms by horizontal rivals may also lead to scenarios where firms jointly increase prices. In theory, in some of these scenarios, the algorithms may have autonomously learned to collude.
The first concern has been raised in a number of recent complaints related to hotel room prices and apartment rental rates, including Gibson v. MGM Resorts International, Fabel v. Boardwalk 1000, and In re RealPage, Inc., Rental Software Antitrust Litigation. We devote the following discussion to the second concern, referred to below simply as “algorithm-facilitated tacit collusion.”
While algorithm-facilitated tacit collusion can occur in a number of scenarios, how likely it is to occur will depend on a number of factors, including the characteristics of the industry and of the pricing algorithm. The academic literature has studied the likelihood that pricing algorithms learn to collude or set supracompetitive prices: the evidence thus far is inconclusive (in large part because empirical evidence remains limited), but the question is a subject of continued analysis and discussion. Most of the existing research assessing the risk that independent AI-powered pricing algorithms will autonomously learn to collude relies on simulations in which simple pricing algorithms interact in controlled, synthetic environments. A few key themes have emerged from this research.
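To give a concrete sense of the style of simulation this literature relies on, the following is a minimal, hypothetical sketch: two independent Q-learning agents repeatedly choose prices from a small grid in a synthetic logit-demand duopoly and observe only their own profits. The demand specification, parameter values, and price grid are illustrative assumptions, not a reproduction of any published experiment.

```python
# Minimal sketch (illustrative assumptions throughout): two independent
# Q-learning pricing agents interacting in a synthetic logit-demand duopoly.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 5)        # discrete price grid (assumed)
n_actions = len(prices)
cost, a, mu = 1.0, 2.0, 0.25             # marginal cost and demand parameters (assumed)
alpha, delta, eps_decay = 0.1, 0.95, 1e-5

def profits(p_i, p_j):
    """Single-period logit-demand shares and profits for the two firms."""
    u = np.exp((a - np.array([p_i, p_j])) / mu)
    share = u / (1.0 + u.sum())
    return (np.array([p_i, p_j]) - cost) * share

# One Q-table per firm; the state is last period's pair of price indices.
Q = [np.zeros((n_actions, n_actions, n_actions)) for _ in range(2)]
state = (0, 0)

for t in range(500_000):                 # learning takes many, many periods
    eps = np.exp(-eps_decay * t)         # exploration rate decays over time
    actions = []
    for i in range(2):
        if rng.random() < eps:
            actions.append(int(rng.integers(n_actions)))   # explore
        else:
            actions.append(int(np.argmax(Q[i][state])))    # exploit
    pi = profits(prices[actions[0]], prices[actions[1]])
    next_state = (actions[0], actions[1])
    for i in range(2):
        target = pi[i] + delta * Q[i][next_state].max()
        Q[i][state][actions[i]] += alpha * (target - Q[i][state][actions[i]])
    state = next_state

print("Prices the agents settle on:", prices[state[0]], prices[state[1]])
```

Whether agents of this kind settle at, near, or above competitive price levels depends heavily on design choices such as the state definition, exploration schedule, discount factor, and demand environment, which is one reason the simulation evidence is difficult to generalize to real-world markets.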
First, economic theory predicts that collusion (whether tacit or explicit) becomes harder to sustain as the number of competitors increases. It is unlikely (though not impossible) that the use of pricing algorithms would change this calculus, especially in unconcentrated markets, in markets with entry over time, or in markets with low barriers to entry. There may be many different ways for firms to set supracompetitive and profitable prices, and different firms may benefit to different degrees. Thus, algorithms may need to be specifically programmed to coordinate and settle on the same collusive outcome, which would become harder to sustain as the number of competitors increases.
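The underlying logic can be summarized with a textbook repeated-game benchmark. If n symmetric Bertrand competitors share the monopoly profit while colluding, and any deviation triggers reversion to (approximately) zero profits, collusion is sustainable only if each firm prefers its share of the discounted collusive stream to the one-shot gain from undercutting. Stated as a worked condition (a standard benchmark, not a model of any particular algorithm):

```latex
% Grim-trigger condition for n symmetric Bertrand firms sharing monopoly profit \pi^m
\frac{1}{1-\delta}\cdot\frac{\pi^{m}}{n} \;\ge\; \pi^{m}
\qquad\Longleftrightarrow\qquad
\delta \;\ge\; 1-\frac{1}{n}
```

The minimum discount factor required to sustain collusion therefore rises toward one as n grows, which is the sense in which additional competitors make collusion, algorithmic or otherwise, harder to sustain.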
Second, pricing algorithms may help firms more accurately predict periods of high demand where firms can profitably (and independently) set higher prices. In such periods, a cartel might want to set even higher (supracompetitive) prices. Paradoxically, however, in such periods, incentives to deviate from a collusive price are highest: the profits from deviation during this higher demand period would exceed losses incurred from punishments for deviation, because those punishments would end up occurring in lower demand periods. This could serve to limit a cartel’s ability (and thus the incentive of pricing algorithms that have learned to collude) to set supracompetitive prices in the first place.
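This trade-off can also be stated as a standard incentive constraint from the economic literature on collusion over demand fluctuations; the notation below is illustrative:

```latex
% Collusion in the current demand state s_t is sustainable only if the one-shot
% gain from undercutting does not exceed the expected discounted loss from
% future punishment. The left-hand side scales with current demand, while the
% right-hand side depends on average future demand.
\underbrace{\pi^{D}(s_t)-\pi^{C}(s_t)}_{\text{gain from deviating today}}
\;\le\;
\underbrace{\frac{\delta}{1-\delta}\,\mathbb{E}\!\left[\pi^{C}(s)-\pi^{P}(s)\right]}_{\text{expected future loss from punishment}}
```

Because the left-hand side is largest when current demand is high while the right-hand side does not grow with it, the constraint binds precisely in high-demand periods, capping the collusive price that can be sustained when supracompetitive pricing would otherwise be most profitable.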
Third, a developer of a commercial pricing algorithm may have an incentive to design an algorithm that avoids collusive pricing. After all, a new firm that is considering whether to adopt a pricing algorithm that sets supracompetitive prices may find it more profitable to undercut the prevailing price rather than to adopt the algorithm.
Fourth, even where pricing algorithms may learn to collude, they must undergo a learning process, which may be slow (on the order of hundreds of thousands of price-setting periods), as algorithms must learn from each price change based on how the marketplace reacts and calibrate and re-calibrate accordingly.
Still, to the extent that antitrust authorities elect to bring enforcement actions related to algorithm-facilitated tacit collusion, it is critical that they are prepared to (i) accurately identify circumstances in which pricing algorithms result in firms jointly increasing prices; and (ii) explain how those circumstances fit within antitrust law.
The traditional tools used to assess suspected collusive arrangements, such as the review of evidence of communications and the assessment of the price effects of such communications, may not be effective, because pricing algorithms may lead firms to jointly increase prices without explicit instructions or an explicit agreement. In the United States, firms obtaining supracompetitive prices in the absence of an explicit agreement (i.e., tacit collusion) are not necessarily in violation of antitrust law.
Connecting what looks like supracompetitive pricing to alleged algorithm-facilitated tacit collusion could be complicated due to challenges in distinguishing the effects of (tacit) collusion from legitimate business decisions. For example, the adoption of pricing algorithms may lead to higher prices on average simply because the algorithms may reduce the incidence of human error and biases or may unilaterally determine that higher prices would lead to higher profits for a given set of market conditions.
The DOJ, however, appears to have recently taken the stance that use of a shared pricing algorithm may, under certain conditions, constitute a price fixing scheme because it joins “competing [firms] together in the pricing process…. It makes no difference that the confidential pricing information was shared through an algorithm rather than through ‘a guy named Bob.’” There have not, on the other hand, been similar statements on firms’ use of different pricing algorithms.
While auditing algorithms may be one possible solution to identify instances where algorithms are suspected of facilitating joint price increases, audits may not always yield meaningful insights because the rules of the algorithm or intent of the programmers may not be decipherable. Some algorithms, for example, rely on “deep learning technology,” which, while very powerful, does not provide programmers with visibility into the decision-making process that leads to a particular outcome. Even the employees instructing and monitoring an autonomous algorithm may not know the details underlying specific decisions undertaken by the algorithm.
As more and more firms turn to AI-powered pricing algorithms in a given industry, antitrust authorities will be faced with the further challenge of establishing a competitive benchmark against which to assess harm to consumers. Identifying the correct—or even the likely—but-for world may become increasingly difficult as pricing technology improves: this is an area where careful economic analysis continues to be necessary.
Price Discrimination. The second concern raised by commentators related to the use of sophisticated pricing algorithms focuses on firms’ potentially enhanced abilities to price discriminate. Price discrimination is an economics term used to describe the practice of charging different prices for the same product or service to different customers or customer segments. For example, a firm may provide discounts to customers who buy in bulk (a common example of “second-degree price discrimination”) or specifically to students or loyalty program members (a common example of “third-degree price discrimination”).
Most of the open questions around algorithms and price discrimination, however, center on the concept of “first-degree price discrimination” (also known as “personalized pricing”), where a firm may charge different prices to individual consumers based on their individualized preferences and varying willingness to pay. This means that, compared to an environment with a “one size fits all” price, consumers who value a product more highly may end up paying more while others may end up paying less.
Algorithms that are created to develop extensive price menus based on consumers’ individualized preferences, purchasing histories, and backgrounds may facilitate this type of price discrimination. Despite the fact that perfect price discrimination is Pareto efficient, and despite common, well-accepted examples of first-degree price discrimination (e.g., airplane seat upgrades, the homebuyer market, ride-sharing services), such price discrimination has raised equity and privacy concerns, in part because its welfare effects may be context-specific. Even though such price discrimination increases total welfare, market structure and industry characteristics can affect the way in which the social surplus is allocated between consumers and firms in equilibrium. The privacy concerns that may arise are similar to those discussed in the context of targeted advertising, where consumer groups have expressed a desire to control the type and amount of data available to advertisers in generating targeted or personalized ads.
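A stylized numeric example helps fix ideas about how personalized pricing reallocates surplus. The willingness-to-pay values and cost below are arbitrary assumptions chosen only for illustration.

```python
# Stylized comparison of a single uniform price versus perfect (first-degree)
# price discrimination by one seller. All numbers are illustrative assumptions.
wtp = [10, 8, 6, 4, 2]   # each consumer's willingness to pay (assumed)
cost = 3                 # constant per-unit cost (assumed)

# Uniform pricing: the seller picks the single profit-maximizing price.
best_uniform = max(set(wtp), key=lambda p: (p - cost) * sum(v >= p for v in wtp))
buyers = [v for v in wtp if v >= best_uniform]
uniform_profit = (best_uniform - cost) * len(buyers)
uniform_cs = sum(v - best_uniform for v in buyers)

# Personalized pricing: every consumer valuing the product above cost is served
# and charged exactly their willingness to pay.
personalized_profit = sum(v - cost for v in wtp if v > cost)
personalized_cs = 0

print(f"Uniform price {best_uniform}: profit={uniform_profit}, "
      f"consumer surplus={uniform_cs}, total={uniform_profit + uniform_cs}")
print(f"Personalized pricing: profit={personalized_profit}, "
      f"consumer surplus={personalized_cs}, total={personalized_profit + personalized_cs}")
```

In this toy market, total surplus is higher under personalized pricing (more consumers are served), but all of it accrues to the firm, illustrating why the welfare assessment turns on how the surplus is split rather than on its size alone.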
For now, it is not clear that there is a need for antitrust enforcers to directly address personalized pricing in the context of competition law. Still, antitrust intervention could be necessary if economic analysis shows that personalized pricing has had a negative and persistent effect on consumers. The Robinson-Patman Act has been one of the ways through which the United States has tried to address potential harms from price discrimination in certain circumstances. Though the Act has been used only sparsely since the 1980s, President Biden and FTC commissioners have recently signaled a strong interest in reviving it, albeit ostensibly not due to concerns about the effect of AI-powered algorithms on firms’ abilities to personalize pricing.
Recommendation Algorithms
Firms that are able to collect data on their customers may use this data to provide recommendations for both new and returning customers. This may occur in a variety of areas and industries, ranging from music and video recommendations, to search engines and online retail, to financial services, social media, and even online dating. Firms have recognized that providing recommendations benefits consumers, as this can reduce search costs and facilitate product discovery.
AI-powered recommendation algorithms may improve a firm’s ability to provide relevant, efficient recommendations. For instance, Netflix’s current recommendation system employs machine learning and deep learning technologies in determining which movies or TV shows appear on a user’s home page. Improvements over time in the relevance of Netflix’s recommendations have, in turn, increased member retention. AI-powered algorithms may also be able to more flexibly and quickly update results in response to new data such as changes to product characteristics (e.g., prices, reviews, availability), further improving a firm’s recommendations.
Antitrust practitioners have recently become more interested in these recommendation algorithms in contexts where the firm offering the recommendation service also offers some of the recommended products. Firms often play such dual roles. (This has been historically true as well, including outside of the digital context.) For example, several video streaming services produce their own content and will recommend it alongside third-party content; consumers searching for products using an online retailer’s website may see results for private label products that are produced by that retailer; and app stores may recommend apps that were developed by third parties alongside those developed by the app store operator.
Some in the antitrust community have suggested that these firms may have the incentive to “self-preference,” or recommend their own products at the expense of competing, third-party products. The concern is that through self-preferencing, such firms may be able to use their algorithms to avoid competition on the merits, to foreclose downstream competitors, and generally to mislead consumers. These concerns have given rise to allegations in matters such as Rumble, Inc. v. Google LLC et al. and have been featured prominently in discussions about large technology companies in general.
Not surprisingly, lawmakers and regulatory bodies have enacted, and tried to enact, legislation to prevent self-preferencing conduct. For example, the EU Digital Markets Act (DMA) states that “the gatekeeper [defined in the DMA as a digital platform that meets minimum thresholds for revenues, monthly active end users, and yearly active business users] shall not treat more favourably, in ranking and related indexing and crawling, services and products offered by the gatekeeper itself than similar services or products of a third party.” The U.S. Congress has been considering the American Innovation and Choice Online Act, which addresses self-preferencing by larger firms and is focused on preventing self-preferencing that results in “material” harm to competition. The U.S. 2023 Merger Guidelines also suggest that the U.S. antitrust enforcement agencies would carefully assess whether a merger between a “platform operator” and “platform participant” would incentivize self-preferencing that would harm competition.
However, instances where a firm appears to recommend its own products first may not be instances of self-preferencing. Perhaps those products are, truly, more relevant for consumers: better designed, higher quality, lower price, or otherwise a better fit for a given search query. For example, a recent academic study analyzing Amazon’s recommendation algorithm finds that while this algorithm may sometimes appear to favor Amazon’s own products, the algorithm’s ranking instead reflects consumer preferences for these products and increases consumer surplus by $9 per product per month.
Furthermore, there is debate about whether incentives to self-preference exist in the first place. For example, by self-preferencing inferior products, a platform (an intermediary that connects two or more sides of a marketplace) may degrade the prices, quality, or variety of the products available on the platform, which could drive consumers away. This could reduce the platform’s ability to compete against other platforms and may encourage entry by new platforms with better offerings. Indirect network effects could lead to a negative feedback loop in which a gradual departure of participants from one side of the platform induces participants on the other side of the platform to leave, and so on.
Identifying and detecting true instances of self-preferencing is complex and costly. Without access to the inner workings of an algorithm, competition authorities and plaintiffs may not be able to definitively determine anticompetitive intent. As the AI underlying these algorithms improves, it will become even more difficult for antitrust authorities to detect true self-preferencing behavior. This heightens the importance of economic analysis in evaluating whether self-preferencing (i) occurred; (ii) foreclosed competition; and (iii) (even in the absence of foreclosure) resulted in harm to consumers.
A number of suggestions have been proposed to assess whether a given recommendation algorithm is providing manipulated results (i.e., whether the metrics the algorithm uses to rank products incidentally or purposely favor a given firm), with various strengths and weaknesses.
For example, some commentators have suggested identifying comparator algorithms that serve the same purpose but that are not suspected of self-preferencing, and comparing results from those algorithms with the one at issue (e.g., comparing search results between search engines). However, doing so would require the researcher to find a comparator algorithm that unarguably provides recommendations that are relevant to a consumer’s search query. Yet, discerning consumer intent behind each search query is a complex question; in fact, recommendation algorithms compete by innovating and continuously trying to improve their ability to do just that. Similarly, different algorithms may be trained on or updated with different data, depending on the types of users that use them. As a result, relying solely on comparisons between algorithms’ recommendations could be misleading.
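As a purely illustrative sketch of what such a comparison might involve, the snippet below measures the overlap between the top results of the algorithm at issue and a comparator algorithm for the same query, along with each algorithm's share of first-party results. The function names and toy data are hypothetical, and, for the reasons above, divergence between the two lists would not by itself establish self-preferencing.

```python
# Hypothetical sketch: compare top-k results from the algorithm under review
# ("focal") with a comparator algorithm for the same query. Item identifiers
# and result lists are invented for illustration.
def topk_overlap(focal_results, comparator_results, k=10):
    """Share of the focal algorithm's top-k items also in the comparator's top-k."""
    focal_top = set(focal_results[:k])
    comp_top = set(comparator_results[:k])
    return len(focal_top & comp_top) / k

def own_product_share(results, own_products, k=10):
    """Share of the top-k slots occupied by the platform's own products."""
    return sum(item in own_products for item in results[:k]) / k

# Toy example (hypothetical data)
focal = ["own_A", "third_1", "own_B", "third_2", "third_3"]
comparator = ["third_1", "third_2", "own_A", "third_4", "third_3"]
own = {"own_A", "own_B"}

print("Top-5 overlap:", topk_overlap(focal, comparator, k=5))
print("Own-product share (focal):     ", own_product_share(focal, own, k=5))
print("Own-product share (comparator):", own_product_share(comparator, own, k=5))
```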
Commentators have also suggested that self-preferencing can be identified by comparing search query results that appeared before and after the alleged initiation of self-preferencing conduct. However, such analyses could be subject to a number of confounding factors, which would be increasingly difficult to identify as the algorithms increase in complexity. Among other challenges, these analyses would need to distinguish between adjustments to an algorithm that allowed a new product to be ranked (a new product might otherwise not have been ranked given a lack of historical data on sales and consumer preferences), adjustments that reduced frictions for consumers (this could include, for example, allowing consumers to quickly and easily access the suite of Google services from the Google search page), and adjustments meant to foreclose competitors.
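At its simplest, such a before/after analysis might track the platform's own-product share of top results around the date of the alleged change, as in the hypothetical sketch below; any observed shift would still need to be disentangled from the confounding factors described above.

```python
# Hypothetical before/after sketch: average share of top-k result slots held by
# the platform's own products, before and after an assumed cutoff date. Field
# names, the cutoff, and the toy query log are illustrative assumptions.
from datetime import date
from statistics import mean

def own_share(results, own_products, k=10):
    return sum(item in own_products for item in results[:k]) / k

def before_after_shares(query_log, own_products, cutoff, k=10):
    """query_log: list of (query_date, ranked_result_list) tuples."""
    before = [own_share(r, own_products, k) for d, r in query_log if d < cutoff]
    after = [own_share(r, own_products, k) for d, r in query_log if d >= cutoff]
    return mean(before), mean(after)

# Toy data
log = [
    (date(2023, 1, 5), ["third_1", "own_A", "third_2"]),
    (date(2023, 1, 20), ["third_1", "third_2", "own_A"]),
    (date(2023, 3, 2), ["own_A", "own_B", "third_1"]),
    (date(2023, 3, 9), ["own_A", "third_1", "own_B"]),
]
pre, post = before_after_shares(log, {"own_A", "own_B"}, cutoff=date(2023, 2, 1), k=3)
print(f"Own-product share before: {pre:.2f}, after: {post:.2f}")
```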
Decision-Making Algorithms
In many ways, decision-making algorithms provide the same type of service to firms that recommendation algorithms provide to consumers: they provide options to allow an agent to meet specific goals more quickly. In instances where the sheer volume and constant influx of data make decision-making a daunting, resource-intensive task, AI-powered algorithms can sort through, organize, process, and interpret data in ways that may provide tremendous efficiencies for firms. For example, firms may want to rely on AI-powered algorithms to automate, streamline, and simplify hiring decisions by predicting which candidates may perform well in a given job. Medical providers may want such algorithms to reduce diagnostic errors, proactively manage patient health, and sort through patient medical histories, medical statistics, and information about medical advances to provide diagnoses or recommend treatment decisions. Banks may use algorithms to quickly process data, update for new information, and then generate decisions on loan applicant creditworthiness.
A key concern voiced in relation to these types of algorithms focuses on their potential to exacerbate societal inequities. This has been found to happen in at least three well-studied ways, even when the algorithm developers or users have no intention of creating or perpetuating bias.
First, if the data that the algorithm is trained on reflect patterns of bias, the algorithm’s recommendations may also reflect patterns of bias. For example, predictive policing algorithms may be susceptible to bias when they are trained on arrest data that is the result of discriminatory practices. As another example, an AI-powered algorithm meant to identify and rank candidates for traditionally male-dominated roles that is trained on data reflecting predominantly male hires may show patterns of prejudice against female applicants. Matters such as United States v. Meta Platforms, Inc. (related to the delivery of housing advertisements) and Louis et al. v. SafeRent Solutions, LLC et al. (related to the screening of potential renters) raised similar questions.
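The mechanism in the hiring example can be reproduced in a few lines: when historical hiring outcomes used as training labels favored one group, a model fit to those labels tends to reproduce that preference, even without access to the protected attribute, whenever another feature proxies for it. The data below are synthetic and purely illustrative.

```python
# Synthetic illustration: a model trained on historically biased hiring labels
# reproduces the bias even though gender is never given to the model, because
# another feature correlates with it. All data are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, n)                  # 0 = female, 1 = male (held out of the model)
skill = rng.normal(0, 1, n)                     # true job-relevant ability
proxy = 0.8 * gender + rng.normal(0, 0.5, n)    # correlates with gender, not with skill

# Historical labels: past hiring favored men independently of skill.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, proxy])             # the model never sees gender directly
model = LogisticRegression().fit(X, hired)

# Average predicted hire probability by group.
scores = model.predict_proba(X)[:, 1]
print("Mean predicted hire probability, men:  ", round(float(scores[gender == 1].mean()), 3))
print("Mean predicted hire probability, women:", round(float(scores[gender == 0].mean()), 3))
```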
Second, even if the data the algorithm is trained on do not reflect patterns of bias, if they are not representative or otherwise incomplete, the algorithm may still make biased recommendations. This issue predates algorithms but could be particularly problematic in the AI context given the speed with which the technology is evolving.
Third, the design of the algorithm itself may inadvertently perpetuate bias. For example, an algorithm may be set up to predict patient health needs by predicting future healthcare costs. But, as a 2019 academic study demonstrated, because of systemic issues that caused unequal access to care, minority patients have historically spent less on healthcare compared to white patients. This led to algorithmic predictions that minority patients were “healthier” than white patients who were, in fact, equally sick.
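The proxy-label problem identified in that study can be illustrated with a toy calculation: if two groups have identical underlying health needs but one historically spends less because of unequal access to care, an algorithm that predicts cost, even perfectly, will score that group as lower “risk.” The numbers below are invented solely for illustration.

```python
# Toy illustration of the cost-as-proxy problem. All numbers are invented.
true_need = {"group_A": 100, "group_B": 100}       # identical underlying sickness
access_factor = {"group_A": 1.0, "group_B": 0.7}   # unequal access to care (assumed)

# Observed historical spending reflects need scaled by access to care.
observed_cost = {g: true_need[g] * access_factor[g] for g in true_need}

# A model trained to predict cost, and doing so perfectly, would assign:
predicted_risk = observed_cost

for g in true_need:
    print(f"{g}: true need = {true_need[g]}, predicted 'risk' (cost) = {predicted_risk[g]}")
# group_B scores lower despite being equally sick, so a cost-based targeting
# rule would flag its members for additional care less often.
```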
However, it would not be reasonable to assume that every instance in which one group appears to receive more favorable treatment than another reflects actual or intentional algorithmic bias. Such claims need to be assessed carefully, through empirical economic analysis. While such concerns have traditionally been investigated by agencies tasked with protecting consumers and ensuring equity, the antitrust community has recently become increasingly concerned with racial and ethnic bias. In a 2020 speech, then-acting FTC Chair Rebecca Slaughter proposed that antitrust statutes could be “deployed in the fight against racism” and suggested “focus[ing] on markets and anticompetitive practices where harm disproportionately falls on people of color.” Following this suggestion may entail prioritizing antitrust enforcement in some markets over others.
Conclusion
As more and more industries incorporate and embrace AI technologies, the competition landscape will continue to change rapidly, as will thinking about how best to protect competition. In the United States, for example, the DOJ and FTC have been focused on expanding their capabilities and developing their understanding of digital and AI-driven businesses, with a particular focus on generative AI. In 2023, the DOJ also reported having hired technology experts, economists with computer science and machine learning expertise, and other data scientists. Antitrust enforcers will need to continue developing the appropriate technological tools to identify potentially anticompetitive behavior while ensuring that concerns stemming from the novelty and complexity of AI-powered algorithms do not forestall our ability to benefit from them.