
ARTICLE

Antitrust in the Age of AI: Is the Consumer Welfare Standard Equipped to Address the Rise of Generative Artificial Intelligence?

Ryan Chapman


Abstract

Amidst a period of heightened antitrust scrutiny around today’s technology firms, the technology of tomorrow brings new questions and challenges to light. Can generative AI disrupt the technology landscape, heightening competition against incumbent firms? Or will generative AI enable rent-seeking behavior targeted at unsuspecting consumers? And, fundamentally, how does the emergence of generative AI contribute to debates around antitrust policy standards? A review of the relevant academic literature, regulatory publications, and market research reveals that generative AI’s interaction with antitrust policy exposes key value tradeoffs, even though generative AI is likely to lower technical barriers to entry in many fields and improve total productivity. Specifically, generative AI’s applicability as a tool for personalized marketing, price discrimination, and so-called ‘behavioral’ discrimination can both (a) lead to increased producer surplus at the expense of consumer surplus and (b) contribute to forms of customer persuasion that may be considered harmful – all depending on one’s perspectives on paternalistic policymaking and areas of distinction between consumer surplus and welfare. Furthermore, the likely existence of economies of scale in the market for generative AI highlights the challenge of grappling with incumbent firm power (and potential resulting harm) that can ultimately be tied back to efficiency. Although the long-standing assumption of consumer welfare standard-based antitrust is that market failure is the proverbial lesser of two evils, generative AI may make possible increasingly sophisticated price regulation – a capability that may prove useful to mitigate harm that currently eludes detection under consumer welfare-based antitrust.

Acknowledgements

I would like to thank Emilie Feyler (Principal at National Economic Research Associates) for providing comments on an earlier version of this article. I would also like to thank Sean Flaim (Trial Attorney at the US Department of Health and Human Services) for providing comments on the policy standards section of this article, and for his mentorship through the ABA Antitrust Economics Ambassador Program.

I. Introduction

Despite the pre-eminence of consumer welfare-based antitrust enforcement over the last five decades, alternative perspectives have gained traction amongst regulators in recent years. With increased enforcement activity across entities, updated merger guidelines, and the nominations of Lina Khan as chair of the Federal Trade Commission (FTC) and Jonathan Kanter as the assistant attorney general within the Department of Justice’s (DOJ) antitrust division, so-called ‘Chicago School’ antitrust faces an increasing challenge from the ‘Neo-Brandeisian’ school of thought. This movement, named after former Supreme Court justice Louis Brandeis, revitalizes elements of so-called ‘Harvard School’ antitrust and emphasizes competitive market conditions themselves as a primary focus for antitrust regulation.

Many of the firms facing increased scrutiny from US antitrust authorities are technology-focused firms with large customer bases and market share, oftentimes offering free or low-cost services to US consumers. These firms are a major focus of popular media, policy discussions, and litigation due to the unique challenge they pose to the historical precedent for antitrust law. Although such firms face criticism for their size and market concentration, others champion the great benefits these firms have to offer US consumers, innovating heavily and operating with economies of scale to provide products and services that consumers want and frequently at very low prices.

Amidst this period of renewed activity in US antitrust, emerging technologies have also grown in relevance. In January 2023, ChatGPT, OpenAI’s flagship chatbot built on a large language model (LLM), became the fastest consumer application in history to reach 100 million active users, surpassing this threshold just two months after its initial launch. In July 2024, OpenAI again made headlines when it launched a new product, SearchGPT, entering into direct competition with Google and other search engines. Although early academic discussions concerning AI regulation precede such events, the recent growth in public consciousness around generative AI has significantly expanded the conversation around AI regulation. Not only has academic literature on AI regulation expanded, but the federal government has also started several initiatives relating to AI regulation, safety, and innovation.

Now, just as antitrust regulators are coming to grips with previously disruptive technologies (i.e., the proliferation of platform-based technologies and predictive AI), the question arises as to how generative AI fits into the consumer welfare discussion. Amidst the ongoing debate about policy standards and the evolution of emerging technology, does generative AI complicate this discussion and introduce additional competition risks? Or can it serve to displace incumbent firms and actually resolve competition concerns through technological disruption?

As I will discuss in greater detail throughout this article, a review of the relevant academic literature, regulatory discussions, and market research reveals that the dynamics of the technology sector that are currently posing a challenge to US antitrust authorities – historically equipped with the consumer welfare standard – will likely be further compounded by the increased adoption of generative AI technology. Although the proliferation of generative AI products promises to be greatly welfare-enhancing, the markets for these products are likely to tend towards concentration, and the products themselves can enhance the ability of firms to engage in anticompetitive conduct.

To illustrate this point, this article will explore (1) the different types of harm and regulatory incentives under the consumer welfare standard as opposed to a competitive conditions standard, (2) the current organization of the market for generative AI technology and foundational AI models, (3) potential antitrust concerns within industries that adopt foundational AI models, and (4) potential antitrust concerns amongst leading AI firms themselves. Within the subsequent discussions of potential antitrust harms, this article also seeks to identify paths for further empirical research to better understand the existence and/or magnitude of such harms.

II. Evaluating Antitrust Policy Standards: Consumer Welfare and Competitive Conditions

Since the late 1970s, the consumer welfare standard has served as the basis for antitrust enforcement in the United States. Inspired by the work of the neoclassical ‘Chicago School’ of economics, the consumer welfare standard – as originally outlined in Robert Bork’s The Antitrust Paradox – establishes efficiency and consumer surplus as a primary focus of antitrust enforcement. The emphasis on consumer surplus (the difference between a consumer’s willingness to pay and the actual price of a good or service) generally allows for market concentration, so long as that market concentration results from greater efficiency and hence does not result in increased prices, decreased quality, and/or decreased quantity for consumers.

Embedded within the consumer welfare debate is also a question of terminology, given that economists and legal scholars may attach varying meanings to ‘consumer welfare’ and the related concept of consumer surplus. Economists define consumer surplus as the difference between consumer willingness to pay for a product and the actual price paid for the product. Despite the use of different terminology, the argument advocated by Bork and put into practice under the consumer welfare standard is largely related to this idea of consumer surplus and allocative efficiency.
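For concreteness, the standard textbook formulation (a general illustration, not drawn from any authority cited in this article) treats consumer surplus as the area between the demand curve and the price actually paid:

```latex
% Consumer surplus at price p, where P_d(q) is the inverse demand curve
% (consumers' willingness to pay for the q-th unit) and q(p) is the quantity demanded at p.
CS(p) \;=\; \int_{0}^{q(p)} \bigl( P_d(q) - p \bigr) \, dq
```

Under this formulation, conduct matters for the consumer welfare standard insofar as it raises p or reduces q(p); gains or losses that accrue only to producers fall outside the measure.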

Consumer welfare, on the other hand, is more difficult to properly measure and depends in part on value judgments regarding consumer preferences and consumer rationality. Although economists sometimes use consumer welfare synonymously with consumer surplus, this is typically based on the assumption that people behave rationally and also that their preferences should be respected. The classic example where these concepts diverge is addiction, where (a) consumers may be perceived to behave irrationally or (b) consumer preferences or utility may be considered adverse to a broader sense of well-being. As a result, consumer surplus can be understood as a valuable tool in welfare economics while still being conceptually distinct from welfare itself. For the sake of this article, the following sections will interpret the consumer welfare standard as it exists in practice, primarily related to consumer surplus and efficiency, rather than encapsulating a more holistic understanding of welfare.

Both under the consumer welfare standard and across other potential regimes, there are a variety of societal goals regulators may emphasize. As alternatives to the consumer welfare standard, there exist a total welfare standard (in which the surplus of both consumers and producers is considered) and a competition standard (in which ensuring ‘competitive conditions’ is given preference over efficiency), amongst others. The primary debate amidst these competing antitrust standards in the United States is whether the consumer welfare standard should be applied more rigorously or instead deprioritized in favor of a focus on competitive markets and conditions.

In evaluating this debate and how these standards might best apply to the market at the focus of this article (generative artificial intelligence), it is helpful to first identify the general strengths and weaknesses of each regulatory approach and the harms these standards are best equipped to address.

The strength of the consumer welfare standard lies in its simple concept: if no harm to consumers can be demonstrated that outweighs other surplus-enhancing effects, then enforcement is unnecessary. Proponents of the consumer welfare standard particularly emphasize that its simple structure minimizes any cost to consumers that may be incurred through regulation motivated by value-based arguments not directly tied to consumer harm. Further, the consumer welfare standard is particularly oriented towards empirical methods. Under this standard, a sufficient understanding of prices, quality, and variety of goods/services is necessary to show welfare effects with and without the alleged anticompetitive conduct. As a result, the consumer welfare standard is best equipped to deal with harm that both (a) concerns the US public as consumers, and (b) involves costs and benefits to consumers that can be shown through empirical means. If these issues are the predominant or sole focus of a given regulatory regime, then the consumer welfare standard is likely sufficient. However, if antitrust authorities, courts, or policymakers are interested in non-consumer related harms, or consumer harms that do not lend themselves to being readily shown through empirical means (e.g., political externalities), other standards may be preferable.

Proponents of a competitive conditions standard – seeking to expand the current scope of antitrust – would argue that emphasizing competitive processes themselves would better address these variable forms of harm. For instance, to the extent that harm to consumers may exist in markets for which actual impact on consumer surplus is difficult or impossible to prove empirically, the competitive conditions framework would seek to avoid any such harm through maintaining competition amongst firms who would then compete on price and/or quality aspects for consumers.

Further, given that individuals interact with the economy both as consumers and as producers, the competitive conditions standard looks to address potential harms that may fall on individuals as producers as well. For example, given that individuals participate in the production of goods and services through both labor and capital investment, the competitive conditions standard would seek to address distributional effects in both labor and investment. Although the focus on competitive conditions does not directly seek to calculate producer welfare effects, the focus on improving competitive conditions applies to all markets where individuals act as either buyers or sellers (including any upstream labor and capital markets). Thus, under the competitive conditions standard, regulators may seek to address both monopoly power in the markets for consumer goods and services and monopsony power (such as in labor markets). Additionally, proponents of the competitive conditions standard argue that political harms may also result from increased market concentration, as dominant firms gain influence over political or governance processes. To the extent that such harms exist, the competitive conditions standard may be better equipped to address (a) consumer harm that does not lend itself to empirical methods, (b) producer harms, and (c) externalities that impact citizens as members of the broader public.

However, it is important to clarify that proponents of the consumer welfare standard do not disregard the importance of these additional areas for harm (such as income inequality and political influence), but instead argue that antitrust itself is not the avenue to govern such issues. For example, a report from the Information Technology and Innovation Foundation in 2018 points to consumer protection laws, tax policy, vocational training, the Federal Communications Commission, campaign finance reform, and public subsidies as alternative tools or regulatory avenues to address issues around privacy, inequality, employment, and political transparency.

In addition to the relevant harms, it is also important to acknowledge the incentives and deterrence created by any such antitrust standard. As discussed above, proponents of the consumer welfare standard tout it as the optimal approach to minimize costly enforcement decisions for consumers while incentivizing innovation, whereas the competitive conditions standard may lead to enforcement decisions without empirical support for consumer or producer surplus effects. Under the consumer welfare standard, courts attempt to minimize the “error costs,” the costs associated with wrongfully preventing welfare-enhancing conduct and wrongfully allowing welfare-harming conduct, and “decision costs,” the costs of administering the regulatory and legal regime. One potential externality that may not be fully encapsulated by the consumer welfare standard is the deterrence effect of antitrust policy. Under the consumer welfare standard, any single case-by-case welfare analysis might not generate a regulatory intervention, even if such a regulatory decision would be helpful in preventing future abuses of dominance through deterrence.

With these considerations in mind, this article will seek to identify those issues (arising in the markets for generative AI) that may be properly addressed by the consumer welfare standard based on both the type of harm and any potential externalities or incentives that may result from regulatory intervention. As will be discussed in greater detail below, multiple countervailing policy priorities are likely to interact in the markets for generative AI. Although regulations around AI safety, risk management, and copyright enforcement are likely to prevent some forms of harm in the markets for generative AI, these regulations may also serve as an additional fixed cost and barrier to entry. While the consumer welfare standard may be an ideal approach to prevent harms to consumers that can be empirically evaluated (helping prevent costly enforcement decisions), it may be insufficient for other forms of harm to consumers, producer harms, and deterrence of future anticompetitive behavior.

III. The Market for Generative AI Technology

Although public discourse around artificial intelligence has particularly accelerated since the rollout of ChatGPT in late 2022, existing AI technologies have been deployed to great effect across the US economy (and globally) for decades. From internet search to financial services and from online retail to healthcare, so-called ‘weak’ AI technology has been put into practice across various industries. In 2023 alone, estimates for total revenue from AI technology reach as high as $136B USD. This total is expected to continue to grow rapidly in the coming years, with predictions for AI to surpass $800B USD in global revenue by 2030.

What made the advent of ChatGPT so buzzworthy was that it was one of the first generative AI models to be successfully deployed at scale. While AI had already been around for many years, these prior commercial applications consisted only of predictive AI models. Predictive models use machine learning techniques to train an artificial neural network (composed of computerized nodes) to learn functions (or algorithms) that predict a given outcome based on a given input. Generative models, on the other hand, use their training inputs to inform the creation of content, whether it be generating text, images, audio, or even video.
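The distinction can be made concrete with a deliberately toy example (my own sketch, not a description of any commercial system): a character-level bigram model learns a single table of transition frequencies, which can then be used either predictively (return the most likely next character) or generatively (sample new text from the learned distribution).

```python
import numpy as np

# Toy corpus; a real model would be trained on vastly larger datasets.
corpus = "the cat sat on the mat. the dog sat on the rug."
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

# Count character-to-character transitions and normalize into conditional probabilities.
counts = np.ones((len(chars), len(chars)))  # add-one smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

def predict_next(c):
    """Predictive use: map an input (the current character) to the most likely next character."""
    return chars[int(np.argmax(probs[idx[c]]))]

def generate(start, length, seed=0):
    """Generative use: sample a new sequence of characters from the learned distribution."""
    rng = np.random.default_rng(seed)
    out = [start]
    for _ in range(length):
        out.append(chars[rng.choice(len(chars), p=probs[idx[out[-1]]])])
    return "".join(out)

print(predict_next("t"))   # most likely continuation of "t" in this corpus
print(generate("t", 40))   # newly generated text, sampled rather than retrieved
```

Frontier generative models replace the frequency table with large neural networks trained on internet-scale data, but the basic contrast between predicting an outcome and sampling new content carries over.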

While predictive AI models allow firms and individuals to make sense of (and predictions about) any data at their disposal, generative AI models go even further to allow firms and individuals to simulate a wide variety of creative and transformative processes. For example, current state-of-the-art generative AI is capable of transformations from text-to-image (DALL-E), text-to-3D (Dreamfusion), image-to-text (Flamingo), text-to-video (Phenaki), text-to-audio (AudioLM), text-to-text (ChatGPT), text-to-code (Codex), and more. Last year, these generative AI models were estimated to generate $3.7B USD in revenue, a figure that is predicted to grow rapidly over the coming years, reaching over $36B USD by 2028. In addition to revenue associated with the immediate sale of generative AI products, studies estimate that the productivity gains from generative AI could increase global economic production by the equivalent of $2.6 to $4.4 trillion USD.

Despite the fact that the generative AI market is relatively small compared to the pre-existing predictive AI market, what makes this nascent technology likely to be disruptive across industries is the customizability generative AI models afford. The prominent generative AI models of today (e.g., ChatGPT, LLaMA, Bard, Claude) are known as ‘foundation’ models. These models are first extensively trained by developers to exhibit general-purpose generative abilities and can then subsequently be customized and adapted to unique personal or business applications. By using specialized training datasets or customizing model parameters, foundation models can thus be optimized for a wide variety of specific use-cases, as sketched in the example below. While all industry sectors are likely to be significantly impacted by generative AI in the long-run, studies indicate that most of the value created by generative AI lies in applications for customer operations, marketing and sales, software engineering, and research and development.
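As a schematic of the adaptation step described above, the following sketch shows what fine-tuning a small, openly licensed model on a firm’s own text might look like under a Hugging Face-style workflow; the model name stands in for any licensable foundation model, and the dataset file is a hypothetical placeholder.

```python
# Hypothetical fine-tuning sketch; "distilgpt2" stands in for any licensable
# foundation model, and "domain_corpus.txt" for a firm's proprietary text data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Load the firm's domain-specific text and tokenize it.
data = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

# Standard causal-language-model fine-tuning loop via the Trainer API.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, firms may instead rely on hosted fine-tuning services or parameter-efficient methods, but the economic point is the same: adapting an existing foundation model is far cheaper than training one from scratch.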

In order to develop high-performing foundation AI models, firms need massive amounts of data to train models on, extensive supercomputing infrastructure to train models within, and highly-skilled machine learning researchers to fine-tune model parameters. Although for small-scale projects there are freely downloadable machine learning algorithms, public data sources, and relatively low-cost cloud computing for rent, model performance is generally optimized by a combination of massive (and high quality) training data and significant computational resources to run training iterations. Indeed, the landscape of machine learning ‘startups’ today reflects the high cost of running a generative AI firm: the top eight AI startups have each raised over $100M USD, and the largest of them (OpenAI) has raised over $10B USD.

For non-AI focused firms looking to adopt AI technology into their business, the existence of licensable foundation models allows firms to avoid the potentially prohibitive cost of developing their own models from scratch. Although this market is still in its early stages, such firms can already license a foundation model, rent cloud computing infrastructure (if necessary), and purchase data from third-party vendors (if necessary and not gathered through the normal course of business).

Based on the differing market characteristics of the firms directly developing foundation models (faced with potentially prohibitive operating costs) and those firms merely adopting existing generative models (faced with the relatively low cost of licensing and leasing existing infrastructure), the following sections will discuss antitrust concerns associated with generative AI in these markets separately.

IV. Artificial Intelligence as a Tool for Anticompetitive Behavior

Many firms today are seeking to leverage generative AI across a variety of sectors, with some using generative AI to produce content-based products and others using these models to improve internal business processes. On the one hand, the licensing of generative AI models by such firms may serve as a stimulus to competition and lower barriers to entry. One possible mechanism for this is through increased labor productivity, with recent research showing that productivity gains from the use of generative AI are higher amongst new workers than they are for those with greater experience. Separate empirical research has corroborated this finding, showing that generative AI outperformed human programmers in certain easy-to-medium coding challenges but failed to match expert performance in more difficult tasks.

Boosting labor productivity, especially with disproportionate gains for new (and likely less productive) workers, may serve to eliminate one possible barrier to entry in talent-constrained industries – allowing a greater number of firms to access skilled labor and distributing best practices to a wider set of workers. Recent literature even shows that these productivity gains may lower technical barriers to performing research in economics and other computational social sciences.

On the other hand, generative AI may also pose a risk to competition in downstream markets that adopt these products. While the variety and scope of those markets make explicit acknowledgment of every possible anticompetitive harm difficult, there are two particularly noteworthy antitrust concerns that are relevant to the deployment of “fine-tuned” or applied models: (1) so-called “algorithmic collusion” (including price fixing, coordination, and monitoring of cartels or collusive agreements) and (2) price and behavioral discrimination. In discussing these potential areas for harm in the context of this article, I distinguish below between harms that are uniquely relevant to generative AI models as opposed to harms that may similarly arise from predictive AI models. Furthermore, it is important to distinguish between the differing circumstances within which algorithmic collusion and algorithmic price discrimination may arise. Collusion and monitoring of cartel agreements frequently require transparency between competitors. This is in contrast to price discrimination, which requires opacity to prevent competitors from stealing market share, consumers from diverting consumption or performing arbitrage, and enforcers from detecting anticompetitive conduct.

IV.A. Algorithmic Collusion and Price Fixing

Amongst the AI-related concerns currently being considered by competition authorities is “algorithmic collusion,” which is the use of AI systems to facilitate tacit collusion and/or to monitor compliance with cartel agreements. The topic of algorithmic collusion, both in academic research and in policy discussions, predates the more recent growth of generative AI, but has now been given renewed importance by the rapid development of generative AI models. Current pricing algorithms use forms of predictive AI, but it is plausible that generative AI contributes new capabilities in pricing strategy and allows a greater number of firms to utilize AI-based pricing algorithms. Specifically, third-party firms that specialize in price-setting strategy claim that generative AI can be used to generate personalized quotes, bids, and contracts; develop pricing communication materials; analyze competitor pricing; analyze customer feedback; and even allow relatively untrained operators to utilize price-setting predictive AI tools.

The primary concern around algorithmic collusion is that automated, artificial intelligence algorithms are likely to be far more effective, efficient, and subtle than their human counterparts at finding and maintaining a collusive outcome. Although there is currently no evidence of algorithmic collusion occurring in real-world markets, the advancement of artificial intelligence technologies may enhance the ability of algorithms to (a) predict consumer behavior and demand, (b) monitor competitor behavior, and (c) curate and exchange information with competitors. There is already evidence in the AI and economics literature of algorithms learning to collude on price and to monitor compliance. Even though this behavior has yet to be demonstrated in today’s markets, even very simple pricing algorithms have learned to conduct collusive strategies in research settings. Furthermore, the experimental literature discusses both collusion explicitly designed by algorithmic developers as well as the unintended development of tacit collusion by AI. In other words, an AI algorithm is capable of learning and executing algorithmic collusion even without explicit instruction from developers or even knowledge of such behavior by developers; if an algorithm is simply programmed to maximize profit, it may find collusion to be a particularly effective method of doing so.
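The kind of experimental setting referenced above can be sketched in a few dozen lines (a toy illustration in the spirit of the simulation literature, not a reproduction of any published study): two independent Q-learning agents repeatedly set prices in a stylized duopoly, each rewarded only by its own profit, with no instruction to coordinate. The literature reports that such agents can learn to sustain prices above the one-shot competitive level; this sketch shows only the mechanics of the setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete price grid and a common marginal cost; purely illustrative numbers.
prices = np.linspace(1.0, 2.0, 6)
n = len(prices)
cost = 1.0

def profits(i, j):
    """Stylized differentiated-products duopoly: each firm's quantity falls in its own
    price and rises in the rival's price. Chosen for simplicity, not realism."""
    pi, pj = prices[i], prices[j]
    qi = max(0.0, 2.0 - 2.0 * pi + pj)
    qj = max(0.0, 2.0 - 2.0 * pj + pi)
    return (pi - cost) * qi, (pj - cost) * qj

# One Q-table per firm: state = last period's (own price, rival price), action = own next price.
Q = [np.zeros((n, n, n)) for _ in range(2)]
alpha, gamma = 0.1, 0.95
state = (0, 0)  # (firm 0's last price index, firm 1's last price index)

for t in range(200_000):
    eps = np.exp(-1e-4 * t)  # decaying exploration rate
    acts = []
    for f in range(2):
        own, rival = state[f], state[1 - f]
        if rng.random() < eps:
            acts.append(int(rng.integers(n)))               # explore: random price
        else:
            acts.append(int(np.argmax(Q[f][own, rival])))   # exploit: best known price
    rewards = profits(acts[0], acts[1])
    for f in range(2):
        own, rival = state[f], state[1 - f]
        next_own, next_rival = acts[f], acts[1 - f]
        best_next = np.max(Q[f][next_own, next_rival])
        Q[f][own, rival, acts[f]] += alpha * (rewards[f] + gamma * best_next
                                              - Q[f][own, rival, acts[f]])
    state = (acts[0], acts[1])

print("prices after training:", prices[state[0]], prices[state[1]])
```

Each agent observes only last period’s prices and its own profit; any coordination that emerges is learned, not programmed, which is precisely the liability puzzle discussed below.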

The concept of algorithmic collusion predates the rise of generative AI models; the topic was popularized by the publication of Virtual Competition in 2016, which highlighted the use of predictive AI algorithms by prominent technology firms. Yet, although the risk of algorithmic collusion extends beyond generative AI, frontier generative AI models have additional capabilities, which predictive AI models lack, that may exacerbate collusion risks. For example, given generative AI models’ capabilities for content generation, coordination amongst multiple generative AI agents may become increasingly difficult to detect as steganographic techniques improve. Therefore, generative AI may allow for sophisticated and subtle communication between competing firms without any human input necessary for setting prices or facilitating transparency in information exchanges. Further, depending on the scope of deployment of fine-tuned generative AI models and the data collection agreements AI firms are able to impose on adopting firms, it is possible that AI firms (whether developing predictive or generative models) attain a wide-sweeping view of consumer and producer behavior across many markets, which may pose a competition concern if such firms are also responsible for setting prices in those markets.

Despite these concerns, competition authorities have been clear that anticompetitive conduct such as price fixing is clearly illegal regardless of whether or not a human is directly involved in setting prices. As a result, such conduct is likely within the scope of existing antitrust laws, and the only outstanding question would be in determining liability for collusive behavior that is not initiated by human actors. Such forms of tacit collusion by algorithmic pricing models are likely not to be deemed illegal per se, and establishing intent by human actors may be a critical link in finding legal liability or anticompetitive conduct. But, in addition to this challenge, regulators may also benefit from adopting novel machine learning technologies themselves. Although algorithmic collusion is likely to be more difficult to detect than human-facilitated collusion, technology in both predictive and generative AI can also be applied as compliance tools for regulatory detection and enforcement.

Because algorithmic collusion and price fixing are likely to directly impact consumer surplus through increased prices and/or decreased supply or variety of products and services, such harm is directly relevant to consumer welfare. While there are interesting ethical and legal questions regarding the liability for harm from machine-led collusion, the harm that may arise from such conduct can plausibly be demonstrated empirically and is likely within the current scope of the consumer welfare standard. As generative AI is likely capable of increasing the ability of firms to enlist algorithmic pricing strategies, further empirical research on the proliferation of such price-setting algorithms may be useful to monitor the possibility of algorithmic collusion manifesting in real-world markets (a behavior that, to date, has been demonstrated only in experimental contexts). Past research on such topics indicates APIs for online marketplaces as a potential source of data for such research, which may be supplemented by (or substituted for) web-scraped data, depending on the availability of data sourced through such APIs.
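As a hedged illustration of what such monitoring might look like in practice (a sketch only; the input file and its columns are hypothetical placeholders for data gathered via a marketplace API or web scraping), a simple first-pass screen could flag seller pairs whose posted prices move in near-lockstep, which would then warrant closer economic analysis rather than serve as proof of collusion:

```python
import itertools
import pandas as pd

# Hypothetical input: one row per (date, seller) observation of a posted price,
# e.g., collected from a marketplace API or by web scraping.
df = pd.read_csv("observed_prices.csv", parse_dates=["date"])  # columns: date, seller, price

# Pivot to a date x seller matrix and correlate day-over-day price changes
# (a crude screen for parallel pricing, not evidence of an agreement).
wide = df.pivot_table(index="date", columns="seller", values="price")
corr = wide.pct_change().corr()

# Flag seller pairs whose price changes are almost perfectly correlated.
for a, b in itertools.combinations(corr.columns, 2):
    if corr.loc[a, b] > 0.95:
        print(f"flag for review: {a} / {b} (corr = {corr.loc[a, b]:.2f})")
```

Parallel price movements can, of course, reflect common cost or demand shocks, so a screen of this kind identifies candidates for investigation rather than violations.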

IV.B. Personalization and Price Discrimination

One of the central debates surrounding AI (and “Big Tech” firms more broadly) concerns the collection and sale of personal information. Evaluating the current or future potential for harm caused by the sale or utilization of personal data is difficult for regulators and academics alike. While on the one hand, the use of personalized data can be viewed as welfare-enhancing from a purely rational understanding of consumer behavior and revealed preferences, on the other hand, the use of detailed consumer information can represent an information asymmetry between firms and consumers who may be entirely unaware of any targeting. Underlying this tension between the procompetitive and anticompetitive effects of personalization is a conceptual debate regarding the assumptions embedded in rational economic models of human behavior.

IV.B.1. Rationality and Consumer Behavior

Under neoclassical economic theory, consumers are considered rational decision-makers, optimizing their individual welfare by purchasing products and services according to their unique preferences. This standard assumption of rationality is fundamental to the price theory-related efficiency arguments of the Chicago School of economics. Although a simplified omniscient and rational model of human behavior proves highly useful for modeling purposes, it loses much of the nuance observed in real-world human behavior.

Specifically, human rationality can be understood to be ‘bounded’ by a number of constraints. One of these constraints is that individuals are information-constrained (past information is both incomplete and biased) and often face information asymmetries. Another is that individuals are computationally constrained and do not have unlimited resources and time to optimally process past information. The combination of information and computational constraints can result in alternative models for consumer behavior as ‘satisficers,’ pursuing threshold satisfaction across their preferences as opposed to optimizing with perfect information and rationality.

A fundamental point of distinction is whether or not the revealed preference of consumer purchasing decisions is truly rational (even if not omniscient) or if consumer decision-making reveals systematic biases that may be exploited. On this point, interdisciplinary research in psychology and economics has demonstrated various heuristics and systematic biases in human decision-making, such as the representativeness heuristic, anchoring, future discounting, loss aversion and recency bias. Although these alternative models of human behavior do not fundamentally replace the core economic models based on rationality assumptions, they do shed light on vulnerabilities that may be exploited in consumer decision-making.

As the continued advancement of artificial intelligence increases the ability of producers to acquire and understand consumer data, the degree of information asymmetry faced by consumers will likely increase as well. Targeted analysis of consumer data has indeed already found a viable market, with many major technology firms hiring behavioral economists and behavioral scientists, and new industries seeking to employ the tools of behavioral economics as well. Just as the findings of behavioral economics can be used in public policy to ‘nudge’ the general population towards investing for the future, they can also be used by firms to nudge consumers towards increased engagement, service subscriptions, and higher-priced goods.

As to whether or not firms having greater knowledge of consumer preferences and biases is actually welfare-enhancing, answers will vary depending on the surplus and welfare relationship as discussed in Section II. If individual decisions and preferences should be unwaveringly respected, then personalization and behavioral marketing are likely non-problematic – strategic marketing or sales policies may simply be understood as reducing search costs and uncertainty for consumers. Alternatively, if individuals are understood to be capable of making the ‘wrong’ choices for themselves (as an extreme example: addiction), then knowledge of consumer preferences can lead to supposed increases in surplus and efficiency that do not actually correspond with further increases in welfare. What follows is fundamentally a policy question on how paternalistic the government should be and, relatedly, whether that policy question is best answered in the realm of antitrust or is better served by other forms of consumer protection. Further, even if one assumes bounded rationality may result in inconsistent choices for consumers, the challenge arises as to how to effectively measure the welfare of those consumers and what policy may serve to be most welfare-enhancing.

IV.B.2. Methods of Personalization and Discrimination

As discussed in greater detail below, the increasing amount of detailed information tracked and analyzed about individual consumers gives firms more ability to pursue individualized pricing (or near-perfect price discrimination) and to use personalized marketing to stimulate demand (through so-called ‘behavioral discrimination’). While these challenges certainly relate to predictive AI as well, generative AI is likely to compound such concerns due to the ability of firms to elicit detailed long-form consumer information and generate personalized marketing materials, product recommendations, and pricing strategies. Indeed, studies have found that the utilization of generative AI, particularly LLMs, improves the persuasive power of consumer messaging and allows such personalization to be scalable.

As discussed in Section IV.B.1, targeted marketing (and marketing more generally) is plausibly considered procompetitive to the extent that the effect of that marketing is to increase consumer awareness of product substitutes and quality features. Although the impact of advertising may be to ‘stimulate demand’ through increasing consumer willingness to pay for a given product, such a shift in demand may be entirely a result of increased quality perception of a given brand or greater certainty that such a product will fulfill a consumer’s preferences. However, to the extent that advertising misleads consumers about the expected utility of a given product, such forms of advertising may be considered harmful, taking advantage of behavioral biases and the constraints consumers face in decision-making.

Indeed, various studies have covered the forms of “buyer’s remorse” consumers may experience (a phenomenon that occurs frequently with online shopping). Despite this reality, the current literature on consumer regret indicates that consumers tend to learn from these experiences (regretful purchases frequently result in consumer motivation to switch brands), and to the extent that advertising is clearly misleading, preventing deceptive trade practices has viable enforcement options outside of the scope of antitrust.

Separately from influencing consumer willingness to pay, behavioral-based marketing may serve to differentiate products and lessen competition. Although there is certainly evidence to support the pro-competitive effects of advertising discussed above (through increasing awareness of substitutes and effectively increasing elasticity of demand), separate research points to other cases where future or present price elasticity of demand may actually decrease due to advertising. The literature indicates that in markets where the advertising firm has a significant market share, is well known, or already preferred by a given consumer, the pro-competitive effect of non-price advertising (increasing consumer knowledge of substitutes and quality) tends to be outweighed by differentiation.

Although the role of generative AI in digital marketing is still nascent, academic and industry research indicates a great likelihood for generative AI to be used at scale for personalized (or so-called ‘hyperpersonalized’) advertising. To the extent that firms are effectively able to use generative AI in advertising to differentiate their products and afford themselves a degree of market power, they may subsequently use their detailed knowledge of individual consumers for another form of discrimination: price discrimination. Until recently in the United States, regulation around price discrimination has been relatively limited due to arguments that price discrimination can be welfare-enhancing for consumers and lead to greater market efficiencies. However, regulatory focus on price discrimination has increased after the revival of the Robinson-Patman Act in 2021. While the litigation brought by the FTC under the Robinson-Patman Act primarily concerns commodity goods in limited contexts, this recent trend reflects a greater willingness of regulators to investigate price discrimination in other contexts (even though price discrimination is frequently presumed to be lawful).

Firms are likely unable to profitably price discriminate without detailed consumer information and the ability to prevent consumers from switching to lower-priced alternatives (in other words, market power). However, as discussed above, generative AI models applied by firms in today’s data-intensive economy may circumvent these limitations. Increasing knowledge about individual consumers and advancements in computational capabilities will likely allow firms to increasingly understand granular consumer preferences and to use that information for consumer persuasion. Through the maintenance of detailed consumer information combined with varying degrees of differentiation afforded through personalized advertising, such fine-tuned, personalized pricing becomes possible.

Even in instances where consumers have higher demand elasticity and would consider switching to competitor products based on price, observed switching behavior may be infrequent due to information constraints in certain markets. Although consumers can theoretically always access rival firm pricing information in today’s digital economy, platform-based firms may provide a ‘walled garden’ effect where information on outside competitor pricing is opaque. Further, across a variety of markets, consumers are faced with so-called “dynamic pricing” in which short-term changes in demand can be reflected in real-time changes in consumer prices. In such an environment with variable prices, consumers may be unable to compare current prices to any given list price, making comparison shopping more difficult for consumers, and increasing the ability of firms to employ price discrimination. Although consumers can theoretically combat price discrimination with reverse-engineered algorithmic pricing programs themselves, the adoption of these preventative solutions today may be rather rare in practice. Additionally, the cost consumers incur to inspect for price discrimination may represent a deadweight loss for social welfare.

A key point of discussion on price discrimination is whether it results in greater surplus for consumers through allowing scale efficiencies or if the subsidization of consumers with lower willingness to pay comes at the cost of reducing consumer surplus for consumers with higher willingness to pay. Due to network effects and economies of scale discussed in the following section, firms may use price discrimination as a tool to subsidize consumers with low willingness to pay in order to attain greater scale (or to prevent competitors from gaining sufficient scale).

Optimistic perspectives on personalized pricing (or so-called “progressive” pricing) suggest that scaling prices based on consumer willingness to pay is likely to increase surplus for both producers and consumers by lowering average marginal cost through scale efficiencies:

Figure 1: “Progressive” Pricing

Sources and notes:

p*: The price chosen by the firm to maximize profit without price-discrimination. In a competitive market, marginal cost is equal to p* and the firm is a price taker. If this firm has market power, profit maximizing p* may be set greater than marginal cost.

q*: The quantity demanded by consumers given price p* and willingness to pay WTP (the demand curve).

P: The price curve under price discrimination. Note that all values on price curve P are below p* (the non-discriminatory price).

MC: The marginal cost under price discrimination, driven lower due to economies of scale.

Figure based on Exhibit 1 included in the following article from the Boston Consulting Group in 2019: Jean-Manuel Izaret & Just Schurmann, Why Progressive Pricing Is Becoming a Competitive Necessity, BCG (Jan. 17, 2019), https://www.bcg.com/publications/2019/why-progressive-pricing-becoming-competitive-necessity (last visited Apr. 9, 2024).

However, there are a number of potential outcomes in which this welfare-enhancing effect might be incorrect or overstated. One must consider a firm's incentive for setting prices in the manner shown above and, specifically, the availability of alternatives at competing prices. The rationale behind pursuing price discrimination for this firm is that it can increase profit by lowering the marginal cost of goods sold through greater scale and personalized pricing. Once a firm attains greater scale, one must reconsider such a firm’s competitive landscape. Is this firm faced with identical competitors as when it produced q*, or is this firm facing similarly strategic competitors? There are three illustrative scenarios to consider under such “progressive pricing”: (1) competing firms remain operational but do not similarly price discriminate, (2) price discrimination leads to increased market share that forces other competitors out of business, and (3) competing firms remain operational and similarly price discriminate. In the first case, a profit-maximizing firm would only be incentivized to price below p* based on consumer willingness to pay, and all consumers who previously paid p* would continue to pay this price:

Figure 2: “Progressive” Pricing vs. Non-Price Discriminating Firms

Sources and notes:

To the left of q*, the firm conducting progressive pricing competes with other firms and maximizes profit at equilibrium price P. To the right of q*, the firm conducting progressive pricing does not face competition from other firms and maximizes profit by charging at consumer WTP.

Any prices set above p* would not be accepted by consumers given the availability of competing products for consumers at that price range. And so, the profit-maximizing firm would not be able to set prices at the willingness to pay of the consumers with the highest demand (to the left of q*), leaving consumer surplus unchanged for this group. However, consumers with lower demand (to the right of q*) were not previously served in this market and therefore will tolerate prices up to just below their willingness to pay. Although these consumers receive the desired product, consumer surplus does not actually increase (or increases only insignificantly) given that these consumers are nearly indifferent to purchasing the product at the offered price. Hence, in scenario (1), it is possible that consumer surplus does not increase at all and any additional welfare gains accrue as producer surplus.

Next, consider scenario (2) where only one firm can profitably attain scale to cater to lower willingness to pay consumers. As discussed in detail later in this article, such a scenario that results in a ‘natural monopoly’ may be more likely to occur in markets with specific factors such as high switching costs or barriers to entry. It is possible that this scenario may dampen competition in a non-linear fashion; after initially increasing surplus for all consumers through ‘progressive’ pricing, such a firm may increase its scale and market concentration to the point that other competitors lose out on similar economies of scale. After using consumers with higher willingness to pay to subsidize consumers with lower willingness to pay, such a firm may attain sufficient scale to meet all consumer demand more efficiently than its competitors. If such activity forces other competitors to go out of business, such price discrimination may segue into a form of predatory pricing. The resulting competitive landscape (or lack thereof) allows the firm to charge supracompetitive prices across all consumers, barring any instances where the firm must react to the threat of entry from rivals.

Figure 3: “Progressive” Pricing with Monopoly Power

Sources and notes:

By serving the whole market without competition from other firms, this firm maximizes profit by charging at consumer WTP for all consumers.

By forcing competitors without scale out of the market, a “progressive” pricing firm in scenario (2) can effectively charge at each individual consumer’s willingness to pay. From an efficiency perspective, monopolistic first-degree price discrimination, as shown here, is welfare-enhancing. However, price discrimination in this manner also results in a distributional effect, maximizing surplus for producers and eliminating surplus for consumers.
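The distributional point can be made concrete with a stylized linear-demand example of my own (not drawn from the article’s figures): with inverse demand P(q) = a − bq and constant marginal cost c, compare a uniform monopoly price to perfect (first-degree) price discrimination.

```latex
% Uniform monopoly pricing: q_m = \frac{a-c}{2b}, \quad p_m = \frac{a+c}{2}
CS_{\text{uniform}} = \frac{(a-c)^2}{8b}, \qquad
PS_{\text{uniform}} = \frac{(a-c)^2}{4b}, \qquad
DWL_{\text{uniform}} = \frac{(a-c)^2}{8b}

% Perfect price discrimination: every unit up to q = \frac{a-c}{b} is sold at the buyer's willingness to pay
CS_{\text{discrim}} = 0, \qquad
PS_{\text{discrim}} = \frac{(a-c)^2}{2b}, \qquad
DWL_{\text{discrim}} = 0
```

Total surplus rises because the deadweight loss disappears, but the entire surplus, including what consumers previously retained, accrues to the producer, which is precisely the distributional concern described above.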

Finally, consider scenario (3), in which many firms conduct progressive pricing in the manner shown in Figure 1. There are a number of circumstances in which this scenario is unlikely to occur, such as markets where consumers can use arbitrage (to avoid paying supracompetitive prices) or markets where products are not differentiated (as price discrimination requires some degree of market power to prevent competitors from stealing market share with discounting). Although it is theoretically possible for this scenario to occur, real-world data from pricing experiments indicates that welfare-enhancing effects from price discrimination are not shared equally by all consumers, even if some individuals benefit. A 2017 study analyzing the welfare impacts of machine learning-generated personalized pricing found that although 60 percent of consumers benefited from personalized pricing and firm profitability increased by 55 percent, total consumer surplus actually declined by 23 percent. While it is yet unclear how the addition of generative AI technology into personalized pricing strategies will impact welfare, this experimental evidence indicates that the optimistic accounts of so-called ‘progressive’ pricing understate the welfare losses, particularly for consumers with higher willingness to pay.

Non-obvious forms of price discrimination or personalization are likely to be even more difficult to regulate. For example, diverting consumers to higher-priced products through targeted advertising or promotional programs may achieve the same effect as personalized pricing and would be much more difficult to prove empirically, even though those consumers technically do have access to lower-priced products. Price discrimination may be especially inefficient and potentially harmful to consumers when willingness to pay depends in part on product misperceptions. Another strategy firms may employ is ‘price skimming,’ in which firms initially set prices high to target consumers with higher willingness to pay before gradually lowering prices to meet further demand.

Although there are plausible mechanisms for personalized pricing to reduce consumer surplus, price discrimination is considered legal in many contexts. Price discrimination thus may be understood more as a symptom of market power rather than a cause of it per se. The primary role generative AI is likely to play in this sense is to increase the efficacy of personalized advertisement and marketing, affording firms the market power to then pursue profit through price discrimination. The exact extent to which marketing and customer persuasion are considered to lessen competition depends on certain market conditions and assumptions regarding the limits of consumer rationality. Personalized advertising by well-known brands or in markets with greater concentration may serve to lessen competition, unlike advertising that increases consumer awareness of substitute products.

Still, whether or not that conduct (the applied knowledge of consumer preferences in advertising) is problematic depends on value judgments and the core relationship between consumer surplus and consumer welfare. Indeed, if individual biases are to be equated with revealed preference as a socially optimal outcome, then engaging in addictive behavior would consequently be considered socially optimal as well. Revealed preference is a powerful and informative concept, but relaxing our assumptions around consumer rationality opens up additional pathways toward consumer harm that may not be accounted for by the traditional conception of consumer welfare.

As the information asymmetry between consumers and producers grows along with the deployment of continually more advanced machine learning models, US regulators need to be equipped to understand how firms can use those models to dampen competition, increase market share, and stimulate demand. To the extent that antitrust standards do not account for consumer biases or bounded rationality, other regulatory avenues such as consumer protection may fill this void, particularly as such concerns arise in contexts such as discriminatory insurance premiums based on predisposition for health conditions. Regulation of the use of data and personalization may also extend into labor policy discussions as employers may move towards greater personalization in salary and compensation.

To elucidate the role of generative AI in personalized marketing, future research on the relationship between marketing and price elasticity of demand may be tailored specifically towards advertising that is the product of generative models. Furthermore, research on real-world dynamic pricing may demonstrate the extent to which consumers with higher willingness to pay actually pay higher prices for the same products or services through strategic behavior by firms, such as price skimming. An additional area of research may be to explore the relationship of advertising and customer acquisition spend to product quality and price. If firms seek differentiation amongst specific consumers with high willingness to pay, they may be willing to incur greater costs in customer acquisition that do not directly translate to increases in product quality and ‘competition on the merits.’

V. Competition Concerns in the Market for Generative Artificial Intelligence

While there are evident challenges to address in regulating competition in downstream markets using emerging generative AI technology, there are also potential challenges to address in the upstream market (developers of generative AI products). These challenges may prove to be even more elusive under the consumer welfare standard than those discussed previously in this article. As I will discuss below, the market for generative AI has a number of characteristics that increase the likelihood of market concentration.

Additionally, although firms in this market are likely to gain efficiencies from scale, there are also a number of potential harms that may stem from any resulting market concentration. This dynamic highlights an important balance in antitrust policy: incentivizing firms to pursue the rents that reward productive innovation today, while not allowing those rents to endure and thus distort the landscape of competition tomorrow.

V.A. Precursors of Concentration in the Market for Generative AI

The development of generative AI models primarily depends on three key inputs: data, computational resources, and high-skill labor.

In today’s virtual economies, data is constantly being recorded and stored in the hopes of future monetization or sale to third-party data vendors. There are many methods by which firms may acquire data, such as through information collected in the normal course of business, web scraping, offering a new service, hiring people (e.g., collecting data through Mechanical Turk), purchasing data, accessing public or government data, or even using computer-generated data. Data is a non-rivalrous good, meaning that one firm’s access to data does not exhaust that data’s usage; it can subsequently be shared or sold to other firms in an industry without diminishing the utility for the first firm. However, if a firm gains a competitive advantage through differential access to data and owns the rights to a given proprietary source, data is likely to be treated as an excludable good, with rivals prevented from accessing it.

Furthermore, just because data is plentiful and used by a variety of firms in different contexts does not mean that all data is created equal. Specifically, data is differentiated by a number of factors collectively referred to as the “Five V’s of Big Data”: volume (the amount of data), velocity (the speed at which data is collected or delivered), veracity (the accuracy or reliability of data), variety (the different types of data recorded), and value (the ability of data to be translated into a valuable monetary resource). Not only are firms with greater scale far better positioned to record a large volume of data, at more frequent intervals and of a greater variety, than their smaller rivals, but those firms are also likely to develop greater expertise in identifying data with more veracity and value. These factors make it likely that data serves as a barrier to entry in competition for the development of generative AI models. Current litigation surrounding the use of copyrighted materials in the training of generative AI models (regardless of whether or not these claims will be resolved as fair use) illustrates the lengths to which firms will go to avoid paying the otherwise prohibitive costs of purchasing data outright or developing data pipelines of sufficient quality and scale to compete in today’s generative AI market.

Generally speaking, in order to optimally increase the performance of a given AI model, one must increase the volume of training inputs (data) and the number of training iterations (computation). Indeed, the numbers from today’s most prominent generative AI model, OpenAI’s ChatGPT, corroborate this reality: estimates suggest it costs close to $1 million USD per day to run ChatGPT. The upfront cost of acquiring training data, combined with the recurring cost of maintaining data infrastructure and running the AI models, may be prohibitive for many firms, requiring significant scale before a generative AI firm can attain a marketable and profitable product. Indeed, the top eight AI startups have each raised over $100M USD in their attempt to vie for the current and future markets of generative AI.
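The claim that performance scales jointly with data and computation has an empirical basis in the machine learning scaling-laws literature; one widely cited parameterization (Hoffmann et al., 2022) expresses expected model loss as a function of parameter count N and training tokens D, with total training compute roughly proportional to the product of the two:

```latex
% Lower loss L requires jointly scaling parameters N (compute per token) and data D (training tokens);
% E, A, B, \alpha, \beta are empirically fitted constants.
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because both terms must shrink together, firms lacking either large proprietary datasets or large compute budgets face diminishing returns from scaling whichever input they do have, reinforcing the fixed-cost dynamics described above.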

Although a robust data pipeline and significant computing power are vital to the success of any generative AI model, the scarcest resource may prove to be the labor needed to develop and fine-tune the model. Talent is a particularly challenging barrier for new firms to overcome for two reasons. First, firms with greater scale are more capable of recruiting top-level AI talent, whether through acquisition or by offering higher salaries and reputational advantages that smaller firms cannot match. Second, the supply of talented machine learning engineers cannot quickly respond to changes in demand; engineers typically spend roughly a decade in post-secondary schooling to attain a PhD in computer science, machine learning, or data science.

Furthermore, as the advancement of AI models continues to improve the labor productivity of programmers and computer science engineers, it is possible that these returns will be concentrated amongst those firms at the forefront of developing generative AI models. Even with the current state of generative AI technology, analyses of customer support agents have shown labor productivity increases of 14 to as much as 34 percent when using generative AI. Although these copilot and coding-assistant technologies can, on the one hand, flatten the distribution of labor productivity – with the greatest proportional benefit accruing to novices – on the other hand, skilled practitioner guidance is likely necessary to minimize risk and maximize productivity.

As the development of so-called weak AI models (those with less generalized intelligence) gives way to strong AI models (those with generalized intelligence similar to or greater than humans’), these models may reach a point where self-improvement is possible with less and less human intervention. Depending on the cost of computation, it is possible that such improvements will be far cheaper than hiring additional human machine learning engineers. Similar to data, such improvements can be considered non-rivalrous but excludable: all such productivity improvements could theoretically be shared by rival firms, but a firm owning a given generative AI model can delay access to, or prevent other firms from using, proprietary versions of coding assistants. While in an optimistic scenario the ability of an AI model to improve its own parameters, training and validation processes, and data acquisition would allow for widespread productivity gains, it is also possible that such gains will be concentrated amongst only a few leading models.

In addition to limitations in accessing data, compute, and labor, it is possible that the regulation of generative AI models will serve as an additional fixed cost and barrier to entry for new firms, to the benefit of incumbents. Specifically, given the disruptive nature of generative AI technology and its far-reaching implications for national security, the legislative and executive branches of the US government have initiated a number of efforts to manage risk, regulate, and monitor frontier AI development. While such efforts are likely helpful from a national security and even existential-risk standpoint, they also cut against the long-standing emphasis on open-source development in the computer science and AI community. Open source can certainly serve as a buttress to competition in AI markets, but regulation may curtail these competitive features due to overriding policy concerns (or firms may simply revert to proprietary ownership after attaining sufficient efficacy with their open-source models).

In summary, the current market for generative AI appears dynamic, with a number of firms vying to develop their own generative models, but such competition may diminish as the market matures. Although technology firms and venture capitalists alike are currently placing their bets on which models are most likely to succeed in tomorrow’s markets for generative AI (whether Microsoft and OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, or other models), it is possible that only one or very few firms will profitably attain scale, given the labor, data, computation, and regulatory constraints outlined above. Even today, only six firms (each developing multiple models) have been able to bring cutting-edge generative AI models to market, overcoming the extremely high cost of computation and frequently acquiring multiple startups in order to meet talent needs.

Due to the high start-up costs these firms face and the low, even near-zero, marginal cost of each additional user of a given generative AI model, the market for generative AI is likely to operate with economies of scale. Such economies of scale (and their correspondingly high market concentrations) present a challenge to regulators: efficiencies of scale are frequently surplus-enhancing from the perspective of the consumer welfare standard, but the accompanying increase in market concentration is distinctly opposed to the competitive conditions standard. In order to understand the procompetitive and anticompetitive implications of scale in the market for generative AI, I provide below an overview of economies of scale and their procompetitive implications, as well as the potential harms and anticompetitive effects that may arise from such ‘natural monopolies.’

V.B. Understanding Economies of Scale

In traditional economic thought, there is no long-term economic profit in perfectly competitive markets. Without market power, firms are price takers for whom price is driven down to the level of cost by competition from other suppliers. Assuming firms can, in the long run, switch into markets with higher returns, even a market that initially has few competitors will see entry by firms seeking those returns, pushing prices toward an equilibrium with zero long-run profit. Even firms with significant market share can still be considered price takers so long as fear of entry incentivizes them to keep prices down. Competition authorities in Europe seek to mandate such price competition through regulation of so-called ‘excessive pricing,’ whereas US antitrust authorities do not.
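As a stylized statement of this textbook benchmark (standard notation, not drawn from any source cited here), long-run competitive equilibrium requires price to equal marginal cost and minimum average cost, leaving zero economic profit:

p = MC(q^*) = \min_{q} AC(q), \qquad \pi = \big(p - AC(q^*)\big)\,q^* = 0.

The discussion that follows asks how far markets with large fixed costs, such as generative AI, depart from this benchmark.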

In practice, it is rare for markets to function in a perfectly competitive manner such that price is equivalent to cost. Firms rationally avoid commoditization and seek economic rents, differentiating their products to appeal to various consumer preferences, marketing heavily, and investing in new technologies to reap the benefit of intellectual property protections. In markets where producers differentiate themselves and there are many sellers, economists refer to this imperfect competition as monopolistic competition.

While short-term profits can arise under monopolistic competition, long-run profits may result under oligopolies or monopolies (markets with one or very few firms). Such economic rents are sustained through barriers to entry that prevent other firms from entering the market. These barriers can come from control over a key resource, government protections, or ‘natural’ barriers such as economies of scale that lead markets to tend towards natural monopoly. Even without control over a key resource, a natural monopoly is similarly protected from price competition by the lack of entry from rivals. In a true natural monopoly, the incumbent firm is only able to generate long-run profit because of its scale; the entrance of any additional firm would constrain the scale of both the incumbent and the entrant, preventing the entrant from earning a profit.

Whether a market exhibits classical economies of scale and tends towards natural monopoly depends on the magnitude of the barriers to entry and the marginal cost of production relative to the total quantity demanded in the market. If fixed costs are high enough that the scale required to minimize costs is equal to or larger than available consumer demand, only one firm is capable of profitably attaining scale, resulting in a natural monopoly. Traditional examples include public utilities, where the cost of building infrastructure is so prohibitively expensive that it can only be offset by a firm’s ability to subsequently earn monopoly profit. Because the marginal cost of serving additional customers is so low, goods and services sold by such natural monopolies can be considered non-rivalrous but excludable. The analogy used by plaintiffs in some technology competition cases involving network effects is of a ‘gatekeeper’ firm charging a toll to others and discouraging conduct that would lead to rival ‘bridges.’
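One standard way to formalize this tendency (textbook notation, not drawn from the sources cited here) is cost subadditivity: a market with total demand Q is a natural monopoly when a single firm can serve Q more cheaply than any split of that output across multiple firms,

C(Q) < C(q_1) + C(q_2) + \dots + C(q_n) \quad \text{for any } q_1 + q_2 + \dots + q_n = Q.

With a large fixed cost and a low marginal cost, this inequality holds easily, since every additional firm duplicates the fixed cost without meaningfully lowering the cost of serving demand.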

Assessing whether barriers to entry are caused by pro-competitive efficiencies or anti-competitive behavior (and, relatedly, whether the resulting profits indicate positive long-run economic returns) poses a difficult challenge for US regulators. US firms often invest heavily in fixed costs such as labor and technology infrastructure for many years before attaining profitability, making it hard to assess the real profit margin on present-day goods. The risk of these investments must also be considered, given the probability that a firm’s research and development spending (often billions of dollars) never yields the sustained future margins needed to recoup it. If these positive economic returns or sustained profits do not approach zero in the long run, US antitrust authorities can investigate whether the barriers to entry are caused by anticompetitive conduct. However, as discussed above, conduct that is not anticompetitive can also facilitate natural monopoly. With economies of scale, a firm’s high market share may be entirely efficiency-based, and it is possible that no new entrant would rationally invest to the minimum scale needed to become profitable. To the extent that a firm does act as a ‘gatekeeper,’ defendants may claim that restraints of trade by the dominant firm are integral to maintaining service quality and maximizing output. Although the threat of entry may exist, the inability of entrants to profitably attain scale allows incumbent firms to retain profits in proportion to how constrained that threat remains.

There are various forms of scale effects that can result in such ‘natural monopoly.’ These include (1) classical supply-side returns to scale, in which average costs decrease with greater production volume, (2) demand-side returns to scale, such as consumers opting to use a given platform because of network effects, and (3) learning-by-doing, in which improvements in quality and reductions in cost are attained through prior business and subject-matter experience. It is likely that all three of these forms of economies of scale apply, to varying degrees, in the market for generative AI models.

Generally speaking, the high costs associated with acquiring data, running training iterations, and hiring machine learning engineers to fine-tune model parameters are fixed costs incurred to create an effective generative model before selling it to businesses or consumers. The marginal costs associated with the actual sale of the generative model (whether on a per-query or subscription basis) are almost entirely driven by the cost of compute needed to serve additional queries – meaning that increasing scale tends to drive down the average cost of production, resulting in classical supply-side returns to scale. It is also likely that network effects cause demand-side returns to scale in the market for generative AI. As more users adopt a given generative AI platform, those users contribute to a greater shared understanding of optimal “prompt engineering,” and model developers can separately use data from those customer interactions to run additional training iterations that improve model parameters (so-called “data network effects”). Finally, those AI firms that bring a successful generative AI product to market are likely to learn from the experience in ways that improve their ability to bring successive, superior products to market.
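A minimal sketch of the supply-side effect, assuming a stylized cost structure in which F stands in for the fixed costs of data, training, and talent and c for the near-zero marginal cost of serving one more query (both symbols are illustrative, not estimates):

AC(q) = \frac{F}{q} + c, \qquad \frac{d\,AC(q)}{dq} = -\frac{F}{q^{2}} < 0,

so average cost per query falls continuously as usage grows, and the firm with the largest user base can, all else equal, undercut smaller rivals on price while still covering its costs.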

If a barrier to entry is efficiency-based, there is no clear path under the consumer welfare standard to demonstrate harm to consumers. An incumbent firm operating at scale simply provides preferable goods to consumers at lower prices than competitors without scale and, as a result, wins increasing market share. Even if such a firm is able to charge supracompetitive prices (prices above long-run average cost), analyses of what prices should be under a natural monopoly are unlikely to take effect absent a US equivalent of the EU’s ‘excessive pricing’ doctrine, given that the US antitrust regime prioritizes minimizing the cost to consumers of erroneous enforcement decisions. However, as I elaborate in the following section, this view may fail to account for other forms of harm or economic rents that may arise once firms in the market for generative AI attain sufficient market share, as well as the inefficiencies caused by the costs of maintaining a monopoly.

V.C. Theory of Harm Despite Economies of Scale

Although certain factors (as discussed in the preceding sections) indicate that market concentration may be likely as the market for generative AI matures, it is alternatively possible that many generative AI products will profitably attain scale – whether Meta’s LLaMA, Google’s Gemini, Anthropic’s Claude, OpenAI’s ChatGPT, or startup models yet to arise. Such a scenario is more likely if these products differentiate themselves according to distinct business needs (such as specialization within a given use case, or prioritizing business-to-business over business-to-consumer applications) or consumer preferences (such as branding around product safety, data privacy, or reduced ‘jailbreaking’ risk).

However, no foundation generative AI model has yet reached this point of profitable scale. Some developers currently report losses as large as $500M annually, with an expectation that these losses will continue to grow before technology firms are eventually able to turn a profit on generative AI. Hence, in the alternative scenario (where the aforementioned barriers to entry tend towards natural monopoly), the dominance of such a firm introduces the risk of anticompetitive conduct and other externalized harm, even if its market share was initially won through greater efficiency.

In this section I discuss three types of harm that may arise from such market concentration itself: (1) increased risk of abuse of dominance and the use of anticompetitive conduct to maintain market share, (2) reduced incentives to innovate, maintain product quality, or maintain competitive prices for firms that have ‘tipped’ their respective markets, and (3) externalities of market power.

V.C.1. Abuse of Dominance to Maintain Market Share

Even if a firm initially wins market share from competitors through greater efficiency or by providing greater value to consumers, incumbency status may better position that firm to deter future competition. Such efforts to deter competition may take a variety of forms, including bundling and tying, exclusive dealing or partnerships, self-preferencing, and acquisitions that stifle competition. This behavior may be particularly problematic when a dominant firm leverages its high market share in one market to influence the sale of its products or services in another market.

As an illustrative example, firms with existing market share in one technology market, whether a generative AI product or an earlier technology, may bundle the sale of (or simply set as the default) complementary products such as cloud computing resources, image- or audio-based generative AI products, or predictive AI products. In addition to these more traditional methods of anticompetitive conduct, future incumbents in generative AI (and advanced technology products more broadly) have access to newer methods of disadvantaging rivals that may be more difficult to adjudicate with traditional antitrust analysis. For example, firms may deter innovation by rivals by reducing interoperability with rival products, and may even alter consumer expectations about the value of rival products by introducing new products or marketing. A further emergent risk is the possibility that incumbent firms simply hire away the entirety of a rival’s labor force to neutralize the threat of competition. Such a risk is especially relevant to the market for generative AI given the aforementioned importance of talent in the development and maintenance of frontier AI models. To the extent that such strategies to maintain incumbency and deter entry are costly to implement, this conduct constitutes a welfare cost of monopoly, diminishing the efficiencies that led to natural monopoly in the first place.

Furthermore, firms may use intertemporal price discrimination to impose switching costs. By offering new customers discounted prices and raising prices for existing customers, firms can effectively target their most price-sensitive consumers and prevent competitors from profitably gaining market share. The ability of firms to impose such switching costs and achieve customer ‘lock-in’ stems from consumers’ inability to fully anticipate the future impact of switching costs and their tendency to misjudge how extensively they will search for competing products in the future. Such consumer ‘lock-in’ effects benefit incumbents and harm rivals. They may also incentivize producers to price well below cost, competing vigorously for customer acquisition before subsequently raising prices on ‘locked-in’ consumers, a tactic that may be considered a form of predatory pricing.
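A stylized two-period sketch (illustrative notation only, not drawn from the cited literature) shows why this can be rational. Suppose serving a customer costs c per period, a locked-in customer faces switching cost s, and rivals price at c. In the first period the incumbent offers an introductory price p_1 < c; in the second period it can raise the price up to p_2 = c + s before the customer prefers to switch. With discount factor \delta, the strategy is profitable whenever

(p_1 - c) + \delta\,(p_2 - c) > 0, \qquad p_2 \le c + s,

that is, whenever the discounted margin extracted from locked-in customers (bounded by the switching cost s) outweighs the initial below-cost loss.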

What makes this behavior even more challenging for regulators to assess is that firms may pursue future profits not only through increased prices, but also through lower quality standards and increased margins from cost cutting. In the market for generative AI, these changes in product quality may be particularly difficult for regulators or even consumers to observe in some instances, such as the unregulated sale of consumer data to third parties. It may be additionally challenging for regulators to decipher the competitive implications of customer ‘lock-in’ when firms compete in multiple markets. For example, if generative AI firms offer below-cost business-to-consumer products, they may afford themselves economic rent in the sale of business-to-business AI products (assuming such products market the advantages of a highly active base of end consumers).

In the US, market power demonstrated through prices that are systematically higher than costs is not itself a violation of antitrust law; rather, a firm must have behaved illegally in the pursuit of that market power. Although harms from behavior such as bundling, tying, and adversarial acquisitions may be found illegal and prevented under existing antitrust law and the consumer welfare standard, disadvantaging rivals through increased switching costs may not be, given the difficulty of differentiating between short-term economic rents won through greater innovation and long-term economic rents won through disadvantaging rivals.

Generally speaking, the US consumer welfare regime has not treated predatory pricing as a viable strategy for firms, and Chicago School economists often contend that predatory pricing is irrational because firms are unable to recoup the profits sacrificed during the alleged predation. However, given that generative AI firms – and many firms in the digital technology sector – are willing to incur massive operating losses to attain sufficient scale and profitability, it is possible that ‘competition for the market’ (discussed in greater detail below) incentivizes a form of predatory pricing that is not accounted for in a short-term analysis of consumer welfare.

V.C.2. Reduced Competitive Incentive to Innovate

Economists and European competition authorities have used the concept of ‘market tipping’ to describe the tendency towards natural monopoly that arises when a given firm attains sufficient market share over rivals, a phenomenon particularly common in competition between rival systems of integrated product and service offerings. To the extent that long-term economic rents accrue to incumbent firms in markets that tend towards natural monopoly, additional harm to consumers may arise because those incumbents face less competitive pressure to innovate, improve quality, or lower prices.

As generative AI firms gain greater economies of scale (as discussed above), and as these AI products become integrated into various existing search engines, chatbots, and other web-interface systems, this may lead to market tipping, in which a marginal initial competitive edge results in one firm substantially outperforming rivals (in a manner that is not proportional to continued innovation and investment). Regulating natural monopoly under this concept of market tipping may be difficult because short-term welfare-enhancing effects coexist with the possibility of long-term welfare harm. Specifically, prior to market ‘tipping,’ consumers are likely to experience positive surplus increases from network effects and economies of scale, but the incentive to improve upon these products or charge lower prices may be reduced in the long term.

Even without “problematic” anticompetitive conduct by the incumbent firm, it is possible that network effects, switching costs, information asymmetries, and behavioral biases prevent markets from working properly. In some markets with scale economies, only one firm is capable of earning positive profits at a given point in time, leading to so-called “competition for the market.” In such a scenario, competition exists only insofar as a challenger threatens to overtake the entire market; if the incumbent is difficult to replace, competitive concerns may be reflected in reduced innovation, lower quality, and higher prices than would exist in a but-for world. Factors that make this incumbency advantage more persistent include the offering of free essential services, the aforementioned network effects, the capacity for data-enabled learning, and the prevalence of single-homing (whether due to consumer homogeneity or a lack of product differentiation).

After such market tipping occurs, the question arises as to how innovation by the incumbent firm may differ, in terms of both the degree and the types of innovation, relative to a competitive equilibrium. As for degree, some academic literature supports the notion that incumbent firms invest less in innovation than challenger firms, and that challengers are more likely to pursue more disruptive, higher-value innovations than incumbents. Other research seeks to identify the types of innovation that a monopolist (in this case, a natural monopolist) would be incentivized to undertake. The literature indicates that an incumbent firm with market power is incentivized to engage in process innovations (i.e., increasing margins by reducing production costs), but that the threat of entry may reduce its incentive to pursue product innovations.
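One stylized way to see the weaker incumbent incentive is the classic ‘replacement effect’ from the innovation economics literature, offered here as a textbook illustration rather than a result drawn from the sources cited above. If an innovation yields monopoly profit \pi_{new}, an incumbent already earning \pi_{old} > 0 gains less from innovating than a challenger that currently earns nothing:

\Delta_{\text{incumbent}} = \pi_{new} - \pi_{old} \;<\; \pi_{new} = \Delta_{\text{challenger}},

because the incumbent’s gain is reduced by the existing rents it would replace, whereas a successful challenger captures the full new profit.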

Regardless of the types and degree of innovation pursued (even if across both product and process), it is important to consider the incumbent firm’s incentive first to innovate and then to pass the benefits of innovation on to consumers. Even for process innovations, which would theoretically increase total surplus through reduced costs, some research shows that consumer surplus may be reduced if monopolist firms constrain output. Further, in order to quell innovation, firms may identify and acquire startups before they are able to attain sufficient scale to challenge the incumbent. In the case of so-called ‘killer acquisitions,’ where a firm purchases another company for the sake of decommissioning a competing product, consumers receive none of the benefits of the more dynamic innovation typically undertaken by startups. Innovation under monopoly may still be incentivized, but the returns on innovation may serve simply to replace rents for the incumbent firm rather than be passed on to consumers. Despite the possibility of a reduced incentive to innovate under economies of scale, empirically demonstrating how this reduced incentive translates into consumer harm is likely to be very difficult because of the countervailing efficiencies that produce market tipping in the first place.

However, there are a number of factors that may mitigate this tipping effect and thus reduce the impact of market power on discouraging future innovation. These include multi-homing – the use of multiple rival services, so that providers are forced to compete on cost and no single service has unrivaled access to consumer data – and cases where consumers have a preference for heterogeneity. While it is possible that consumers of generative AI products may continue to multi-home as the market matures, separate research has found that the increased prevalence of artificial intelligence-based firms and technology platforms has corresponded with an increase in economic rents, a trend which may continue with the advancement of generative AI products.

Alternatively, regulation of markets that tend towards natural monopoly or ‘market tipping’ may take a variety of forms. Consumer protection can plausibly address information asymmetries between firms and consumers (which facilitate higher switching costs), while mandating data sharing or interoperability between competing products may be sufficient to address the factors that tend towards concentration. Where firms charge consumers higher prices – or fail to improve quality and reduce costs – on the basis of market power facilitated by natural economies of scale, regulators face a difficult question of how to adjudicate consumer harm without the EU concept of ‘excessive pricing’ (even though long-run margins may never converge to zero).

However, direct forms of price regulation may now be feasible to administer due to increasing sophistication in data analytics, though such policies are likely to distort industry incentives; in particular, they risk forfeiting the very efficiencies that made market tipping possible in the first place. In a market that ‘naturally’ results in one or two firms operating at scale, defining a but-for world with optimal innovation and competition relies on hypothetical predictions about future states of the world. Without evidence of illegal behavior that directly facilitated market power, US antitrust authorities may be limited in scope to specific bottlenecks in competition (such as mandating the availability of options to select among competing products on a given platform), but relieving those bottlenecks may be crucial to incentivizing entry by rival firms.

Indeed, encouraging entry into frontier technological markets is likely to have a particularly important effect on innovation. Studies using actual foreign firm entry as a proxy for the threat of entry have found that incumbents in frontier technology sectors innovated more as entry increased, while firms in less innovative industries innovated less, because those firms were unlikely to survive entry and therefore to reap the benefits of their innovation.

V.C.3. Externalities of Market Power

In addition to consumer harm from anticompetitive conduct used to maintain market power and from reduced incentives to innovate and compete on price under natural monopoly, additional harm from market concentration may be externalized. Under the consumer welfare standard, the pursuit of empirical demonstrations of harm may lead to a preferential weighting of some costs and benefits relative to others; such an analysis is likely to ignore (or at the very least substantially discount) many externalities. One commonly discussed externality of market power is regulatory capture, or the strong influence of firms with high market concentration on regulatory and political decision-making. Indeed, research has shown that greater market concentration (such as that following successful mergers) generally results in higher rates of lobbying and campaign expenditure by those firms.

In addition to the potential for subversive influence on regulation, if firms reach a large enough scale they may be deemed essential or ‘too big to fail,’ as was the case in the banking industry during the 2008 financial crisis. Recent research has examined the possibility of so-called ‘system-critical’ firms in industries outside of finance, such as electricity markets. High market concentration in generative AI products (which are likely to have increasing application across many future industries, including areas of national security relevance) may similarly prevent regulators and politicians from allowing the natural economic process of firms going bankrupt and being replaced by more efficient or financially robust rivals. Bailing out essential businesses prevents short-term consumer harm, as discussed above, but such regulatory behavior must also weigh the long-term consequences and incentives it sets across industries.

Furthermore, the consumer welfare standard may fail to fully encapsulate harm from monopsony power: the exercise of monopsony power may increase consumer surplus, but at the expense of total social surplus. Dominant firms, particularly in a high-skill technical market such as generative AI, may develop monopsony power that affects both labor and capital markets. The economic literature indicates that monopsony power can contribute to income inequality in both, as a greater share of labor market surplus accrues to the monopsonist firm and the resulting increase in aggregate profits flows to shareholders in proportion to their holdings. Although these distributional effects are outside the current scope of US antitrust, the economic literature has demonstrated that income inequality is likely to have negative implications for the broader economy and may dampen national economic growth.
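A standard textbook expression (not drawn from the sources cited here) illustrates the mechanism: facing an upward-sloping labor supply curve with elasticity \varepsilon, a monopsonist sets the wage below the marginal revenue product of labor,

w = \frac{MRP_L}{1 + 1/\varepsilon},

so the less elastic the supply of, say, specialized machine learning engineers, the larger the markdown and the greater the share of surplus retained by the dominant employer rather than by workers.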

What is evident in this discussion of externalities of market power is the tradeoff between harms that might be readily identified under the consumer welfare standard (with its emphasis on efficiencies to consumers and quantifiable harms) and those more readily addressed under a competitive conditions standard, which seeks to prevent market power outright, even at the cost of forgoing an empirical demonstration of harm. One consideration for antitrust law in this sense is to weigh the efficiencies of scale economies against the potential social and political costs of economic power. Antitrust law, though currently focused on demonstrable harm to consumers, may also serve the additional goal of curtailing the political power of large firms. US markets, left to themselves, should not necessarily be expected to produce socially optimal outcomes, as firms are not directly incentivized to do so; ensuring those outcomes is the role of antitrust authorities and other regulators. However, the solution to specific externalized harms may be better found outside of antitrust, such as addressing regulatory capture through better systems for selecting regulators rather than by trying to dampen the market shares of lobbying firms.

While policymakers wrestle with standards development for mitigating emergent risks from frontier AI models, competition regulators must be equally prepared for the economic disruption and potential for market power to arise from those models. Though it is important that regulators do not quell the incentive to innovate technologically, an isolated focus on consumer prices may not account for economic rents afforded by market tipping or for harm externalized onto consumers. Even proponents of the consumer welfare standard acknowledge the existence of harms that are difficult to demonstrate empirically in a case-by-case welfare analysis, such as reduced incentives for innovation, monopsony power, and harm in zero-price markets that resists measurement.

Hence, future empirical research exploring labor markets, rates of innovation, and the monetization strategies of generative AI firms may shed light on the potential for market power itself to negatively affect consumer surplus, whether directly or indirectly through externalities on individuals as members of the public. Furthermore, research into the behavior firms undertake to ‘tip’ their respective markets may clarify whether that conduct is entirely efficiency-based or is better understood as a cost incurred to maintain monopoly.

VI. Conclusion

Generative AI has the potential to transform the US and global economies; some go so far as to consider it a general purpose technology, one akin to the steam engine, the railroad, or electricity. The goal of regulation around AI (including antitrust and other policy areas) should be to harness and incentivize that transformative potential for good, while mitigating potential harms. A key question for United States antitrust is whether an analysis of consumer welfare alone is sufficient to strike this balance, or whether the recent conversation around a competitive conditions standard should be embraced instead.

Answering that question is challenging because of the inherent tradeoffs between these approaches. At the heart of the contention between the Chicago School’s consumer welfare standard and the Neo-Brandeisian competitive conditions standard lies a disagreement over whether regulators should acknowledge a wider variety of potential harms or should instead prioritize avoiding ‘costly’ erroneous enforcement decisions. Underlying this tension (particularly as it relates to unilateral firm conduct) are differing beliefs about the relative harm caused by market failure, on the one hand, and government failure, on the other.

Despite rather long-standing presumptions that government failure tends to outweigh market failure, there is reason to believe that generative AI may at the very least shift these relative weights. Not only can generative AI reduce the technical barriers to government regulation of markets, but it may also exacerbate conduct by firms to establish and profit from economic rents. Even though regulators have been clear that traditional forms of antitrust harm (such as price fixing) remain illegal regardless of the nuances of digital markets, firms with greater capabilities for understanding and profiting from the constraints of consumer rationality risk driving a wedge between consumer surplus and consumer welfare. Despite its name, the consumer welfare standard may not capture such harms, given the standard’s reliance on efficiency- and surplus-based reasoning.

Potential harms in the upstream market for AI development relating to decreased competition under market tipping may fall even further outside of the current scope of consumer welfare-based antitrust. Specifically, an analysis of competitive prices and outputs alone may fail to address how market power itself contributes to externalized harms, even if that market power was initially achieved through greater efficiency. The promise of economic rent is what incentivizes today’s artificial intelligence firms to take on the financial risk of competing for the future of machine learning technology. While that promise is necessary to fuel innovation, the potential durability of those rents can discourage future incentives to innovate. Even while these types of harm are acknowledged in the economics literature, a core question is whether or not antitrust is the proper venue to address those harms. Those who want to expand the umbrella of antitrust may argue that government officials are necessarily “better guardians of the public interest than self-serving economic units,” even while acknowledging these agencies are imperfect. The counter argument, in favor of the simplicity of the consumer welfare standard, seeks to limit the discretion of antitrust enforcers, emphasizing a tradeoff between antitrust agencies being overly simplistic in their scope and becoming distracted by discretion.

To encapsulate both approaches, the aim of such regulation should not be to prevent firms from engaging in short-term rent-seeking behavior altogether, but rather to ensure that these rents do not afford lasting market power. Just as antitrust authorities are seeking to understand the role of market power in today’s technology markets, valuable lessons can be learned regarding competition for tomorrow’s markets in AI technology. If, indeed, the market for generative AI ‘tips’ toward any particular firm and a policy of strict market regulation is enacted, that regulation itself can invite rent-seeking behavior. Hence, rather than pursue strict price regulation, a more effective approach may be to strategically target bottlenecks (such as exclusivity deals) that disproportionately inhibit competition. To avoid forgoing the potential efficiencies of scale and incentives for innovation in the market for generative AI, regulators may seek to prevent such bottlenecks in the three key market inputs: talent, data, and computational resources.

Although the existing literature lays out a landscape of potential harms (both in the upstream market for AI development and in downstream markets where AI is deployed), future research is necessary to evaluate the existence and extent of such harm. Research on price-setting strategies in online marketplaces may help illuminate the proliferation of dynamic pricing algorithms and identify patterns of potential ‘algorithmic’ price fixing and price discrimination. With regard to behavioral discrimination, although there are interesting value-related questions concerning the relationship between surplus and welfare, more promising areas of research may seek to understand the impact of generative AI-based marketing on consumer purchasing decisions and the relationship between quality improvements and customer acquisition. With regard to ‘market tipping,’ separate research may be warranted to understand the rates of innovation by incumbent versus entrant technology firms and whether highly concentrated markets still experience competitive pressure through ‘competition for the market.’
