Sources and notes:
By serving the whole market without competition from other firms, this firm maximizes profit by charging each consumer exactly that consumer’s willingness to pay (WTP).
By forcing competitors without scale out of the market, a “progressive” pricing firm in scenario (2) can effectively charge at each individual consumer’s willingness to pay. From an efficiency perspective, monopolistic first-degree price discrimination, as shown here, is welfare-enhancing. However, price discrimination in this manner also results in a distributional effect, maximizing surplus for producers and eliminating surplus for consumers.
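The surplus transfer in scenario (2) can be sketched numerically. The toy market below (all willingness-to-pay figures and the marginal cost are hypothetical illustrations, not drawn from Figure 1) compares a single profit-maximizing uniform price against perfect personalized pricing:

```python
# Illustrative welfare accounting for first-degree price discrimination.
# All numbers are hypothetical.

MARGINAL_COST = 4

# Hypothetical WTP for ten consumers, sorted high to low.
wtp = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]

def uniform_pricing(wtp, cost):
    """Search candidate prices for the single profit-maximizing price."""
    best = None
    for price in wtp:
        buyers = [w for w in wtp if w >= price]
        profit = (price - cost) * len(buyers)
        consumer_surplus = sum(w - price for w in buyers)
        if best is None or profit > best[0]:
            best = (profit, consumer_surplus)
    return best  # (producer surplus, consumer surplus)

def personalized_pricing(wtp, cost):
    """Charge each consumer exactly their WTP; serve all with WTP >= cost."""
    served = [w for w in wtp if w >= cost]
    return sum(w - cost for w in served), 0  # consumer surplus fully extracted

u_ps, u_cs = uniform_pricing(wtp, MARGINAL_COST)
p_ps, p_cs = personalized_pricing(wtp, MARGINAL_COST)

# Total surplus weakly rises under personalized pricing (every
# value-creating trade occurs), but the entire gain accrues to the producer.
print(u_ps + u_cs, p_ps + p_cs)
```

Under the uniform price, some consumers retain surplus and some value-creating trades go unserved; under personalized pricing, total surplus rises because every consumer valued above cost is served, yet consumer surplus falls to zero – the distributional effect described above.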
Finally, consider scenario (3), in which many firms conduct progressive pricing in the manner shown in Figure 1. There are a number of circumstances in which this scenario is unlikely to occur, such as markets where consumers can use arbitrage (to avoid paying supracompetitive prices) or markets where products are not differentiated (as price discrimination requires some degree of market power to prevent competitors from stealing market share with discounting). Although this scenario is theoretically possible, real-world data from pricing experiments indicates that the welfare-enhancing effects of price discrimination are not shared equally by all consumers, even if some individuals benefit. A 2017 study analyzing the welfare impacts of machine learning-generated personalized pricing found that although 60 percent of consumers benefited from personalized pricing and firm profitability increased by 55 percent, total consumer surplus actually declined by 23 percent. While it is not yet clear how the addition of generative AI technology to personalized pricing strategies will impact welfare, this experimental evidence indicates that the optimistic accounts of so-called ‘progressive’ pricing do not accurately depict welfare losses, particularly for consumers with higher willingness to pay.
Non-obvious forms of price discrimination or personalization are likely to be even more difficult to regulate. For example, diverting consumers to higher-priced products through targeted advertising or promotional programs may achieve the same effect as personalized pricing and would be much more difficult to prove empirically, even though those consumers technically do have access to lower-priced products. Price discrimination may be especially inefficient and potentially harmful to consumers when willingness to pay depends in part on product misperceptions. Another strategy firms may employ is ‘price skimming,’ in which firms set prices initially high to target consumers with higher willingness to pay before gradually lowering prices to meet further demand.
Although there are plausible mechanisms for personalized pricing to reduce consumer surplus, price discrimination is considered legal in many contexts. Price discrimination thus may be understood more as a symptom of market power than a cause of it per se. The primary role generative AI is likely to play in this sense is to increase the efficacy of personalized advertisement and marketing, affording firms the market power to then pursue profit through price discrimination. The exact extent to which marketing and customer persuasion are considered to lessen competition depends on certain market conditions and assumptions regarding the limits of consumer rationality. Personalized advertising by well-known brands or in markets with greater concentration may serve to lessen competition, unlike advertising that increases consumer awareness of substitute products.
Still, whether or not that conduct (the applied knowledge of consumer preferences in advertising) is problematic depends on value judgments and the core relationship between consumer surplus and consumer welfare. Indeed, if individual biases are to be equated with revealed preference as a socially optimal outcome, then engaging in addictive behavior would consequently be considered socially optimal as well. Revealed preference is a powerful and informative concept, but relaxing our assumptions around consumer rationality opens up additional pathways toward consumer harm that may not be accounted for by the traditional conception of consumer welfare.
As the information asymmetry between consumers and producers grows along with the deployment of continually more advanced machine learning models, US regulators need to be equipped to understand how firms can use those models to dampen competition, increase market share, and stimulate demand. To the extent that antitrust standards do not account for consumer biases or bounded rationality, other regulatory avenues such as consumer protection may fill this void, particularly as such concerns arise in contexts such as discriminatory insurance premiums based on predisposition for health conditions. Regulation of the use of data and personalization may also extend into labor policy discussions as employers may move towards greater personalization in salary and compensation.
To elucidate the role of generative AI in personalized marketing, future research on the relationship between marketing and the price elasticity of demand may be tailored specifically towards advertising that is the product of generative models. Furthermore, research on real-world dynamic pricing may demonstrate the extent to which consumers with higher willingness to pay actually pay higher prices for the same products or services through strategic behavior by firms, such as price skimming. An additional area of research may be to explore the relationship of advertising and consumer acquisition spend to product quality and price. If firms seek differentiation amongst specific consumers with high willingness to pay, they may be willing to incur greater costs in customer acquisition that do not directly translate to increases in product quality and ‘competition on the merits.’
V. Competition Concerns in the Market for Generative Artificial Intelligence
While there are evident challenges to address in regulating competition in downstream markets using emerging generative AI technology, there are also potential challenges to address in the upstream market (developers of generative AI products). These challenges may prove to be even more elusive under the consumer welfare standard than those discussed previously in this article. As I will discuss below, the market for generative AI has a number of characteristics that increase the likelihood of market concentration.
Additionally, although firms in this market are likely to gain efficiencies from scale, there are also a number of potential harms that may stem from any resulting market concentration. This dynamic highlights an important balance in antitrust policy between incentivizing rent-seeking firms to engage in productive innovation today and preventing those rents from enduring and thus distorting the landscape of competition tomorrow.
V.A. Precursors of Concentration in the Market for Generative AI
The development of generative AI models primarily depends on three key inputs: data, computational resources, and high-skill labor.
In today’s virtual economies, data is constantly being recorded and stored in the hopes of future monetization or sale to third-party data vendors. There are many methods by which firms may acquire data, such as through information collected in the normal course of business, web scraping, offering a new service, hiring people (e.g., collecting data through Mechanical Turk), purchasing data, accessing public or government data, or even using computer-generated data. Data is a non-rivalrous good, meaning that one firm’s access to data does not exhaust that data’s usage; it can subsequently be shared or sold to other firms in an industry without diminishing its utility for the first firm. However, if a firm gains a competitive advantage through differential access to data and owns the rights to a given proprietary source, data is likely to be treated as an excludable good, with rivals prevented from accessing it.
Furthermore, just because data is plentiful and used by a variety of firms in different contexts, this does not mean that all data is created equal. Specifically, data is differentiated by a number of factors collectively referred to as the “Five V’s of Big Data”: volume (the amount of data), velocity (the speed at which data is collected or delivered), veracity (the accuracy or reliability of data), variety (the different types of data recorded), and value (the ability of data to be converted into a valuable monetary resource). Not only are firms with greater scale far better positioned to record a larger volume of data, at more frequent intervals, and of a greater variety than their smaller rivals, but those firms are also likely to develop greater expertise in identifying data with more veracity and value. These factors make it likely that data serves as a barrier to entry in competition for the development of generative AI models. Current litigation surrounding the use of copyrighted materials in the training of generative AI models (regardless of whether or not these claims will be resolved as fair use) confirms the extent to which firms will go to avoid paying the otherwise prohibitive costs of purchasing data outright or developing data pipelines of sufficient quality and scale to compete in today’s generative AI market.
Generally speaking, in order to optimally increase the performance of a given AI model, one must increase the volume of training inputs (data) and the number of training iterations (computation). Indeed, the numbers from today’s most prominent generative AI model, OpenAI’s ChatGPT, corroborate this reality: it reportedly costs close to $1 million per day to run ChatGPT. The upfront cost of acquiring training data, combined with the recurring costs of maintaining data infrastructure and running the AI models, may be prohibitive for many firms, requiring significant scale before a generative AI firm can attain a marketable and profitable product. Indeed, the top eight AI startups have each raised over $100 million in their attempts to vie for the current and future markets of generative AI.
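This data-and-compute relationship is formalized in the empirical scaling-law literature; in one widely used parametrization (that of Hoffmann et al.’s 2022 compute-optimal training study), model loss falls as a power law in both parameters and data:

```latex
% Loss L as a function of model parameters N and training tokens D,
% per the Hoffmann et al. (2022) parametrization: E is the irreducible
% loss; A, B, \alpha, \beta are fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because loss declines only polynomially in N and D, each successive improvement requires multiplicatively more data and compute, which is one reason these inputs function as barriers to entry.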
Although a robust data pipeline and significant computing power are vital to the success of any generative AI model, the scarcest resource may prove to be the labor needed to develop and fine-tune the model. Talent is a particularly challenging barrier for new firms to overcome for two reasons. First, firms with greater scale are more capable of recruiting top-level AI talent, whether through acquisition or by offering higher salaries and reputational advantages than smaller firms can. Second, the supply of talented machine learning engineers cannot quickly respond to changes in demand: engineers typically go through approximately a decade of post-secondary schooling to attain a PhD in computer science, machine learning, or data science.
Furthermore, as the advancement of AI models continues to improve the labor productivity of programmers and computer science engineers, it is possible that these returns will be concentrated amongst those firms at the forefront of developing generative AI models. Even with the current state of generative AI technology, analyses of customer support agents have shown labor productivity increases of 14 to as high as 34 percent when using generative AI. Although these copilot and coding assistant technologies can, on the one hand, flatten the distribution of labor productivity – with the greatest proportional benefit accruing to novices – on the other hand, skilled practitioner guidance is likely necessary to minimize risk and maximize productivity.
As the development of so-called weak AI models (those with less generalized intelligence) gives way to strong AI models (those with generalized intelligence similar or superior to humans’), these models may reach a point where self-improvement is possible with less and less human intervention. Depending on the cost of computation, it is possible that such improvements will be far cheaper than hiring additional human machine learning engineers. Similar to data, such improvements can be considered non-rivalrous but excludable: all such productivity improvements could theoretically be shared by rival firms, but a firm owning a given generative AI model can delay access to or prevent other firms from using proprietary versions of coding assistants. While in an optimistic scenario the ability of an AI model to improve its model parameters, training and validation processes, and data acquisition would allow for widespread productivity gains, it is also possible that such gains will be concentrated amongst only a few leading models.
In addition to limitations in accessing data, compute, and labor, it is possible that the regulation of generative AI models will serve as an additional fixed cost and barrier to entry for new firms and benefit incumbents. Specifically, given the disruptive nature of generative AI technology and far-reaching impacts for national security, the legislative and executive branches of the US government have initiated a number of efforts to manage risk, regulate, and monitor frontier AI development. While such efforts are likely helpful from a national security and even existential risk standpoint, they also contradict the rather long-standing emphasis on open-source in the computer science and AI community. Open source can certainly serve as a buttress to competition in AI markets, but regulation may curtail these competitive features due to overriding policy concerns (or firms may even revert to proprietary ownership after attaining sufficient efficacy with their open-source models).
In summary, the current market for generative AI appears to show dynamism, with a number of firms vying to develop their own generative models, but such competition may diminish as the market matures. Although technology firms and venture capitalists alike are currently placing their bets on which models are most likely to succeed in tomorrow’s markets for generative AI (whether it be Microsoft and OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, or other models), it is possible that only one or very few firms will profitably attain scale due to the constraints on labor, data, computation, and regulation outlined above. Even today, only six firms (each developing multiple models) have been able to bring cutting-edge generative AI models to market, overcoming the extremely high cost of computation and frequently acquiring multiple startups in order to meet talent needs.
Due to the high start-up costs these firms face and the low or even near-zero marginal cost of each additional user of a given generative AI model’s software, the market for generative AI is likely to operate with economies of scale. Such economies of scale (and their correspondingly high market concentrations) present a challenge to regulators in that efficiencies of scale are frequently surplus-enhancing from the perspective of the consumer welfare standard, but also correspond with an increase in market concentration that is distinctly opposed to the competitive conditions standard. In order to understand the procompetitive and anticompetitive implications of scale in the market for generative AI, I provide below an overview of economies of scale and their procompetitive implications, as well as potential harms and anticompetitive effects that may arise from such ‘natural monopolies.’
V.B. Understanding Economies of Scale
In traditional economic thought, there is no long-term economic profit in perfectly competitive markets. Without market power, firms are considered price takers for whom price is driven down to the level of cost due to competition from other suppliers. Assuming firms can, in the long run, switch between markets to those with higher returns, even if a market initially does not have many competitors, eventually firms switching into more profitable markets causes prices to reach equilibrium with zero long-run profit. Even if firms have significant market share, they can still be considered price takers so long as fear of entry incentivizes them to keep prices down. Competition authorities in Europe seek to mandate such price competition through regulation of so-called ‘excessive pricing,’ whereas US antitrust authorities do not.
In practice, it is rare for markets to function in a perfectly competitive manner such that price is equivalent to cost. Firms rationally avoid commoditization and seek economic rents, differentiating their products to appeal to various consumer preferences, marketing heavily, and investing in new technologies to reap the benefit of intellectual property protections. In markets where producers differentiate themselves and there are many sellers, economists refer to this imperfect competition as monopolistic competition.
While short-term profits can be seen under monopolistic competition, long-run profits may result under oligopolies or monopolies (markets with one or very few firms). The existence of such economic rents for oligopolies or monopolies is afforded through barriers to entry (preventing other firms from entering the market). These barriers to entry can come from control over a key resource, government protections, or ‘natural’ barriers to entry such as economies of scale that lead markets to tend towards natural monopoly. Even without control over a key resource, a natural monopoly is similarly protected from price competition through the lack of entry by rivals. In a true natural monopoly, the incumbent firm is only able to generate long-run profit due to its sufficient scale; the entrance of any additional firms would constrain the scale of both the incumbent and the entrants, thus preventing the entrant from earning a profit.
Whether or not a market exhibits classical economies of scale and tends towards natural monopoly depends on the magnitude of the barriers to entry and the marginal cost of production relative to the total quantity demanded in the market. If the barriers to entry are high enough such that the scale required to minimize costs is equal to or larger than available consumer demand, only one firm is capable of profitably attaining scale, resulting in a natural monopoly. Traditional examples of natural monopoly include public utilities where the cost of one firm building infrastructure is so prohibitively expensive that the cost can only be offset by a firm’s ability to subsequently earn monopoly profit. Given that the marginal cost of serving additional customers is so low, goods and services sold by various natural monopolies can be considered non-rivalrous, but excludable. The analogy used by plaintiffs in some technology competition cases involving network effects is of a ‘gatekeeper’ firm charging a toll to others and discouraging conduct that would lead to rival ‘bridges.’
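The natural-monopoly condition can be stated in a minimal cost-function sketch (a stylized model with a fixed cost F and constant marginal cost c, not specific to any market discussed here):

```latex
% With cost function C(q) = F + cq, average cost AC(q) = c + F/q
% declines in q, so total cost is subadditive: one firm serving the
% whole market demand Q is cheaper than two firms splitting it.
C(Q) = F + cQ \;<\; 2\,C\!\left(\tfrac{Q}{2}\right) = 2F + cQ
```

When F is large relative to the margin available at market demand Q, only a single firm can cover its fixed cost, which is the traditional public-utility case described above.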
Assessing whether barriers to entry are caused by pro-competitive efficiencies or anti-competitive behavior, and relatedly whether or not the resulting profits indicate long-run positive economic returns, poses a difficult challenge to US regulators. US firms often invest heavily in fixed costs such as labor and technology infrastructure for many years prior to attaining profitability, making it hard to assess the real profit margin on present-day goods. The risk of these investments must also be considered, given the probability that a firm’s (often) billions of dollars in research and development will not result in prolonged future margins to recoup lost profit. If these “positive economic returns” or sustained profits do not approach zero in the long run, US antitrust authorities can investigate whether or not barriers to entry are caused by anticompetitive conduct. However, as discussed above, non-anticompetitive behavior can also facilitate natural monopoly. With economies of scale, the high market share of a firm may be entirely efficiency-based, and it is possible that no new entrant would rationally invest to the minimum scale needed to become profitable. To the extent that a firm does act as a ‘gatekeeper,’ defendants may claim that restraints of trade by the dominant firm are integral to maintaining service quality and maximizing output. Although the threat of entry may exist, the inability of entrants to profitably attain scale allows incumbent firms to retain profits proportional to this constrained threat of entry.
There are various forms of scale effects that can result in such ‘natural monopoly.’ These include (1) classical supply-side returns to scale, in which average costs decrease with greater production volume, (2) demand-side returns to scale, such as consumers opting to use a given platform for its network effects, and (3) learning-by-doing, in which improvements in quality and reductions in cost are attained through prior business and subject-matter experience. It is likely that all three of these forms of economies of scale apply to varying degrees in the market for generative AI models.
Generally speaking, the high costs associated with acquiring data, running training iterations, and hiring machine learning engineers to fine-tune model parameters are fixed costs needed to create an effective generative model before selling it to businesses or consumers. The marginal costs associated with the actual sale of the generative model (whether on a per-query or subscription basis) are almost entirely driven by the cost of compute for serving those additional queries – meaning that increasing scale tends to drive down the average cost of production, resulting in classical supply-side returns to scale. It is also likely that network effects cause demand-side returns to scale in the market for generative AI. As more users use a given generative AI platform, those users contribute to a greater understanding of optimal “prompt engineering” that can be shared with other users, and model developers can separately use data from those customer interactions to run additional training iterations that improve model parameters (so-called “data network effects”). Finally, those AI firms that bring a successful generative AI product to market are likely to learn from those experiences in a manner that improves their ability to bring successive, superior products to market.
If a barrier to entry is efficiency-based, under the consumer welfare standard, there is an unclear path to demonstrate harm to consumers. An incumbent firm operating at scale simply provides preferable goods to consumers at lower prices than competitors without scale and, as a result, wins increasing market share. Even if such firms are able to charge supracompetitive prices (or above long-term average cost), analyses of what prices should be under a natural monopoly (without the EU equivalent of “excessive pricing”) are unlikely to take effect, given that the US antitrust regime prioritizes minimizing the cost to consumers from erroneous enforcement decisions. However, as I will elaborate in greater detail in the following section, such a view may fail to take account of other forms of harm or economic rents that may arise once firms in the market for generative AI attain sufficient market share, as well as the inefficiencies caused by the costs of maintaining a monopoly.
V.C. Theory of Harm Despite Economies of Scale
Although there are certain factors (as discussed in the preceding sections) that indicate market concentration may be likely as the market for generative AI matures, it is alternatively possible that many generative AI products will profitably attain scale – whether it be Meta’s LLaMA, Google’s Gemini, Anthropic’s Claude, OpenAI’s ChatGPT, or other startup models yet to arise. Such a scenario may be more likely to occur if products differentiate themselves according to business needs (such as specialization within a given use case, or prioritizing business-to-business over business-to-consumer applications) or consumer preferences (such as branding around product safety, data privacy, or reduced ‘jailbreaking’ risk).
However, no foundation generative AI model has yet reached this point of profitable scale. Some firms even currently report losses as large as $500M annually, with an expectation that these losses will continue to grow before technology firms are eventually able to turn a profit on generative AI. Hence, in the alternative scenario (where the aforementioned barriers to entry tend towards natural monopoly), the dominance of such a firm introduces the risk of anticompetitive conduct and other externalized harm, even if this market share was initially won through greater efficiencies.
In this section I will discuss three types of harm that may arise from such market concentration itself: (1) Increased risk of abuse of dominance and the use of anticompetitive conduct to maintain market share, (2) reduced incentive to innovate, maintain product quality, or maintain competitive prices for firms who have ‘tipped’ their respective market, and (3) externalities of market power.
V.C.1. Abuse of Dominance to Maintain Market Share
Even if a firm initially wins market share from competitors through greater efficiencies or providing greater value to consumers, incumbency status may better position such a firm to deter future competition. Such unfair methods of competition may take a variety of forms, including bundling and tying, exclusive dealing or partnerships, self-preferencing, and acquisitions to stifle competition. This behavior may be particularly problematic when a dominant firm leverages its high market share in one market to influence the sale of its products or services in another market.
As an illustrative example, firms with existing market share in one technology market, whether it be a generative AI product or an earlier technology, may bundle the sale of (or simply set as the default) other complementary products such as cloud computing resources, image or audio-based generative AI products, or predictive AI products. In addition to these more traditional methods of anticompetitive conduct, future incumbent firms in generative AI (and advanced technology products more broadly) have access to new methods of disadvantaging rivals that may be more difficult to adjudicate with traditional antitrust analysis. For example, firms may deter innovation from rivals by reducing interoperability with rival products and even alter consumer expectations about the value of rival products by introducing new products or marketing. An additional emergent risk for deterring entry by rivals is the possibility for incumbent firms to simply hire the entirety of a rival’s labor force to disable the threat of competition. Such a risk is especially relevant to the market for generative AI given the aforementioned importance of talent in the development and maintenance of frontier AI models. If such strategies to maintain a firm’s incumbency advantage and deter entry are costly to implement, this conduct would be considered a welfare cost of monopoly – diminishing the efficiencies that led to natural monopoly in the first place.
Furthermore, firms may use methods of intertemporal price discrimination to impose switching costs. By offering new customers discounted prices and raising prices for existing customers, firms can effectively target their most price-sensitive consumers and prevent competitors from profitably gaining market share. The ability of firms to impose such switching costs and achieve customer ‘lock-in’ stems from consumers’ inability to fully anticipate the future impact of switching costs and from consumers underestimating the extent to which they will be willing to search for competing products in the future. Such consumer ‘lock-in’ effects benefit incumbents and harm rivals. These effects may also incentivize producers to price well below cost, competing vigorously in customer acquisition before subsequently raising prices on ‘locked-in’ consumers, a tactic that may be considered a form of predatory pricing.
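A minimal two-period sketch illustrates the recoupment logic (all prices, costs, and the switching cost are hypothetical):

```python
# Toy two-period model of intertemporal price discrimination with
# switching costs; all numbers are hypothetical illustrations.

COST = 5            # per-period cost of serving a customer
RIVAL_PRICE = 6     # competitive price available from rivals
SWITCH_COST = 3     # cost a locked-in customer bears to switch

# Period 1: the firm discounts below cost to win the customer.
intro_price = 4

# Period 2: a locked-in customer stays as long as the firm's price does
# not exceed the rival's price plus the switching cost.
lock_in_price = RIVAL_PRICE + SWITCH_COST

profit = (intro_price - COST) + (lock_in_price - COST)
# The period-1 loss is recouped by the period-2 margin on the
# locked-in customer, yielding positive total profit despite
# below-cost entry pricing.
print(profit)
```

The period-1 loss from below-cost pricing is recouped in period 2, when the switching cost lets the firm price above the rival without losing the customer – the pattern that a short-term consumer welfare analysis may miss.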
What makes this behavior even more challenging to assess for regulators is that firms may choose to pursue future profits not only through increased prices, but also through lower quality standards and increased margins through cost cutting. In the market for generative AI, these changes in product quality may be particularly difficult for regulators or even consumers to observe in some instances, such as the unregulated sale of consumer data to third parties. It may be additionally challenging for regulators to decipher the competitive implications of customer ‘lock-in’ when firms compete in multiple markets. For example, if generative AI firms offer below-cost business-to-consumer products, these firms may afford themselves economic rent in the sale of business-to-business AI products (assuming such products seek to market the advantages of a highly active base of end-consumers).
In the US, market power demonstrated through prices that are systematically higher than costs is not a violation of antitrust law, but rather such firms must have behaved illegally in the pursuit of such market power. Although harms from behavior such as bundling, tying, and adversarial acquisitions may be found illegal and preventable through existing antitrust law and the consumer welfare standard, disadvantaging rivals through increased switching costs may not, given the difficulty in differentiating between short-term economic rents won through greater innovation and long term economic rents won through disadvantaging rivals.
Generally speaking, the US consumer welfare regime has not recognized predatory pricing as a viable practice for firms, and Chicago School economists often contend that predatory pricing is irrational due to the inability of firms to recoup lost profits caused by alleged predatory pricing. However, given that generative AI firms – and many firms in the digital technology sector – are willing to incur massive operating losses to attain sufficient scale and profitability, it is possible that ‘competition for the market’ (as discussed in greater detail below) incentivizes a form of predatory pricing that is not accounted for in a short-term analysis of consumer welfare.
V.C.2. Reduced Competitive Incentive to Innovate
Economists and European competition authorities have used the concept of ‘market tipping’ to describe the tendency towards natural monopoly that arises when a given firm attains sufficient market share over rivals, a phenomenon that is particularly common in competition between rival systems of integrated product and service offerings. To the extent that long-term economic rents are afforded to incumbent firms in markets that tend towards natural monopoly, additional harm to consumers may arise as incumbents face less competitive pressure to innovate, improve quality, or lower prices.
As generative AI firms gain greater economies of scale (as discussed above) and as these AI products become integrated amongst various existing search engines, chatbots, and other web-interface systems, this may lead to market tipping in which a marginal, initial competitive edge results in one firm substantially outperforming rivals (in a manner that is not proportional to continued innovation and investment). Regulation of natural monopoly with this concept of market tipping may be difficult due to a combination of short-term welfare-enhancing effects and the possibility of long-term welfare harm. Specifically, prior to market ‘tipping,’ consumers are likely to experience positive surplus increases from network effects and economies of scale, but the incentive to improve upon these products or charge lower prices may be reduced in the long term.
Even without “problematic” anti-competitive conduct by the incumbent firm, it is possible that network effects, switching costs, information asymmetries and behavioral biases hinder markets from working properly. In some markets with scale economies, only one firm is capable of earning positive profits at a given point in time, leading to so-called “competition for the market.” In such a scenario, although competition may exist if a competitor threatens to overtake the entire market, if the incumbent firm is difficult to replace then competition concerns may be reflected in reduced innovation, lower quality, and higher prices than may exist in a but-for world. Factors that make this incumbency advantage more persistent include the offering of free essential services, aforementioned network effects, the capability for data-enabled learning, and the prevalence of single homing (whether due to consumer homogeneity or lack of product differentiation).
After such market tipping occurs, the question arises as to how the incumbent firm’s innovation may differ, in both degree and type, from a competitive equilibrium. As for degree, some academic literature supports the notion that incumbent firms invest less in innovation than challenger firms, and that challengers are more likely to pursue disruptive, higher-value innovations than incumbents. Other research seeks to identify the types of innovation that a monopolist (in this case, a natural monopolist) would be incentivized to undertake. The literature indicates that an incumbent firm with market power is incentivized to engage in process innovations (i.e., increasing margins through reduced production costs), but that the threat of entry may reduce its incentive to pursue product innovations.
Regardless of the types and degree of innovation pursued (even across both product and process), it is important to consider the incentive for incumbent firms first to innovate and then to pass the benefits of innovation on to consumers. Even among process innovations, which would theoretically increase total surplus through reduced costs, some research shows that consumer surplus may fall if monopolist firms constrain output. Further, to quell innovation, firms may identify and acquire startups before they attain sufficient scale to challenge the incumbent. In the case of so-called ‘killer acquisitions,’ where firms purchase another company in order to decommission a competing product, consumers receive none of the benefits of the more dynamic innovation typically undertaken by startups. Innovation under monopoly may still be incentivized, but the returns on innovation may simply replace rents for the incumbent firm rather than being passed on to consumers. Despite the possibility of a reduced incentive to innovate under economies of scale, empirically demonstrating how this incentive translates into consumer harm is likely to be very difficult because of the countervailing efficiencies that result in market tipping in the first place.
However, a number of factors may mitigate this tipping effect and thus reduce the impact of market power on future innovation. These include multi-homing – the use of multiple rival services, which forces providers to compete on cost and denies any single service unrivaled access to consumer data – and cases where consumers have a preference for heterogeneity. While consumers of generative AI products may continue to multi-home as the market matures, separate research has found that the increased prevalence of artificial intelligence-based firms and technology platforms has corresponded with an increase in economic rents, a trend that may continue with the advancement of generative AI products.
Alternatively, regulation of markets that tend towards natural monopoly or ‘market tipping’ may take a variety of forms. Consumer protection can plausibly address information asymmetries between firms and consumers (which facilitate higher switching costs), while mandated data sharing or interoperability of competing products may be sufficient to address factors that tend towards concentration. Where firms charge consumers higher prices – or fail to improve quality and reduce costs – based on market power facilitated through natural economies of scale, regulators face a difficult question of how to adjudicate consumer harm without the EU concept of ‘excessive pricing’ (even though long-run margins may not converge to zero).
Direct forms of price regulation may now be possible to enact effectively due to increasing sophistication in data analytics, though such policies are likely to distort industry incentives: they run a significant risk of losing the very efficiencies that made market tipping possible in the first place. In a market that ‘naturally’ results in one or two firms operating at scale, defining a but-for world with optimal innovation and competition relies on hypothetical predictions about future states of the world. Without evidence of illegal conduct directly facilitating market power, US antitrust authorities may be limited to addressing specific bottlenecks in competition (such as mandating that platforms offer a choice among competing products), but relieving those bottlenecks may be crucial to incentivize entry by rival firms.
Indeed, encouraging entry into frontier technological markets is likely to have a particularly important effect on innovation. Studies using realized foreign firm entry as a proxy for entry threat have found that incumbent firms in frontier technology sectors innovated more as entry increased, while firms in less innovative industries innovated less, because those less technological firms were unable to survive entry and hence reap the benefits of innovation.
V.C.3. Externalities of Market Power
In addition to consumer harm from anticompetitive conduct used to maintain market power, and from the reduced incentive to innovate and compete on price under natural monopoly, additional harm from market concentration may be externalized. Under the consumer welfare standard, the pursuit of empirical demonstrations of harm may cause some costs and benefits to be weighted preferentially over others; such an analysis is likely to ignore (or at the very least, substantially discount) many externalities. One commonly discussed externality of market power is regulatory capture, or the outsized influence of firms with high market concentration on regulatory and political decision-making. Indeed, research has shown that greater market concentration (such as from successful mergers) generally results in higher rates of lobbying and campaign expenditure by those firms.
In addition to the potential for subversive influence on regulation, if firms reach a large enough scale, they may be deemed essential or ‘too big to fail,’ as was the case in the banking industry during the 2008 financial crisis. Recent research has examined the possibility of so-called ‘system-critical’ firms in industries outside of finance, such as electricity markets. High market concentration in generative AI products (which are likely to have increasing application across many future industries, including areas of national security relevance) may similarly prevent regulators and politicians from allowing the natural economic process of firms going bankrupt and being replaced by more efficient or financially robust rivals. Bailing out essential businesses prevents short-term consumer harm, as discussed above, but such regulatory behavior must also consider the long-term consequences and incentives set across industries.
Furthermore, the consumer welfare standard may fail to fully encapsulate harm from monopsony power – the exercise of monopsony power may increase consumer surplus, but at the expense of total social surplus. Dominant firms, particularly in a high-skill technical market such as generative AI, may develop monopsony power that impacts both labor and capital markets; the economic literature indicates that monopsony power can cause income inequality in both markets, as a greater share of labor market surplus accrues to the monopsonist firm, increasing aggregate profits in proportion to shareholdings. Although these distributional effects are outside the current scope of US antitrust, the economic literature has demonstrated that income inequality is likely to have negative implications for the broader economy and may dampen national economic growth.
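The wedge between the firm’s surplus and total social surplus under monopsony can be illustrated with textbook arithmetic; the linear supply and marginal revenue product curves below are illustrative assumptions, not drawn from the sources above. The monopsonist hires fewer workers at a lower wage, its own surplus rises, and total surplus falls.

```python
# Stylized monopsony arithmetic (illustrative parameters only).
# Inverse labor supply: w(L) = L. Marginal revenue product: MRP(L) = 12 - L.
# The monopsonist's marginal cost of labor is d(wL)/dL = d(L^2)/dL = 2L.

def total_surplus(labor: float) -> float:
    """Area between MRP and the supply curve up to chosen employment:
    integral of (12 - L) - L = 12 - 2L from 0 to `labor`."""
    return 12 * labor - labor ** 2

def firm_surplus(labor: float, wage: float) -> float:
    """Integral of MRP minus the uniform wage bill: ∫(12 - L) dL - w*L."""
    return 12 * labor - labor ** 2 / 2 - wage * labor

# Competitive benchmark: wage equals MRP, so L = 12 - L  ->  L = 6, w = 6.
l_comp, w_comp = 6.0, 6.0

# Monopsony: marginal cost of labor equals MRP, so 2L = 12 - L  ->  L = 4,
# with the wage read off the supply curve: w = L = 4.
l_mono, w_mono = 4.0, 4.0

deadweight_loss = total_surplus(l_comp) - total_surplus(l_mono)
print(w_comp, w_mono, deadweight_loss)  # wage falls from 6 to 4; DWL = 4
```

In this sketch, employment falls from 6 to 4 and the wage from 6 to 4; the firm’s surplus rises from 18 to 24 while total surplus falls from 36 to 32 – matching the point that monopsony shifts surplus toward the firm at a net social cost that a consumer-price analysis would miss.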
Evident in this discussion of externalities is a tradeoff between harms readily identified by the consumer welfare standard (with its emphasis on efficiencies to consumers and quantifiable harms) and those more applicable to a competitive conditions standard, which seeks to prevent market power outright, even at the cost of forgoing empirical demonstration of harm. One consideration for antitrust law is thus how to weigh the efficiencies of scale economies against the potential social and political costs of economic power. Antitrust law, though currently focused on demonstrable harm to consumers, may also serve the additional goal of curtailing the political power of large firms. US markets (without intervention) should not be expected to produce socially optimal outcomes by default, as firms are not directly incentivized to do so; ensuring such outcomes is the role of antitrust authorities and other regulators. However, the solution to specific externalized harms may be better found outside antitrust – for example, addressing regulatory capture through better systems for selecting regulators rather than by trying to dampen the market shares of lobbying firms.
While policymakers wrestle with developing standards to mitigate emergent risks from frontier AI models, competition regulators must be equally prepared for the economic disruption and potential for market power arising from those models. Though it is important that regulators not quell the incentive to innovate, an isolated focus on consumer prices may not account for economic rents afforded by market tipping or for harm externalized onto consumers. Even proponents of the consumer welfare standard acknowledge harms that may be difficult to demonstrate empirically in a case-by-case welfare analysis, such as reduced incentives for innovation, monopsony power, and the measurement of harm in zero-price markets.
Hence, future empirical research into labor markets, rates of innovation, and the monetization strategies of generative AI firms may illuminate the potential for market power itself to reduce consumer surplus, whether directly or indirectly through externalities on individuals as members of the public. Furthermore, research into the conduct firms use to ‘tip’ their respective markets may reveal whether that conduct is entirely efficiency-based or is better understood as a cost incurred to maintain monopoly.
VI. Conclusion
Generative AI has the potential to transform the US and global economies; some even consider it a general-purpose technology, akin to the steam engine, the railroad, or electricity. The goal of regulation around AI (including antitrust and other policy areas) should be to harness and incentivize that transformative potential while mitigating potential harms. A key question for United States antitrust is whether an analysis of consumer welfare alone is sufficient to strike this balance, or whether the recent conversation around a competitive conditions standard should be embraced instead.
Answering that question is challenging due to the inherent tradeoffs between these approaches. At the heart of the contention between the Chicago School’s consumer welfare standard and the Neo-Brandeisian competitive conditions standard lies a disagreement over whether regulators should acknowledge a wider variety of potential harms or should instead avoid ‘costly’ enforcement decisions. Underlying this tension (particularly as it relates to unilateral firm conduct) are differing beliefs about the relative harm caused by market failure, on the one hand, and government failure, on the other.
Despite long-standing presumptions that government failure tends to outweigh market failure, there is reason to believe that generative AI may at the very least shift these relative weights. Not only can generative AI reduce technical barriers to government regulation of markets, but it may also exacerbate firm conduct aimed at establishing and profiting from economic rents. Even though regulators have been clear that traditional forms of antitrust harm (such as price fixing) remain illegal regardless of the nuances of digital markets, firms with greater capabilities for understanding and profiting from the constraints of consumer rationality risk driving a wedge between consumer surplus and consumer welfare. Despite its name, the consumer welfare standard may not encapsulate such harms, as they do not lend themselves to efficiency- or surplus-based reasoning.
Potential harms in the upstream market for AI development relating to decreased competition under market tipping may fall even further outside the current scope of consumer welfare-based antitrust. Specifically, an analysis of competitive prices and outputs alone may fail to address how market power itself contributes to externalized harms, even if that market power was initially achieved through greater efficiency. The promise of economic rent is what incentivizes today’s artificial intelligence firms to take on the financial risk of competing for the future of machine learning technology. While that promise is necessary to fuel innovation, the potential durability of those rents can discourage future innovation. Even though these types of harm are acknowledged in the economics literature, a core question is whether antitrust is the proper venue to address them. Those who want to expand the umbrella of antitrust may argue that government officials are necessarily “better guardians of the public interest than self-serving economic units,” even while acknowledging that these agencies are imperfect. The counterargument, in favor of the simplicity of the consumer welfare standard, seeks to limit the discretion of antitrust enforcers, trading the risk that agencies are overly simplistic in scope against the risk that they become distracted by discretion.
To reconcile both approaches, the aim of regulation should not be to prevent firms from engaging in short-term rent-seeking altogether, but rather to ensure that these rents do not confer lasting market power. Just as antitrust authorities seek to understand the role of market power in today’s technology markets, valuable lessons can be learned regarding competition for tomorrow’s markets for AI technology. If, indeed, the market for generative AI ‘tips’ toward a single firm and a policy of strict market regulation is enacted, that regulation itself can invite rent-seeking behavior. Hence, rather than pursue strict price regulation, a more effective approach may be to strategically target bottlenecks (such as exclusivity deals) that disproportionately inhibit competition. To avoid losing the potential efficiencies of scale and incentives for innovation in the market for generative AI, regulators may seek to prevent such bottlenecks in the three key market inputs: talent, data, and computational resources.
Although the existing literature lays out a landscape of potential harms (both in the upstream market for AI development and in downstream markets with AI deployment), future research is necessary to evaluate the existence and extent of such harm. Research on price-setting strategies in online marketplaces may help explain the proliferation of dynamic pricing algorithms and identify patterns of potential ‘algorithmic’ price fixing and price discrimination. With regard to behavioral discrimination, although there are interesting value-related questions concerning the relationship between surplus and welfare, more promising areas of research may seek to understand the impact of generative AI-based marketing on consumer purchasing decisions and the relationship between quality improvements and customer acquisition. As for the impact of ‘market tipping,’ separate research may be warranted to understand rates of innovation by incumbent versus entrant technology firms and whether highly concentrated markets still experience competitive pressure through ‘competition for the market.’