
Antitrust Enforcement in Artificial Intelligence: How Would Economists Regulate AI?

James-Francois Fiocchi


In October 2024, the ABA Antitrust Section’s Economics Committee and Media and Technology Committee hosted a webinar on antitrust enforcement in Artificial Intelligence (AI), featuring perspectives from legal and economic experts. Sergei Zaslavsky of O’Melveny & Myers moderated the panel and was joined by Keith N. Hylton of Boston University School of Law; Catherine Tucker of the MIT Sloan School of Management; Aurélien Portuese of Compass Lexecon; and Erik Hovenkamp of Cornell Law School. The discussion focused on core legal and economic concepts such as network effects, barriers to entry, dynamic competition, and technological inflection points, bridging the gap between antitrust law and the rapidly evolving field of AI.

The webinar began with an economic discussion of whether there are lessons to be learned from the lack of early regulation of the internet and whether there are parallels with the evolving field of AI. Sergei Zaslavsky noted that “enforcers and politicians felt that regulatory efforts were ‘hands off’ when the internet began” and that this lack of “intervention” may have contributed to the growth of the internet and technology sectors. The panelists generally agreed that early regulatory action may end up stifling innovation and that striking the balance between too little and too much regulation, along with developing the right market definitions, should be a central focus of AI policy design. However, the panelists questioned the extent to which the history of the internet can serve as a useful guide for regulating AI.

Catherine Tucker pointed out that the absence of early, targeted regulation when the internet began may serve as a useful lesson: past regulatory inaction can inform forward-thinking policy that avoids overcompensating or “over-regulating irrelevant aspects.” Dr. Tucker’s point was that forward-looking solutions, particularly those centered on designing effective policies in novel markets like GenAI, often require an analytical, economic, and strategic foundation. As she implied, technological change naturally promotes competition, and designing sound policy requires anticipating how a new market-level event affects competition as well as supply-side and demand-side expectations and behavior. Simply put, Dr. Tucker emphasized the importance of understanding how a change introduced into an existing market (e.g., AI in the tech market) affects the behavior of consumers and sellers in that market and in other markets.

Keith Hylton noted that ex-ante regulation would not necessarily be good for AI markets, arguing that we do not yet know enough about how these markets will develop or what economic issues relevant to antitrust law will arise. He further noted that much of U.S. antitrust law has historically been made through private litigation and suggested that we should stick with that approach here as well.

Aurélien Portuese cautioned that the core characteristics of, and differences between, the internet and AI call for a more fundamentally tailored antitrust policy, given their different purposes and the differing structures of their competitive business models (open source versus closed source). Dr. Portuese traced the evolutionary differences between open-source and closed-source competition models, noting that the internet was originally developed as an open-source means of communication but, with the emergence of proprietary apps, websites, and platforms, became very much a closed technology in the 1990s. AI, by contrast, was developed by industry and is largely closed source. Dr. Portuese suggested that antitrust concerns might arise not on the software side but on the infrastructure side, pointing to the difference between software-based and hardware-based forms of competition.

Erik Hovenkamp likewise cautioned that the internet and AI are conceptually too broad to be useful antitrust categories, because “if you say that a product is an internet-based product, or that it uses AI in some capacity, that does not necessarily tell you what type of antitrust policy we might want to maintain in that market.” He also agreed that there may have been examples of antitrust underenforcement in certain areas of the tech sector.

The panel also examined AI-specific barriers to entry, network effects (or the lack thereof), data feedback loops, labor shortages, and infrastructure costs. Dr. Hovenkamp argued that just as scale economies and network effects can constitute entry barriers when they are strong, data feedback loops can also act as entry barriers and should therefore be part of the analysis. Referencing the U.S. v. Google matter, Dr. Hovenkamp explained how some AI products, such as search engines, benefit from self-reinforcing feedback loops in which the search algorithm improves as more people use it.

Conversely, Dr. Tucker objected to the term “AI network effects,” preferring “economies of scale,” since network effects in the traditional sense require that one user benefit from another user. She added that network effects are not some indirect mechanism whereby costs suddenly shift simply because the customer base grows. Instead, she urged focusing on switching costs, which can lock users into specific platforms and potentially stifle competition.

Dr. Portuese also observed that while neural networks and language models rely on infrastructure, chips, cloud services, and data from other companies, they require advanced technological expertise but not necessarily large corporate scale.

Regarding labor shortages, Dr. Hylton acknowledged the temporary scarcity of engineering talent but expressed confidence in the market’s ability to adapt, arguing that “whether engineering talent is a barrier to entry depends on the market demand for what that talent produces” and that scarce talent by itself does not guarantee a barrier to entry or an advantage over rivals.

The panel also highlighted AI’s transformative potential in key industries such as health care and education. Dr. Portuese stressed the importance of fostering AI in health care: “[We] always think about AI in the tech sector, but we also have to think about AI in the healthcare sector when we use AI to detect cancer.” Dr. Tucker was likewise optimistic about AI’s role in reducing educational disparities but noted that structural problems in the home still need to be addressed and might hamper the benefits of scaling AI in education.

Finally, the panel addressed global competitiveness and the speculative nature of artificial general intelligence (AGI). Dr. Hylton and Dr. Portuese both emphasized the importance of maintaining U.S. leadership in AI, particularly in the face of competition from countries like China. Regarding AGI, the panelists agreed that concerns about superhuman intelligence are premature. Dr. Portuese noted, “I mean, we all know that emotional intelligence is extremely hard to get for AI. So, I think we’re pretty safe as humans.”

Overall, the panel urged caution in regulating AI, advocating thoughtful, flexible rules that harness AI’s benefits and a balanced, soft-touch approach that fosters innovation while addressing specific antitrust concerns to ensure fair competition.
