
ABA Antitrust Spring Meeting Highlights a Dynamic Global Landscape for AI Regulation

Qiheng Chen and Yujie Qian

Held from April 10 to 12, 2024, the American Bar Association Antitrust Law Section’s 72nd Spring Meeting drew over 4,100 professionals in competition, consumer protection, and data privacy from around the world. The Spring Meeting offered unparalleled insights into both domestic and international perspectives on a wide array of subjects. Notably, the evolving regulation of artificial intelligence (AI) emerged as a recurring theme across panel discussions, reflecting the multifaceted considerations in this rapidly advancing field. While it remains to be seen how antitrust cases involving AI will play out across the globe, the distinct regulatory approaches and priorities observed in major antitrust jurisdictions suggest that developers and adopters of AI will likely face a diverse range of compliance challenges in the future.

Potential benefits and harms of AI on competition

In practical terms, AI- and machine learning (ML)-powered tools are poised to bring significant efficiency gains to both antitrust enforcement and compliance. Automation and more sophisticated analytical tools will bolster companies’ internal monitoring capacity, and there may be growing expectations that companies leverage these tools for proactive detection and self-reporting of potential antitrust violations. As enforcement agencies gradually embrace AI-assisted data-processing technologies, they may adjust their expectations and articulate higher standards for the completeness and promptness of discovery, disclosure, and compliance.

The Chair’s Showcase provided an extensive overview of the potentially transformative impact of AI and identified the lack of competition as one of the core threats to a desirable future of AI applications. The discourse on AI regulation overlaps and resonates with many key issues in the debate about tech regulation, including innovation, entry, and access.

As a disruptive technology, AI models and their commercial applications have the potential to introduce competition to previously concentrated markets. By improving efficiency in the production process and facilitating the delivery of personalized products to consumers, AI creates go-to-market opportunities and lowers the barriers to entry for smaller new players.

The prospect of vertically integrated AI platforms and the strong presence of established technology companies in the development of next-generation AI tools imply that the evolution of the AI sector may largely hinge on the current competition conditions within the broader digital sector. The strong complementarity between AI and the existing layers in the tech stack introduces risks of exclusionary conduct by incumbent companies. Such exclusionary conduct may involve leveraging existing platforms to favor affiliated products and services over competitors, as well as foreclosing nascent competitors from access to key inputs such as data, cloud computing, or specialized hardware.

While the adverse effects of under-regulation are readily apparent, the potential risks of creating regulatory entry barriers are equally significant. Regulation for legitimate public policy goals other than antitrust could create compliance costs and constrain data flows, thus inadvertently disadvantaging smaller players and raising barriers to entry. These trade-offs are discernible in the diverse approaches adopted by jurisdictions globally.

Different regulatory approaches to AI across jurisdictions

Antitrust agencies broadly agree on AI’s potential benefits and harms to competition and recognize the importance of aligning AI systems with human-centered principles such as fairness, transparency, and accountability. Furthermore, there is a broad consensus that any regulation must strike a fine balance between unleashing the potential of AI and mitigating its adverse effects.

On both sides of the Atlantic, regulators are taking specific interest in cloud service providers’ and tech platforms’ investments in and partnerships with AI companies. Both the US Federal Trade Commission (FTC) and the UK Competition and Markets Authority (CMA) have launched preliminary inquiries into the interconnected partnerships between leading AI startups and their established tech partners, including Alphabet, Amazon, Anthropic, Microsoft, Mistral AI, and OpenAI.

Divergence across jurisdictions becomes apparent in policy choices as well as in the timing and pace of regulatory actions. One fundamental question is whether the governance of AI merits standalone legislation. The European Union (EU) passed its AI Act in March 2024, and most of the law’s provisions will become applicable within two years. In 2023, China’s State Council listed the drafting of an AI Law on its legislative plan, and a scholars’ draft that could feed directly into legislative work bears features similar to the EU AI Act, such as a risk-based classification of AI systems. In comparison, policymakers in the United States remain divided on whether a new law on AI is warranted.

Generative AI models are trained on vast swaths of data that may involve the unauthorized use of copyrighted materials scraped from the internet. As a result, copyright concerns have emerged as one of the most active areas of AI-related law, with a key challenge being how to measure the contribution of individual pieces of training data to overall model performance. A wave of lawsuits has arisen as creators claim copyright infringement by AI developers and seek damages. The panel on “AI, Antitrust, and Media Developments” discussed some of the early efforts to address these issues. In the UK, Getty Images’ copyright infringement claims against Stability AI were allowed to proceed to trial, and the court will determine whether pre-trained models constitute an “article” under the UK’s Copyright, Designs and Patents Act, a concept traditionally reserved for tangible items. Other jurisdictions, including Japan, Singapore, and South Korea, have adopted “fair use” copyright rules “with the firm intention of removing uncertainties for their tech industries and positioning themselves in the AI race, unencumbered.” Japan, in particular, provides a flexible copyright exception for “non-enjoyment” purposes.

The panel discussion on “Global Big Tech Enforcement and Impact” highlighted the natural overlap between big tech antitrust enforcement and AI regulation. In addition to the EU AI Act, the European Parliament has called on the European Commission to consider whether the Digital Markets Act (DMA) should also cover cloud and generative AI service providers. Some other jurisdictions have leaned toward a more permissive, principle-based approach, without imposing DMA-style ex-ante regulations on matters beyond AI safety. At the Chair’s Showcase, the CMA’s Chief Executive, Sarah Cardell, provided an update on the agency’s work to guide the foundation model sector toward positive outcomes in competition and consumer protection. The CMA has proposed a set of principles for the development and deployment of foundation models, covering access, diversity, choice, fair dealing, transparency, and accountability.

Diverse AI-related compliance challenges around the world

Amid a trend of firms racing to embrace AI applications, in-house counsel reflected on the compliance issues facing AI adopters. Risks could manifest in five primary ways: AI applications proving less competent than expected or too competent for organizations to handle; AI models being trained incorrectly; AI models being used in unlawful ways; AI systems generating responses that cannot be explained; and generative AI output violating ethical standards.

Antitrust risks tend to arise when the technology becomes too competent, leading to violations such as tacit collusion occurring without human awareness. The impact of algorithmic pricing could be further magnified by the wide adoption of generative AI models. In recent Statements of Interest, US agencies have taken the position that competitors acting in concert through a third-party pricing algorithm may form per se unlawful hub-and-spoke agreements, even absent direct communication with one another. In China, the misuse of algorithms in personalized pricing has been identified as a potential abuse of dominance under the revised Anti-Monopoly Law and was cited as a relevant factor in finding dominance in the two landmark investigations of Alibaba and Meituan in 2021.

Businesses may also be bound by guiding principles beyond competition concerns. For example, in 2023 the G7 introduced 11 voluntary guiding principles and a corresponding code of conduct aimed at fostering international cooperation in the governance of safe, secure, and trustworthy AI. Like the voluntary pledges made by AI firms, these principles and the code of conduct are essentially industry self-regulation in the absence of binding rules. Self-regulation stands in contrast to the approach of jurisdictions that have either sprinted ahead with regulations or are preparing to do so, such as the EU, China, and India. As AI developers and adopters may face a fragmented AI regulatory landscape in the foreseeable future, they would be well advised to pre-emptively assess antitrust risks in their AI systems, self-evaluate AI alignment, and tailor compliance strategies to different jurisdictions.

The authors are economists at Compass Lexecon. All views expressed are solely those of the authors.