Regulatory Approaches and Core Objectives
In her opening remarks, Fiona Schaeffer of Milbank LLP delineated three predominant categories of regulatory approaches that have emerged across jurisdictions:
- Market-driven: The market-driven approach taken by the United States emphasizes reliance on existing legislation and frameworks. Courts are tasked with interpreting how existing antitrust, consumer protection, and intellectual property laws apply to AI, rather than introducing new statutory regimes.
- State-driven: In contrast, China has taken a more centralized, top-down approach, characterized by powerful regulatory instruments applicable to privacy, data governance, and AI deployment.
- Rights-driven: The European Union’s approach derives regulation bottom-up from the perspectives of consumer rights and human rights. As exemplified by the EU AI Act, the regulation tends to be prescriptive and imposes a gradual schedule of obligations based on the level of potential risk.
Despite the differences in implementation approaches, a loose norm of international collaboration has also emerged, typified by non-binding guidelines. While lacking enforceability, these efforts are shaping consensus and fostering regulatory convergence in some areas. Examples include the G7 AI agreements, the OECD AI principles, and other multilateral dialogues among competition agencies.
In broad terms, the global regulatory efforts share four common objectives:
- Safety and Accountability: Preventing AI from facilitating harm, whether through misinformation, fraud, or physical threats. This underscores the need for AI systems to be both safe and subject to clear accountability mechanisms.
- Human Rights Protections: Ensuring AI does not entrench bias, infringe privacy, or curtail freedom of expression. This includes scrutiny of algorithmic decision-making in areas like healthcare and employment, where automated systems have already demonstrated the capacity for algorithmic biases.
- Transparency: Regulators and stakeholders are increasingly demanding visibility into how AI models function—including inputs, training data, and decision-making logic. Explainability, auditability, and documentation could be key issues for potential future compliance frameworks.
- Ethical Innovation: Examples like Japan’s human-centric AI principles highlight a desire to align AI deployment with societal values. Policymakers are calling for responsible innovation that supports both economic growth and democratic norms.
Schaeffer also drew attention to the elephant in the room. As predicted by technologists such as Bill Gates, the constant advancements in AI will inevitably raise fundamental questions about the role of work, the role of the minimum wage, and the division of economic surplus from AI-driven productivity. She argued that the current debate about AI should not shy away from these societal issues.
U.S.: Enabling Innovation, Guarding Against Entrenchment
Haidee Schwarz of OpenAI emphasized the nuanced stance taken by the Department of Justice (DOJ) and the Federal Trade Commission (FTC). In a joint statement with the CMA and the EC in July 2024, the U.S. antitrust agencies identified fair dealing, interoperability, and choice as key factors that would generally enable competitiveness and foster innovation. In January 2025, the FTC released a staff report on AI partnerships and investments highlighting three areas to watch for potential competition implications: access to key inputs, contractual and technical switching costs, and access to sensitive information.
The new FTC and DOJ leadership have also made clear statements that they will aggressively enforce against big tech companies to ensure competitiveness and innovation, including in relation to AI. The emphasis on robust antitrust enforcement is underscored by a particularly active month of April in Washington, D.C., where multiple high-profile tech-related trials are taking center stage. These include the DOJ’s cases against Google regarding both Search and Ad Tech, the DOJ’s case against Apple regarding smartphone markets, the FTC’s case against Amazon regarding its online marketplace services, and the FTC’s case against Meta regarding its social networking services.
On the other hand, there is also a strong focus on avoiding overregulation that may deter innovation or entrench market dominance. In his statement on the FTC AI staff report, FTC Chair Ferguson noted the duality of AI as both a productivity engine and a potential entrenchment tool. Striking the right balance, avoiding stifling regulation while remaining vigilant against monopolistic practices, will likely remain the main theme of U.S. AI policy for the foreseeable future.
EU and UK: From Alarm to Nuance
Tone Oeyen of Freshfields observed a tonal shift in European regulators. Just like the AI industry itself, the antitrust and regulatory approaches towards the AI value chain in Europe are rapidly evolving. Initially, bodies like the UK’s Competition and Markets Authority (CMA) and European Commission (EC) both raised alarms about concentration risks in the AI value chain, particularly at the infrastructure level, with concerns centered on: concentration of computing power and cloud services, barriers to entry for smaller AI developers, and exclusive partnerships that could limit competition.
However, recent remarks by senior Commission officials suggest a growing recognition of the dynamic nature of AI markets. The rapid rise of new entrants like DeepSeek has challenged assumptions about the inevitability of dominance by incumbent players. The CMA has updated its AI foundational model report to reflect changes in the ecosystem, including increased open-source activity and wider diffusion of AI capabilities. Meanwhile, the EU has launched consultations and policy briefs exploring vertical integration risks, algorithmic gatekeeping, and market access conditions.
The evolving view is expected to shape existing merger control and conduct investigations. For example, both the CMA and the EC have been stretching the traditional merger control toolkit to fit the AI playbook, reflecting the fact that transactions in the space are being conducted through new forms of partnership. There has also been a rise in conduct investigations, from the French competition authority’s investigation into Nvidia’s unilateral conduct in 2023 to the EC’s investigation into the alleged tying of Azure cloud services to Windows Server software for corporate customers.
Brazil: A Regional Leader in AI Regulation
Victor Oliveira Fernandes of the Administrative Council for Economic Defense of Brazil (CADE) described Brazil’s proactive efforts to lead in Latin America. The country’s AI bill, heavily influenced by the EU AI Act, also adopts a risk-based framework. Unique to Brazil is the assignment of responsibilities to a national AI regulator operating within a network of government bodies, rather than to a single regulator.
CADE’s inputs in the Brazilian AI Bill included:
- Mandating that regulatory bodies share information with CADE when competition issues arise.
- Securing investigative access to high-risk AI training and test data.
- Calling for differentiated compliance obligations for small and medium-sized enterprises to avoid stifling innovation.
Brazil has also launched investigations into major AI partnerships, including Microsoft/OpenAI and Amazon/Anthropic, to determine whether they should have been subject to merger review. These cases reflect CADE's broad interpretation of joint ventures and data-sharing arrangements under its competition law. Another active investigation by CADE concerns Meta’s alleged leveraging of its dominant position in social media to gain an unfair advantage in the development of AI models. While some of the AI-related conduct is novel, it is still up for debate whether novel theories of harm are needed to address these cases.
Limits of Ex-Ante Regulations
When asked about ex-ante regulation, Schaeffer offered a skeptical view of its viability, arguing that traditional legislative cycles will have difficulty keeping pace with exponential AI advancement. Historically, the treatment of privacy, especially the EU’s approach under the GDPR, serves as a useful analogy for assessing potential ex-ante AI regulation. The jury is still out on whether the economic costs and the impact on innovation and entry can be justified by the protection demanded by the general population.
Instead, she advocated for global consensus on narrow, high-impact issues such as:
- Deepfakes, which threaten democratic discourse and consumer safety.
- Algorithmic bias in critical areas that could impact fundamental human rights, such as healthcare.
While some layers of the AI stack remain competitive, others, like cloud infrastructure and behavioral data aggregation, exhibit oligopolistic traits that may warrant closer scrutiny. Similarly, enforcers may need to be innovative in their approach to measuring the impact of AI-related initiatives and collaborations that do not trigger traditional merger-control thresholds. For example, content aggregators and behavioral data collectors may have an enduring competitive advantage that can impact both technological evolution and downstream market structure.
Algorithmic Collusion
The panelists expressed a shared concern about algorithmic collusion, particularly in the context of third-party pricing software. Schaeffer observed that when firms delegate pricing decisions to shared AI models, they risk engaging in conduct that resembles hub-and-spoke cartels. Conversely, if such software relies solely on public data and its recommendations are not consistently followed, the users are more likely to remain on safer legal ground.
Another emerging risk is not just intentional collusion, but the inadvertent alignment of pricing strategies by independent algorithms that learn from repeated interactions with one another. Under current legal frameworks, compliance exposure may depend heavily on the level of human oversight and the nature of the data inputs involved. As theoretical and empirical research into autonomous algorithmic collusion evolves, regulators are likely to adopt more nuanced positions. One thing is certain: the widespread use of pricing algorithms and the legal challenges they raise will remain a recurring theme in the broader AI policy debate.
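The inadvertent-alignment dynamic can be made concrete with a deliberately minimal sketch. Everything below is hypothetical (the pricing rule, the price levels, the market structure); the point is only to show how two sellers running the same independently chosen, plausible-sounding rule can drift to a supra-competitive price without ever communicating:

```python
# Toy illustration (hypothetical rule, not any real vendor's software):
# two sellers each adopt the same unilateral heuristic --
# "match any undercut; otherwise test a slightly higher price" --
# and repeated interaction carries both to the joint-profit price.

COMPETITIVE_PRICE = 10   # assumed price under healthy competition
MONOPOLY_PRICE = 20      # assumed joint-profit-maximizing price

def next_price(own: int, rival: int) -> int:
    """One seller's independent pricing rule."""
    if rival < own:
        return rival                       # defend share: match the undercut
    return min(own + 1, MONOPOLY_PRICE)    # rival kept pace: creep upward

def simulate(rounds: int = 50) -> tuple[int, int]:
    """Run both sellers from the competitive price; return final prices."""
    a = b = COMPETITIVE_PRICE
    for _ in range(rounds):
        # simultaneous moves: each reacts to the rival's previous price
        a, b = next_price(a, rival=b), next_price(b, rival=a)
    return a, b

if __name__ == "__main__":
    print(simulate())  # both sellers end up at the monopoly price
```

Neither algorithm sees the other's code or exchanges any message; alignment emerges purely from repeated interaction, which is exactly why the human-oversight and data-input questions above matter for compliance exposure.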
As enforcement ramps up, practitioners in the AI space would be well advised to consider the following:
- Review third-party AI tools carefully, especially pricing software. Understand how they are marketed and whether they require or encourage data sharing with competitors.
- Document independent decision-making when using AI inputs. Ensure that human judgment plays a central role and that AI outputs are not blindly accepted.
- Stay ahead of regulatory trends, particularly in jurisdictions where AI-specific guidance is rapidly evolving.
- Engage with regulators proactively. Smaller firms in particular may need to clarify their capabilities and limitations in response to inquiries.
Firms should also anticipate greater use of AI tools by regulators themselves. From predictive models that flag unusual pricing patterns to analytics dashboards that map partnership structures, enforcement agencies are becoming more sophisticated. For example, CADE is investing in AI tools to assist investigations. Its use of machine learning tools to detect collusive patterns in public procurement is now being adapted for AI-related probes.
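As a rough illustration of what such screening might look like (a hypothetical sketch, not CADE’s or any agency’s actual tooling), a simple screen flags pairs of sellers whose period-over-period price changes move in near-lockstep, one of the classic signals used in procurement collusion screens:

```python
# Hypothetical pricing screen: flag seller pairs whose price CHANGES
# (not levels) are near-perfectly correlated across periods.

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_parallel_pricing(series_a: list[float],
                          series_b: list[float],
                          threshold: float = 0.95) -> bool:
    """Flag when period-over-period changes are suspiciously parallel."""
    da = [b - a for a, b in zip(series_a, series_a[1:])]
    db = [b - a for a, b in zip(series_b, series_b[1:])]
    return pearson(da, db) > threshold
```

A flag from a screen like this is only a starting point for investigation, not proof of collusion: parallel movements can also reflect common cost shocks, which is why agencies pair such analytics with traditional evidence-gathering.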
Conclusion
Global AI regulation is coalescing around shared principles, but implementation approaches remain fragmented. For antitrust practitioners, this moment offers both challenge and opportunity. The profound implications of AI regulation are aptly summarized as “it is like teaching your toddler how to walk.” The AI toddler may soon be sprinting. The need for agility and informed advocacy has never been greater, and regulators and antitrust practitioners worldwide will be racing to address the persistent timing disconnect between industry innovation and the regulatory process.