April 05, 2021

The Computational Antitrust Project

Maura Carey

Technology has led to an explosion in the volume of data that antitrust regulators need to process in order to enforce antitrust laws. Legal practitioners in other fields are already seeing how computational techniques like information visualization, natural language processing, deep learning simulations, and machine learning can enhance their work.

The Computational Antitrust Project at the Stanford CodeX Center seeks to develop ways to help antitrust enforcers, policymakers, and firms subject to antitrust laws harness the power of legal informatics. The Project brings together over 50 agencies from around the world and 35 leading academics in economics, law, and computer science to foster the automation of antitrust procedures and improve antitrust analysis.

Legal informatics is not intended to replace human value judgments and decision-making processes as the primary mode of economic regulation. Rather, computational tools can empower regulators and practitioners to conduct the kind of analysis necessary to apply existing antitrust frameworks to the 21st-century economy. The Stanford Computational Antitrust Project is bringing together technologists, legal scholars, and economists to think creatively about how to equip agencies with the tools they need to bring global antitrust enforcement into the digital world. Legal informatics will prove especially helpful in three areas of antitrust law: anticompetitive practices, merger control, and the design and monitoring of antitrust policies.

First, computational antitrust can help antitrust agencies shift to a proactive model of policing anti-competitive practices. Antitrust agencies today often rely on reactive methods of identifying anti-competitive practices like leniency applications. Blockchain-based smart contracts and algorithmic pricing mechanisms have made it easier for companies to implement and sustain collusive agreements—making reactive methods far less effective. Natural language processing technology can boost antitrust agencies’ ability to detect patterns that suggest illegal intent.
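To give a loose sense of the idea (this is an illustrative sketch, not the Project's tooling), the snippet below flags documents containing phrases that might warrant closer review in a cartel investigation. The phrase list and scoring are hypothetical, and a production system would rely on trained language models rather than keyword matching.

```python
import re

# Hypothetical watch-list of phrases that could prompt closer review.
# A real NLP pipeline would use trained models, not regex matching.
SUSPECT_PHRASES = [
    r"keep (our|the) prices aligned",
    r"stay out of (each other's|their) territory",
    r"agree(d)? not to undercut",
    r"coordinate (the )?bids?",
]

def flag_document(text: str) -> list[str]:
    """Return the suspect phrases found in a document, if any."""
    lowered = text.lower()
    return [p for p in SUSPECT_PHRASES if re.search(p, lowered)]

if __name__ == "__main__":
    sample = "Let's agree not to undercut each other next quarter."
    print(flag_document(sample))  # ["agree(d)? not to undercut"]
```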

Second, computational antitrust can make it easier for agencies to assess the legality of a merger when confronted with millions of documents to review and a limited time in which to review them. Agencies can also use computational tools to create dynamic models to better predict the competitive effects of proposed mergers. Computational tools can also help address information asymmetries in the merger review process by allowing agencies and companies to share data in real time. Blockchain technology could facilitate this data-sharing by creating immutable databases that both enforcers and firms can trust.
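To make the modeling point concrete, here is a minimal sketch of one standard screening calculation, the Herfindahl-Hirschman Index (HHI), applied to invented market shares before and after a hypothetical merger. The thresholds in the comments follow the widely published 2010 U.S. Horizontal Merger Guidelines; the shares themselves are assumptions for illustration only.

```python
def hhi(shares_pct: list[float]) -> float:
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical market shares (percent) for five firms.
pre_merger = [30.0, 25.0, 20.0, 15.0, 10.0]

# Suppose the firms holding 20% and 15% propose to merge.
post_merger = [30.0, 25.0, 35.0, 10.0]

pre, post = hhi(pre_merger), hhi(post_merger)
print(f"Pre-merger HHI:  {pre:.0f}")        # 2250
print(f"Post-merger HHI: {post:.0f}")       # 2850
print(f"Increase:        {post - pre:.0f}") # 600

# Under the 2010 U.S. Horizontal Merger Guidelines, a post-merger HHI above
# 2,500 combined with an increase of more than 200 points is presumed likely
# to enhance market power, inviting closer scrutiny.
```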

Finally, computational techniques can help agencies learn from past decisions and design new approaches based on those lessons. Computational models can help agencies analyze the impact of different enforcement mechanisms, understand dynamics in specific industries, and estimate consumer savings from different policy approaches. Agencies can also use these tools to systematically audit the effectiveness of their own internal processes.
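As one stylized example of policy comparison (a hypothetical sketch, not an agency model), the snippet below applies the textbook deterrence logic that collusion is discouraged when the expected penalty exceeds the expected gain, and compares two invented enforcement policies on that basis. All parameters are assumptions for illustration.

```python
def expected_penalty(detection_prob: float, fine: float) -> float:
    """Expected cost of colluding under a given enforcement policy."""
    return detection_prob * fine

# Hypothetical parameters: the gain a cartel expects from colluding (in $M)
# and two stylized enforcement policies (detection probability, fine in $M).
collusion_gain = 50.0
policies = {
    "moderate fine, high detection": (0.60, 100.0),
    "high fine, low detection": (0.10, 300.0),
}

for name, (p, fine) in policies.items():
    penalty = expected_penalty(p, fine)
    deterred = penalty > collusion_gain
    print(f"{name}: expected penalty ${penalty:.0f}M -> deterred: {deterred}")
```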

Robust antitrust enforcement is essential to promoting resilient, competitive markets, and to making sure that all market participants can compete on a level playing field. Technology has revolutionized the way firms do business throughout the world. The Stanford Computational Antitrust Project is dedicated to ensuring that antitrust enforcement can keep up with the rapid pace of change.

Maura Carey is the Academic Outreach Chair of the Stanford Computational Antitrust Project and a member of the Stanford Law School Class of 2023.