
Webinar Recap: Machine Learning and Antitrust

Masha Melkonyan

Summary

  • Despite growing concern in the literature, there have been no actual cases challenging algorithmic collusion, suggesting either that it is not occurring or that it is occurring but has not been detected.
  • There are three “candidates” that could face liability for algorithmic collusion: (1) the party using the AI, (2) the party designing the AI, or (3) the AI.
  • Currently, there are many active studies developing ways to ensure that AI does not collude, a field of research called Compliance by Design.
  • Blockchains’ distinguishing feature of immutability (meaning that once information is recorded, it cannot be altered) can also help companies collude.
  • An example of how large-scale this issue might become is evident in Leibowitz v. iFinex Inc., where damages are estimated at approximately $1.5 trillion.
     

Introduction

On March 4, 2022, the ABA Section of Antitrust Law hosted a webinar titled Machine Learning and Antitrust. The event was moderated by Dr. Jéssica Dutra, Associate Director at Secretariat Economists, and presented by Dr. Ai Deng, Principal at Charles River Associates’ Antitrust and Competition Practice and a lecturer at Johns Hopkins University, and Dr. Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam and a Faculty Affiliate at Stanford University CodeX Center.

Dr. Deng provided an introduction to machine learning and related concepts. Dr. Schrepel then introduced algorithmic collusion and blockchains in antitrust, and the event concluded with a panel discussion of these and related topics.

What Is Machine Learning?

Dr. Deng first explained the concept of machine learning by comparing the process with human learning. Machines mimic our learning methods through three main paths: (i) examples, known in the literature as “supervised learning”; (ii) differences, which corresponds to what the literature calls “unsupervised learning”; and (iii) trial and error, known in the literature as “reinforcement learning.”

For instance, pictures can serve as examples in the human learning process: once we construct a mental image of, say, a horse, when we are shown an image of a purple horse, our brains can easily recognize the animal as a horse despite its unusual color. Machine learning through examples can be achieved with regression models (e.g., artificial neural networks (ANNs), deep learning, deep neural networks), where data acts as the “example,” i.e., as the training data from which the model learns before being applied to the testing data.
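
For readers who want to see the idea in code, the following is a minimal sketch of supervised learning in Python. The synthetic data, model choice, and library are illustrative and were not part of the webinar.

```python
# A minimal sketch of supervised learning: fit a regression model on
# "training" examples, then apply it to unseen "testing" data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
X = rng.uniform(0, 10, size=(200, 1))         # one input feature
y = 3.0 * X.ravel() + rng.normal(0, 1, 200)   # noisy linear target

# The training split plays the role of the "examples"; the testing
# split stands in for new data the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on unseen data:", model.score(X_test, y_test))
```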

Trial and error is another basic learning technique that humans use while growing up and throughout life. For example, when looking for a clear cell phone signal in an unfamiliar location, a person will move around until finding good service and then stay there; this would be classified as positive reinforcement. In machine learning, this technique translates into reinforcement learning, one of the most important foundations for artificial intelligence (AI) algorithms, in which learning is driven by feedback rather than human intervention.
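
The trial-and-error idea can likewise be sketched in a few lines. The “locations” and reward values below are invented for illustration; the point is only that the agent’s preference emerges from feedback rather than instruction.

```python
# A minimal sketch of trial-and-error (reinforcement) learning, loosely
# mirroring the cell-signal example: an agent tries "locations", receives
# a reward (signal strength), and gradually favors the spot that pays off.
import random

signal = {"kitchen": 0.2, "porch": 0.9, "basement": 0.05}  # true (hidden) quality
value = {spot: 0.0 for spot in signal}   # agent's running estimates
counts = {spot: 0 for spot in signal}
epsilon = 0.1                            # exploration rate

random.seed(0)
for step in range(1000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        spot = random.choice(list(signal))
    else:
        spot = max(value, key=value.get)
    reward = signal[spot] + random.gauss(0, 0.1)          # noisy feedback
    counts[spot] += 1
    value[spot] += (reward - value[spot]) / counts[spot]  # incremental mean

print("learned preference:", max(value, key=value.get))   # should settle on "porch"
```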

Algorithmic Collusion and Blockchains

Dr. Schrepel discussed several key antitrust topics related to machine learning. The first was algorithmic collusion, which has attracted growing interest and concern. The term refers to the process by which two machines “decide” to collude independently of humans. According to Google Scholar, 177 papers mentioned the subject in 2021. Yet there are no actual cases challenging algorithmic collusion anywhere in the world. Dr. Schrepel explained that this means either that there is, indeed, no issue with algorithmic collusion, or that the issue exists but has not been detected.
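
To make the notion concrete, the following is a minimal sketch of the kind of simulation used in the academic literature on algorithmic collusion: two Q-learning agents repeatedly setting prices. The demand function, price grid, and parameters are invented for illustration, and whether the agents end up at supracompetitive prices depends heavily on such choices.

```python
# Two Q-learning pricing agents in a repeated duopoly (illustrative only).
import random

PRICES = [1.0, 1.5, 2.0, 2.5]          # discrete price grid
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05    # learning rate, discount, exploration

def profit(p_own, p_rival):
    # Simple linear demand: a lower relative price wins more customers.
    demand = max(0.0, 2.0 - p_own + 0.5 * p_rival)
    return p_own * demand

# State = both firms' previous prices; each firm keeps its own Q-table.
Q = [{}, {}]

def act(i, state):
    if random.random() < EPS or state not in Q[i]:
        return random.choice(PRICES)
    return max(Q[i][state], key=Q[i][state].get)

random.seed(0)
state = (PRICES[0], PRICES[0])
for t in range(200_000):
    p0, p1 = act(0, state), act(1, state)
    nxt = (p0, p1)
    for i, (own, rival) in enumerate([(p0, p1), (p1, p0)]):
        q = Q[i].setdefault(state, {p: 0.0 for p in PRICES})
        best_next = max(Q[i].setdefault(nxt, {p: 0.0 for p in PRICES}).values())
        q[own] += ALPHA * (profit(own, rival) + GAMMA * best_next - q[own])
    state = nxt

print("prices after training:", state)  # inspect whether they sit above the competitive level
```

In published simulations of this kind, agents sometimes learn to sustain prices above the competitive level without any explicit instruction to collude, which is why the phenomenon worries researchers.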

One subject that is currently not discussed as widely as algorithmic collusion, possibly because of the lack of public enforcement against it, is the issue surrounding blockchains. According to Dr. Schrepel, blockchains’ distinguishing feature of immutability (meaning that once information is recorded on the chain, it cannot be altered) can help companies collude: if algorithms on blockchains are set up to implement collusion, they will certainly do so. This is worrisome because it achieves stable cooperation in collusion, which is by nature a non-cooperative game. An example of how large-scale this issue might become is evident in Leibowitz v. iFinex Inc. (2019), an antitrust case in which damages are estimated at approximately $1.5 trillion. The case concerns just one particular blockchain, Tether. That figure illustrates the extent and significance that blockchains can have in antitrust.
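
A short sketch can show why immutability has this force: in a blockchain, each block commits to its predecessor through a hash, so a past entry cannot be quietly rewritten. The block contents below are purely illustrative.

```python
# Minimal hash-chained ledger: altering any past entry breaks every
# hash that follows, so tampering is immediately detectable.
import hashlib

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain of three "blocks".
chain = []
prev = "0" * 64                       # genesis placeholder
for data in ["price=10", "price=10", "price=10"]:
    h = block_hash(prev, data)
    chain.append({"prev": prev, "data": data, "hash": h})
    prev = h

def verify(chain):
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(prev, blk["data"]) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

print(verify(chain))                  # True: the chain is consistent
chain[1]["data"] = "price=7"          # tamper with history...
print(verify(chain))                  # False: tampering is detectable
```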

Can AI Collude and Who Would Be Liable for the Collusion? Is Machine Learning a Black Box?

Dr. Deng explained that in recent years there has been growing experimental evidence indicating that AI can collude. The presenters agreed that there are three “candidates” that could face liability for the collusion: (1) the party using the AI, (2) the party designing the AI, or (3) the AI itself. Currently, many active studies are developing ways to ensure that AI does not collude, a field of research called Compliance by Design.

However, both presenters agreed that, in many ways, machine learning is still a black box. Dr. Deng mentioned Explainable Artificial Intelligence (XAI), a field of research aimed at opening that black box. He also noted that even though we do not yet understand what goes on inside the black box, we can observe its outputs. Dr. Schrepel agreed that it makes sense for agencies to concentrate first on this observable space and only then move on to the deeper layers. It is also useful to look at the counterfactual, which would be human intelligence as opposed to artificial intelligence; however, the counterfactual in the form of our brains is itself a rather sophisticated structure and yet another black box.
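
As an illustration of what XAI work can look like in practice, the sketch below uses one common technique, permutation importance, which probes a black-box model from the outside by observing how its accuracy changes when inputs are scrambled. The model and data are synthetic and were not discussed in the webinar.

```python
# Permutation importance: shuffle one input at a time and measure how
# much predictive accuracy drops, revealing which inputs the black-box
# model actually relies on.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 3))
y = 4 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)  # feature 2 is irrelevant

model = RandomForestRegressor(random_state=0).fit(X, y)  # the "black box"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance ~ {imp:.3f}")         # feature 0 dominates
```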

Finally, the panelists discussed how people may be able to control just how much is hidden in the black box by placing limitations on the AI. For example, assuming an AI’s “mission” is to choose a price, the following restrictions can be applied:

  • The AI can output a price recommendation, but a human is in charge of accepting or rejecting it (“the boxing method”).
  • We can limit the cognitive capabilities of the AI by having a human tell it exactly how to decide on a price, thus restricting it from doing anything it was not set up to perform (“stunting”).
  • We can apply specific rules to the AI, e.g., never choose the same price as the competitor, regardless of how high or low a utility this may yield (“the motivation selection method”).

One more approach, which is not practiced yet but may be implemented in the future, is called “indirect normativity.” Under this approach, we would train the AI on, for example, a series of decisions made by the Federal Trade Commission, teaching it never to violate antitrust laws.
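
To make the first and third of these controls concrete, here is a minimal sketch; the function names and the specific rule are hypothetical stand-ins, not an actual compliance tool.

```python
# "Boxing" plus a "motivation selection" rule around a hypothetical
# opaque pricing model.
def ai_recommend_price(cost, competitor_price):
    # Stand-in for an opaque pricing model.
    return round(max(cost * 1.2, competitor_price - 0.01), 2)

def motivation_filter(price, competitor_price):
    # Rule from the talk: never choose the same price as the competitor.
    if price == competitor_price:
        raise ValueError("rule violated: price matches competitor")
    return price

def boxed_pricing(cost, competitor_price, human_approves):
    proposal = motivation_filter(ai_recommend_price(cost, competitor_price),
                                 competitor_price)
    # "Boxing": the AI only proposes; a human-controlled decision is final.
    return proposal if human_approves(proposal) else None

price = boxed_pricing(8.00, 10.00, human_approves=lambda p: p < 12.00)
print("final price:", price)
```

The “stunting” approach would correspond to replacing the opaque recommender above with a fixed, human-specified pricing formula.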

Takeaways

Machine learning is becoming increasingly important in a variety of fields, including antitrust, and researchers are devoting substantial effort to studying its many aspects. As a result, machine learning is changing rapidly, and it is fascinating to observe it as it develops.

This article was prepared by the Antitrust Law Section's Economics Committee.
