
Artificial Intelligence, Big Tech and Regulation: Connecting the Dots

Andreia Saad


Summary

The development and deployment of artificial intelligence (AI) have kept pace with the growing digitalization of Brazil. Along with the recognition of AI's importance came the inescapable conclusion that it needs to be regulated, and several bills are currently under discussion in the Brazilian Congress. The European experience has served as a reference for a risk-based regulatory approach, under which different obligations are imposed on providers and users of artificial intelligence according to the level of risk raised by AI's different purposes. This article aims to demonstrate the importance of broadening the debate to consider the risks raised not only by the uses of artificial intelligence but also by the characteristics of the agents who provide and use it, paying special attention to large platforms. Such an approach is consistent with the European experience and necessary for Brazil to confront assertively the negative effects that may arise from the application of artificial intelligence by Big Tech. Preventing and remedying these effects will be especially welcome in Brazil's current context: stimulating competition in already concentrated digital markets, safeguarding vulnerable populations against exploitation and manipulation, and protecting a fragile democracy whose strength depends more than ever on citizens who are well informed and fully capable of debate.

1. Introduction

On June 11, 2022, Blake Lemoine, an engineer at Google, became world famous when he claimed, in an interview with the Washington Post, that LaMDA, a system created by the company to develop chatbots, had acquired self-awareness. To support his claim, Lemoine released excerpts from dialogues that he and a colleague had held with the system, highlighting responses that supposedly showed LaMDA had a clear perception of its own existence, needs and rights; something that, in the engineer's eyes, made the system no longer a simple computer program but a “person.”

Google reacted quickly, not only denying Lemoine's allegations but also placing the employee on leave and ultimately firing him. These measures, however, were not enough to keep the episode from gaining enormous media attention. And, in fact, there is no doubt that the story is fascinating, starting with Lemoine himself, a singular character who describes himself as a father, engineer, priest, veteran and ex-convict. But the insistent references to Isaac Asimov in the articles published on the subject since that June show that public interest in artificial intelligence (AI) is still driven essentially by the seductive fantasy of science fiction books and films.

The exploration of this almost fanciful idea that humanity could at some point be subjugated by technology clearly makes good clickbait. But it also distorts the discussion and makes us forget what really matters: the scope, benefits and threats of the AI already embedded in the tools we use every day, from search engines to social media and messaging apps. Certainly, this everyday AI seems much less frightening than the one in Asimov's stories or Lemoine's claims. But it is the use of this AI that, if neglected by academia, authorities and society, can pose a real risk to competition, democracy and individual freedoms, especially when freely applied by companies with great market power and enormous reach in terms of users (here called “Big Tech”).

It is precisely this perspective, the application of AI by Big Tech, that this article aims to explore, addressing its repercussions on competition and, more broadly, on society, and ultimately discussing which instruments can be used to deal with such repercussions in the Brazilian regulatory context.

2. AI: defining—and humanizing—the concept

AI, machine learning, algorithms: expressions that until a few years ago were confined to the circles of computer science experts are now terms frequently used in the media to address all manner of technology topics. Those concepts, however, seem not yet to have come down from their old academic pedestal: they remain ethereal, used vaguely even in public debates on the subject, proof that, despite their increasingly recurrent use, little is known about what they actually mean.

Without going into excessively technical detail, but seeking enough clarity to delimit the scope of the debate explored in this article, an algorithm can be defined as a finite and unambiguous sequence of instructions that produces results (output) from a given set of elements (input). As a concept, algorithms are nothing extraordinary: a culinary recipe is an algorithm, providing instructions for producing a result (the dish) from a certain set of elements (the ingredients). Nor are algorithms new: there are records of algorithms used to solve mathematical equations in Babylon, dating back to between 3000 and 1500 BC.
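To make the concept concrete, the sketch below (in Python, purely illustrative) implements an algorithm of the same family as those ancient procedures: the Babylonian method for approximating square roots. It is a finite, unambiguous sequence of instructions that turns an input (a positive number) into an output (an approximation of its square root).

```python
def babylonian_sqrt(n: float, steps: int = 10) -> float:
    """Approximate the square root of a positive number n."""
    guess = n / 2 if n > 1 else 1.0      # initial estimate
    for _ in range(steps):
        guess = (guess + n / guess) / 2  # refine: average guess and n/guess
    return guess

print(babylonian_sqrt(2))  # ~1.4142135623730951
```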

In the digital world, however, algorithms are translated into a language the computer can understand, which gives them unparalleled processing capacity and speed; their input is data, which today arrives in ever greater quantity and quality. Relying on these elements, algorithms allow humans to be more ambitious than ever, being applied to an astonishingly diverse range of tasks, from identifying faces in photos to detecting pests in crops.

The evolution of the concept of an algorithm leads to the idea of AI. AI can be understood as a set of algorithms configured on the basis of machine learning techniques. Machine learning, in turn, is the science of enabling computers to learn from data.
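A minimal sketch may help fix the idea (Python with the scikit-learn library; the dataset and model are arbitrary choices for illustration, not drawn from this article): instead of being given explicit rules, the program infers them from labeled examples.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labeled examples: flower measurements (input) and species (output).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model "learns" classification rules from the training data...
model = DecisionTreeClassifier().fit(X_train, y_train)

# ...and applies what it learned to examples it has never seen.
print(model.score(X_test, y_test))  # accuracy on unseen data
```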

The applications of AI, and therefore the uses and benefits it generates for society, are manifold: online search tools, credit analysis and fraud detection in the banking system, vehicle traffic monitoring, analysis of medical exams to identify health risks, space exploration, the operation of autonomous vehicles and the prediction of natural disasters, among many other examples.

Despite this configuration based on machine learning, the most fundamental aspects of AI are essentially human: it is humans who create AI and define the budget for its development, decide which data to use, test and calibrate its functioning, and determine when to put it into operation in real life. More than anything, it is humans who determine, at the origin, why and for what purpose a given AI is developed and applied.

Disregarding the relevance of the human element in the existence and application of AI, and keeping the concept on a mystical plane, not only feeds fantasies of apocalyptic robot uprisings but also stimulates discourses to the effect that AI cannot and should not be regulated at all. The most frequent arguments are those already common in anti-regulation discourse in many other technological contexts: technology cannot be controlled, and rules may prevent or hinder innovation.

If, on the one hand, AI can bring ever greater benefits to society, providing it with more comfort, knowledge, entertainment, health and safety through its countless applications, on the other, it raises risks proportional to its growing relevance. Traditionally cited examples include discrimination against candidates in recruitment processes, mistaken medical diagnoses and the conviction of people for crimes unfairly attributed to them in investigations carried out with the support of the technology.

In view of those risks, a number of jurisdictions today acknowledge that AI needs to be regulated. Led by the European Union, countries are debating ways to regulate AI effectively, some of them already with actual bills of law under consideration. This is the case of Brazil, the topic of our next section.

3. AI in Brazil: regulatory scenario

The development and application of AI are flourishing in Brazil. The consultancy IDC estimates that USD 504 million was allocated to investments in AI and machine learning in the country in 2022, a growth of 28% over the previous year. And although Brazil still occupies 39th position in the global ranking of countries with the greatest capacity to develop AI technologies, it is the best-placed Latin American country on the list, and the prospects for continued growth are favorable. In fact, the country stands out for its growing data production, owing to the increase in the number of internet users (100% growth in the last 10 years), daily time spent on the internet (9 hours per day) and the spread of smartphones (today there are 1.1 devices per inhabitant). Also because of its high degree of digitalization, Brazil is already a heavy user of AI-based systems: a reported 159 million Brazilians access social media daily.

The publication of the Brazilian Strategy for Digital Transformation showed that, as early as 2018, the federal government intended to dedicate itself to the topic of AI. And indeed, in line with AI's growing relevance in the country, the Brazilian Artificial Intelligence Strategy (EBIA) was published in 2021. The EBIA has nine sections, one for each thematic axis, such as governance and application in productive sectors. In the section on the regulatory axis, the EBIA admits that AI brings risks but is cautious about the advisability of regulating it, stating that “it is necessary to deepen the study of the impacts of AI in different sectors, avoiding regulatory actions that could unnecessarily limit AI innovation, adoption and development.”

Despite the caution expressed in the EBIA, three Bills seeking to regulate the development and application of AI in the country are being processed jointly in the Senate: PL 5,051/2019, PL 21/2020 and PL 872/2021. The three Bills have similar content, establishing foundations, objectives and guidelines for the use of AI in the country. They do not innovate, however: they repeat essential principles already agreed upon internationally to guide the use of AI, such as the contribution to sustainable development and the guarantee of transparency. Furthermore, they are programmatic in nature, imposing no practical duties, designating no authority to oversee the topic and providing for no consequences in cases of violation.

Not surprisingly, the Bills were criticized for their lack of effectiveness. In this context, and recognizing the complexity of the topic, the President of the Senate installed a Committee of Jurists to support the preparation of a replacement for the aforementioned Bills. This work gave rise to a new Bill of Law, No. 2,338/2023, which is currently under analysis by a special commission of the Brazilian Senate.

What is observed in these and other forums discussing AI regulation in Brazil, however, is that the analysis of AI's risks and benefits remains centered on the technology itself, irrespective of the agents who develop and use it to provide their products and services in the market. Thus, the purpose of each type of AI (e.g., recruitment, biometric identification of people) remains the primary criterion guiding the debate on the risks posed by the technology and the discussion about the appropriate level of regulation. There is no reflection on whether and how the characteristics of the agents who provide and use AI in their products and services (e.g., market power, reach in number of users) can create new risks or worsen those already identified in the analysis of AI's different purposes. And this can make all the difference in the form and scope of the regulation currently being debated in the country.

4. AI and Big Tech

Digital markets have peculiarities that, together, tend to hinder the process of competition between their players. Elements such as network effects, high switching costs and economies of scale and scope increase barriers to entry for new players and contribute to an increasing concentration in this industry.

It is for such reasons that key markets in the digital economy are already monopolized or oligopolistic. The main companies operating in this industry today are known as Big Tech. For the purposes of this article, Big Tech can be defined as companies that base their businesses on the digital economy, carry out a relevant part of their activities through online platforms, tend towards verticalization, dominate their respective niches, operate globally, have billions of users, and rely essentially on innovation, AI and massive data collection. The most cited examples are Google, Facebook, Amazon, Apple, Twitter and TikTok, but there are others, such as Kuaishou, Snapchat and Pinterest, whose reach varies by country. The largest digital companies are popularly known by the acronym GAFA, formed from the initials of Google, Apple, Facebook and Amazon and widely used in academia and the media.

It is estimated that the GAFA quartet alone invests between USD 18 billion and USD 42 billion in research and development per year. At first glance, it may seem quite desirable that companies with such financial resources and well-known qualified human capital dedicate themselves to the advancement of AI, considering the many positive effects the technology can bring. Yet in an industry already so prone to concentration, AI can also become fuel for deepening dominance and a tool for conduct that is harmful to society.

AI as an instrument for deepening one’s dominance

As mentioned, AI has data as its input: data is the raw material from which the machine learns to recognize patterns, relationships and trends, and it is on the basis of this learning that the machine gains the autonomy to fulfill the purposes assigned to it, from recognizing faces in photos and providing personalized content recommendations on social media to identifying fraud attempts. The greater the volume and the better the quality of the data used as input, the more accurate the AI will be and the more useful it will be in improving, personalizing and reducing the costs of the products and services companies offer in the market.
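The point about data volume can be illustrated with a small experiment (again a hedged sketch; dataset and model are arbitrary assumptions): training the same model on progressively larger slices of the same dataset typically yields progressively better accuracy on data it has never seen.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same model, growing amounts of training data: accuracy tends to rise.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training examples -> accuracy {model.score(X_test, y_test):.3f}")
```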

And who has the easiest access to data? Precisely Big Tech, through their platforms. Google, for example, concentrates more than 92% of searches made on online search tools worldwide, and Facebook has 2.93 billion monthly active users. By dominating the segments in which they operate and reaching more users than their competitors, Big Tech are able to collect more and better data. They also have the greatest access to financial resources, whether to develop AI organically or to acquire it through mergers and acquisitions of start-ups. And highly capable AI enables ever better and more personalized products and services.

Thus a perverse cycle of perpetuation of Big Tech's dominance is formed, with AI ceasing to be a mere competitive advantage and becoming a true barrier to entry, driving away new competitors unable to access comparable quantities and quality of data and financial resources. The tendency for this scenario to persist is already recognized in the United States and Europe, where authorities, noting the low probability of this dominance being challenged, are already considering structural remedies to restore competition in digital markets.

In addition to this potential to perpetuate the already deeply rooted dominance of Big Tech, AI risks becoming a tool for conduct that could harm competition and, on a large scale, consumers and society in general.

AI as a tool for abusing one’s dominant position

Within the antitrust literature, the debate about AI's anticompetitive potential has focused on its ability to enable collusion. But AI can also be used to abuse a dominant position. In 2017, for example, Google was fined EUR 2.42 billion by the European Commission for using algorithms to favor its own price comparison service in the ranking presented on its search engine's results page, harming competitors. In 2019, the European Commission opened an investigation against Amazon to verify whether the company was manipulating the algorithm that selects the products appearing in the so-called “buy box” in order to favor its own products, to the detriment of other retailers.

The anticompetitive potential of these AI-based recommendation and ranking tools does not depend on vertical integration, however. Also in 2019, for example, the Wall Street Journal reported allegations that Amazon had manipulated the algorithms that list products in response to user searches on its website to favor certain third-party products as well, namely those from which Amazon extracted greater margins and thus greater profit.

AI as a tool for harming freedom of expression and other individual rights

The pluralism of voices is a value intrinsically linked to the democratic principle. A diversity of information sources allows people to become aware of the countless political, ideological and philosophical conceptions that exist in society, giving them a repertoire with which to evaluate the issues under discussion in the public arena, instruction to assume the responsibilities inherent in the exercise of popular sovereignty, and preparation to enjoy their fundamental rights. Being regarded as legitimate sources of news, large social media platforms now play an essential role in enabling citizens' access to information, helping to implement the principle of pluralism and to consolidate democratic regimes.

However, by applying biased AI (AI that recommends only the most popular content, prioritizes feed placement for whatever generates the most user engagement regardless of the message it carries, censors other content for alleged violations of opaque internal terms of use, or surrounds users exclusively with content aligned with their own views), these large platforms end up undermining the pluralism of voices, limiting and shaping it in ways that stimulate polarization, propagate hate speech, disseminate fake news and create phenomena such as the filter bubble and the echo chamber. This scenario is aggravated by the fact that, as in all other cases in which AI is used to the detriment of individuals' interests, there is no transparency about how the algorithms work, so people do not even realize the need to seek other sources of information beyond the platforms.
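In caricature, the engagement-first logic described above can be reduced to a single ranking rule (a hypothetical sketch; the posts and field names are invented for illustration): whatever is predicted to generate the most engagement goes to the top of the feed, regardless of what the content actually says.

```python
# Hypothetical posts with a model-predicted engagement score (0 to 1).
posts = [
    {"id": 1, "kind": "local news report",  "predicted_engagement": 0.31},
    {"id": 2, "kind": "outrage-bait rumor", "predicted_engagement": 0.88},
    {"id": 3, "kind": "friend's photo",     "predicted_engagement": 0.52},
]

# The feed is ordered purely by predicted engagement: the ranking rule is
# indifferent to accuracy, civility or diversity of viewpoints.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["id"], post["kind"])
```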

Finally, there are the risks raised by AI that affect people directly in their individuality: their purchasing power, their freedom of decision, their privacy and, ultimately, their dignity. AI can be used in harmful ways, e.g., to manipulate consumer choices against their interests, to collect personal data in non-transparent ways and to foster social media addiction. Such harms do not depend on the characteristics of the agent to materialize. However, they gain magnitude and severity when the agent reaches a significant number of people, producing a systemic effect, which is a further justification for Big Tech to receive special attention in the context of AI regulation.

5. The search for a regulatory approach on AI and Big Tech in Brazil

The scenario described prompts reflection on the need for, and advisability of, regulation that specifically addresses the application of AI by Big Tech in the country. As elsewhere in the world, such platforms play a very relevant role in the Brazilian market: Google, for example, holds more than 90% of the online search market in Brazil; Facebook is accessed by more than 50% of the country's population every month; and large social media platforms are increasingly considered relevant sources of information by Brazilians. In this context, it is clear that Brazil is also exposed to the harmful risks of unregulated use of AI by these companies.

The effects of the application of AI by Big Tech have already been examined by the Brazilian antitrust authority (CADE). Google, for example, has been investigated for allegedly favoring its own services in its search tool and for scraping content from competing websites for use in its own services. Both cases, however, were closed for lack of evidence.

Indeed, given the opacity of algorithms and the lack of transparency about their decision-making criteria, proving that the use of AI has harmed competition is no simple task. Producing evidence requires information and technical evaluation capacity that antitrust authorities do not always have, making it difficult for them to act assertively in this field. Furthermore (and also as a result of this difficulty), action by these authorities against conduct involving AI tends to take longer, which may mean that any remedy to restore competition, if and when applied, comes too late, especially given the dynamism of digital markets.

It must also be considered that, as discussed, the harmful effects of Big Tech's use of AI are not limited to the competitive sphere: they impact pluralism and give a systemic dimension to the harm caused to individuals, making violations of rights such as privacy, equity and freedom of choice even more serious. Such issues are generally not addressed by antitrust enforcement and may remain blind spots or, under the jurisdiction of other authorities such as consumer protection agencies, face the same difficulties of detection and proof already faced by competition authorities.

In this context, an ex-ante approach indeed seems to be the ideal way to deal with the use of AI by Big Tech. However, as seen, some of the Bills currently under discussion in the Brazilian Congress to regulate AI are programmatic in nature, and discussions around alternative approaches still focus only on the risks raised by AI's uses, following the European Union's regulatory proposal.

According to the European proposal, the risks of AI vary with its use, which may pose (i) unacceptable risk, (ii) high risk or (iii) low risk, depending on the degree of threat that the use poses to the health, safety or fundamental rights of individuals. The use of AI for real-time remote biometric identification of people in public spaces, for example, generates a risk considered unacceptable and is therefore strictly prohibited; the application of AI to calculate credit scores is considered high risk, so its providers and users must comply with a series of special requirements related to transparency, risk management, data governance and technical documentation, among others.
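In simplified form, this purpose-based classification can be pictured as a lookup table (a hypothetical sketch following the examples above, not the regulation's actual taxonomy). Note that the agent deploying the system plays no role in the lookup, which is precisely the gap this article points to.

```python
# Hypothetical, simplified mapping of AI purposes to EU-style risk tiers.
RISK_BY_PURPOSE = {
    "real-time remote biometric identification in public spaces": "unacceptable",
    "credit scoring": "high",
    "spam filtering": "low",  # assumed illustrative example of a low-risk use
}

OBLIGATIONS = {
    "unacceptable": ["prohibited"],
    "high": ["transparency", "risk management", "data governance",
             "technical documentation"],
    "low": ["minimal duties"],
}

def obligations_for(purpose: str) -> list[str]:
    # The deployer's identity (start-up or Big Tech) never enters the decision.
    return OBLIGATIONS[RISK_BY_PURPOSE.get(purpose, "low")]

print(obligations_for("credit scoring"))
```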

It must be observed, however, that the European proposal, once in force, will not stand alone. It will be complemented by other relevant legal instruments, including two that have recently transformed digital markets but whose existence is frequently neglected in the Brazilian discussions on AI regulation. The first is the Digital Markets Act (DMA), which seeks to promote competition in digital markets, and the second is the Digital Services Act (DSA), which establishes rules for services and content offered by internet platforms.

Both the DMA and the DSA are especially concerned with Big Tech, making clear these companies' capacity to affect competition and harm society. Based on the concepts of “gatekeepers” in the DMA and “very large online platforms” (VLOPs) in the DSA, these laws seek to mitigate the risks raised by such players by imposing a series of specific restrictions and obligations on them, including with regard to AI and its use.

The DMA, for example, prohibits gatekeepers from using their ranking systems to favor their own products and services and requires them to comply with transparency obligations regarding ranking criteria, thereby directly confronting one of the most relevant uses of AI to abuse a dominant position. Furthermore, the DMA establishes a series of rules that seek to enable and facilitate the sharing of gatekeepers' data with competitors, which also mitigates the use of AI as an instrument for deepening Big Tech's dominance.

The DSA, in turn, requires VLOPs to identify and analyze, annually, the systemic risks arising from their services, with special attention to AI-based content moderation and recommendation systems, precisely those that, as seen, threaten free competition, the plurality of voices, privacy and human dignity by potentially limiting users' access to content, processing their personal data without transparency or manipulating their behavior. The DSA gives the issue due direction by requiring that, once these risks have been identified, VLOPs take effective mitigation measures, including adapting the AI system itself (by changing its decision-making process) and/or its level of transparency (by changing the respective terms and conditions).

Furthermore, specifically regarding content recommendation systems, the DSA requires that platforms' terms and conditions identify the parameters used to operate the system and that VLOPs give users the option to change them, including the alternative of a configuration that is not based on profiling. Such user autonomy counterbalances the power of large platforms and supports the effort to mitigate the risks raised by the application of AI by VLOPs.

What is clear, therefore, is that the application of AI by Big Tech is a cause for concern and requires a specific regulatory approach in Brazil as well. The country remains susceptible to AI's deleterious effects, and the few situations investigated by CADE could not be proven, resulting in the dismissal of the cases. Moreover, as explained above, there are risks arising from Big Tech's application of AI that go well beyond competition matters and are particularly serious in Brazil. Beyond a population made vulnerable to AI-enabled manipulation and exploitation by limited access to education, Brazil's democracy has been seriously challenged by increasingly polarized politics and came under direct attack in January 2023, when crowds of right-wing extremist protesters, fueled by disinformation spread on the main social media platforms, invaded and vandalized the country's main democratic institutions in its capital, Brasília. In this scenario, Brazil cannot afford to let access to information continue to be limited and manipulated through Big Tech's use of AI, at the risk of further weakening its already fragile democracy.

At this point, and considering that Brazil has no laws similar to the DMA or the DSA, the debates on AI regulation in the country prove to be a timely and convenient forum for considering the risks of the application of AI by Big Tech, highlighting the relevance of the topic and generating input for its inclusion in the Bills currently under discussion in the Brazilian Congress.

6. Conclusion

The development and deployment of artificial intelligence have kept pace with the growing digitalization of Brazil. Along with the recognition of the topic's growing importance came the acknowledged need for its regulation, and multiple Bills are currently under discussion in the Brazilian Congress. The European experience has served as a reference for a risk-based regulatory approach, under which different obligations are imposed on providers and users of artificial intelligence according to the level of risk raised by AI's different purposes. This article has sought to demonstrate the importance of expanding the debate to also consider the risks raised by the characteristics of the agents that provide and use the technology, paying special attention to large platforms. Such an approach is consistent with the European experience and necessary for Brazil to confront assertively the negative effects that may arise from the application of AI by Big Tech. Preventing and remedying these effects will be especially welcome in the country's current context: stimulating competition in already concentrated digital markets, protecting the population against exploitation and manipulation, and protecting an increasingly fragile democracy that depends more than ever on well-informed citizens fully capable of debate in order to strengthen itself.
