This situation presents numerous legal challenges for even the most innovative lawyers worldwide: every country regulates AI differently, and many lack any legal framework to reference at all.
In the United States, public confidence and privacy rights take precedence. Meanwhile, the European Union, the United Kingdom, and the other jurisdictions participating in the Olympic movement each take a different approach.
For example, in the U.S., under the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, issued on December 3, 2020, the standard is to “use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties . . . consistent with applicable law and the goals of Executive Order 13859 [Maintaining American Leadership in Artificial Intelligence (Feb. 11, 2019)].”
AI can become a political issue if athletes do not trust it. The lack of global standards for AI governance leads to fragmented approaches, making international cooperation difficult. The varying impacts of AI across socioeconomic contexts further complicate the establishment of universal standards for sports.
For instance, UNESCO has highlighted three significant concerns that could challenge Olympic results determined by AI. First, there is a lack of transparency in AI tools, as decisions made by AI are not always understandable to humans. Second, AI is not inherently neutral: Decisions influenced by AI can be prone to inaccuracies and discriminatory outcomes due to embedded or deliberately inserted biases. Lastly, the methods used to gather data raise issues regarding surveillance practices and privacy protections.
If athletes or national teams so much as voice concerns regarding the use of AI at the Olympics, a difficult legal question will arise. The International Olympic Committee has traditionally portrayed the Olympic Games—and sports in general—as entities distinct from politics. According to its latest guidelines, a “fundamental principle” of the games is the neutrality of sport. It emphasizes that athletes’ expressions within Olympic venues—whether on the field of play during competitions or at official ceremonies—“may distract the focus from the celebration of athletes’ sporting performances.”
Athletes are not the only ones who might be concerned. Let’s address the proverbial elephant in the room: Are judges worried that AI systems might replace them?
The AI judging system was not designed to replace human judges—instead, it was introduced to assist in reviewing routines in cases of inquiries or unclear results. The International Gymnastics Federation first employed the Judging Support System for the pommel horse, rings, and vault at the 2019 World Championships, subsequently expanding its use to additional events at various competitions each year.
But how does an AI system make those decisions?
According to the European Commission’s summary of the European Union’s Ethics Guidelines for Trustworthy AI, “[h]uman agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.”
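To make those three modes concrete, here is a minimal sketch in Python of how they might differ inside a scoring pipeline. It assumes a hypothetical system; the Proposal, OversightMode, and certify names are inventions for illustration, not part of the Judging Support System or any EU specification.

```python
# A minimal sketch, not any official system: how the three oversight modes
# named in the EU guidelines might differ inside a scoring pipeline.
# Proposal, OversightMode, and certify are hypothetical names for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()  # a human must approve every AI decision
    HUMAN_ON_THE_LOOP = auto()  # AI decides; a human monitors and may step in
    HUMAN_IN_COMMAND = auto()   # a human may override or shut the system down


@dataclass
class Proposal:
    routine_id: str
    ai_score: float
    rationale: str  # the AI's stated basis for the score, shown to the judge


def certify(p: Proposal, mode: OversightMode,
            judge_score: float | None = None) -> float:
    """Return the official score under the given oversight mode."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # No score becomes official without explicit human sign-off.
        if judge_score is None:
            raise ValueError("human-in-the-loop: a judge must certify the score")
        return judge_score
    # On-the-loop and in-command: the AI score stands unless a human intervenes.
    return p.ai_score if judge_score is None else judge_score


p = Proposal("pommel-horse-042", 14.633, "leg separation detected on circle 3")
print(certify(p, OversightMode.HUMAN_IN_THE_LOOP, judge_score=14.633))  # 14.633
```

The design point is simply that under human-in-the-loop, the AI’s number can never become official on its own, while the other two modes shift the human from gatekeeper to monitor.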
Over the past three decades, sports judges have increasingly turned to video review technology to tackle scoring disputes. Yet, the demand for a more precise system—one that could spot errors invisible to the human eye—remained unmet. Human judges, after all, are only human; they might overlook minute details critical to scoring, such as a gymnast’s split falling a mere one or two degrees short or a dismount veering off axis by just a few degrees.
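To see the kind of precision at stake, consider a short, hypothetical Python sketch, not the Judging Support System’s actual code, that measures a split angle from 3D pose keypoints and flags a shortfall of a degree or two that a judge at the table would likely never perceive. The keypoint values are made up for the example.

```python
# Hypothetical illustration: flag a split angle that falls a degree or two
# short of 180, a deviation too small for the human eye to catch reliably.
import numpy as np


def joint_angle(a: np.ndarray, vertex: np.ndarray, b: np.ndarray) -> float:
    """Angle at `vertex` (degrees) formed by the segments vertex->a and vertex->b."""
    u, v = a - vertex, b - vertex
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clamp for numerical safety before taking the arccosine.
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))


# Made-up 3D keypoints in meters: left ankle, hip center, right ankle.
left_ankle  = np.array([-0.98, 0.00, 0.03])
hip_center  = np.array([ 0.00, 0.00, 0.00])
right_ankle = np.array([ 0.98, 0.03, 0.00])

split = joint_angle(left_ankle, hip_center, right_ankle)
shortfall = 180.0 - split
if shortfall > 0.0:  # even a 1-2 degree shortfall gets flagged
    print(f"split {split:.1f} deg: {shortfall:.1f} deg short of a full 180")
```

On these sample keypoints the sketch reports a split of roughly 177.5 degrees, a 2.5-degree shortfall that is effectively invisible to the naked eye.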
The integration of AI, however, opens a Pandora’s box of ethical quandaries. Among these concerns are privacy and surveillance: the extent to which AI technologies monitor athletes and collect sensitive data. Then there’s the issue of bias and discrimination, as AI systems have the potential to perpetuate existing prejudices embedded within their algorithms.
Another significant legal problem lies in AI system ownership. If the AI system is owned by a particular state, or by a company closely associated with one, call it nation “A,” concerns about bias arise. If nation “A” has strained relations with, or is at war with, another country, nation “B,” the system might exhibit bias against athletes from nation “B” through unfair treatment, discriminatory decisions, or skewed evaluations. The ownership and national affiliation of an AI system thus raise questions of impartiality, fairness, and geopolitical influence over the integrity of its outputs; addressing these concerns is crucial to ensuring that AI systems are fair and unbiased, regardless of the national or political context.
AI relies on human-derived data and sources, which often carry biases and prejudices. When these biases are present, AI can magnify and perpetuate them, potentially leading to legal consequences. It is well known that biased AI design can spread these prejudices, and lawyers must recognize and guard against such computer-generated bias in order to protect their clients. As Alan Turing, the English mathematician and computer scientist, famously wrote, “Machines take me by surprise with great frequency.”
What is the role of human judgment in the age of artificial intelligence? Do we trust our instincts and intellect, or do we rely more on our sophisticated creations? As Francis Bacon once wrote, “Age appears to be best in four things: Old wood best to burn, old wine to drink, old friends to trust, and old authors to read.”