

The Human Factor versus Artificial Intelligence: Who Calls the Shots in Sports?

Misha Solodovnikov

Summary

  • The artificial intelligence systems adopted for the Olympics are set to “call the shots.” The current sports technology industry focuses mainly on broadcasting professional sports and providing data on professional athletes, primarily through data annotation and sensor-based input, and those underlying technologies are advancing toward automation, incorporating computer vision and AI.
  • If athletes or national teams voice concerns regarding the use of AI in the Olympics, it will present a difficult legal question.
  • Read more to learn about AI and the upcoming (and future) Olympic Games.
Dean Mouhtaropoulos via Getty Images

Make no mistake: the human factor can cost an athlete a medal or a title. Referees make incorrect or controversial calls, statistics are recorded inaccurately, and politics and the media industry are inclined to turn sports into a spicy “apple of discord.”

The pitch for bringing artificial intelligence (AI) into the sports world rests on its promise to be blemish-free. And maybe even “politics-free.”

The first step has been taken. The International Olympic Committee has announced its strategy for taking advantage of AI in sports, and officials said the technology could be used to make the games fairer by improving judging.

The artificial intelligence systems adopted for the Olympics are set to “call the shots.” The current sports technology industry focuses mainly on broadcasting professional sports and providing data on professional athletes, primarily through data annotation and sensor-based input, and those underlying technologies are advancing toward automation, incorporating computer vision and AI.

With the 2024 Paris Olympics around the corner, the International Olympic Committee has introduced its AI implementation plan. With computer vision and AI in the referee’s chair, this isn’t just a tech upgrade—it’s a hotbed of controversy. As the world watches, questions loom: Will it establish a precedent for the entire sports industry? Will all countries embrace it?

Pushback was immediate, specifically in response to the video surveillance system equipped with AI-powered cameras designed to identify potential security risks such as abandoned packages or crowd surges. Digital watchdog groups swiftly voiced concerns that even a temporary proposal to legalize smart surveillance systems could infringe on privacy, although the French government asserts that the systems will not use facial recognition technology. As it turns out, the real issue is not whether we can trust this technology, but whether everyone will trust it.

This situation presents numerous legal challenges for the most innovative lawyers worldwide, as every country has differing laws on AI and varying regulations, while many lack any legal framework to reference.

In the United States, public confidence and privacy rights take precedence. Meanwhile, the European Union, the United Kingdom, and each country participating in the Olympic movement take their own approaches.

For example, in the U.S., according to the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, issued on December 3, 2020, the standard is set to “use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties . . . consistent with applicable law and the goals of Executive Order 13859 [Maintaining American Leadership in Artificial Intelligence (Feb. 11, 2019)].”

AI can become a political issue if athletes do not trust it. The lack of global standards for AI governance leads to fragmented approaches, making international cooperation difficult. The varying impacts of AI in different socioeconomic contexts also complicate the establishment of universal standards for sports.

For instance, UNESCO has highlighted three significant concerns that could challenge Olympic results determined by AI. First, there is a lack of transparency in AI tools, as decisions made by AI are not always understandable to humans. Second, AI is not inherently neutral: Decisions influenced by AI can be prone to inaccuracies and discriminatory outcomes due to embedded or deliberately inserted biases. Lastly, the methods used to gather data raise issues regarding surveillance practices and privacy protections.

If athletes or national teams voice concerns regarding the use of AI in the Olympics, it will present a difficult legal question. The International Olympic Committee has traditionally portrayed the Olympic Games—and sports in general—as entities distinct from politics. According to its latest guidelines, a “fundamental principle” of the games is the neutrality of sport. It emphasizes that athletes’ expressions within Olympic venues—whether on the field of play during competitions or at official ceremonies—“may distract the focus from the celebration of athletes’ sporting performances.”

Athletes are not the only ones who might be concerned. Let’s address the proverbial elephant in the room: Are judges worried that AI systems might replace them?

The AI judging system was not designed to replace human judges—instead, it was introduced to assist in reviewing routines in cases of inquiries or unclear results. The International Gymnastics Federation first employed the Judging Support System for the pommel horse, rings, and vault at the 2019 World Championships, subsequently expanding its use to additional events at various competitions each year.

But how does an AI system make those decisions?

According to the European Commission’s summary of the European Union’s Ethics Guidelines for Trustworthy AI, “[h]uman agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.”

Over the past three decades, sports judges have increasingly turned to video review technology to tackle scoring disputes. Yet, the demand for a more precise system—one that could spot errors invisible to the human eye—remained unmet. Human judges, after all, are only human; they might overlook minute details critical to scoring, such as a gymnast’s split falling a mere one or two degrees short or a dismount veering off axis by just a few degrees.
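The split example above is, at bottom, an angle measurement: once pose estimation yields joint coordinates, checking whether a split falls a degree or two short is simple geometry. The sketch below is a toy illustration only; the tolerance and deduction values are invented, not taken from any scoring code.

```python
import math

def joint_angle(hip, leg_a, leg_b):
    """Angle in degrees at `hip` between the two leg vectors.

    Each argument is an (x, y) coordinate, e.g. from pose estimation.
    """
    v1 = (leg_a[0] - hip[0], leg_a[1] - hip[1])
    v2 = (leg_b[0] - hip[0], leg_b[1] - hip[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def split_deduction(angle, required=180.0, tolerance=2.0, step=0.1):
    """Hypothetical rule: no deduction within `tolerance` degrees of a
    full split; otherwise `step` per started 5 degrees of shortfall."""
    shortfall = required - angle
    if shortfall <= tolerance:
        return 0.0
    return step * math.ceil((shortfall - tolerance) / 5.0)

# A full split earns no deduction; a few degrees short does:
print(split_deduction(joint_angle((0, 0), (1, 0), (-1, 0))))  # 0.0
```

A one- or two-degree shortfall, invisible from the judges’ table, is trivial for such a system to flag, which is precisely the precision argument the article describes.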

The integration of AI, however, opens a Pandora’s box of ethical quandaries. Among these concerns are privacy and surveillance: the extent to which AI technologies monitor athletes and collect sensitive data. Then there’s the issue of bias and discrimination, as AI systems have the potential to perpetuate existing prejudices embedded within their algorithms.

Another significant legal problem lies in AI system ownership. If the system is owned by a particular state, or by a company closely associated with one (call it nation “A”), and nation “A” has strained relations or is at war with nation “B,” there is a risk that the system will exhibit bias against athletes from nation “B”: unfair treatment, discriminatory decisions, or skewed evaluations. The ownership and national affiliation of an AI system thus raise important questions about impartiality, fairness, and the potential for geopolitical influence over the integrity of its outputs. Addressing these concerns is crucial to ensuring that AI systems remain fair and unbiased regardless of national or political context.

AI relies on human-derived data and sources, which often carry biases and prejudices. When those biases are present, AI can magnify and perpetuate them, potentially leading to legal consequences, and lawyers must recognize and guard against such computer-generated bias in order to protect their clients. As Alan Turing, the English mathematician and computer scientist, famously wrote, “Machines take me by surprise with great frequency.”

What is the role of human judgment in the age of artificial intelligence? Do we trust our instincts and intellect, or do we rely more on our sophisticated creations? As Francis Bacon once wrote, “Age appears to be best in four things: Old wood best to burn, old wine to drink, old friends to trust, and old authors to read.”
