AI system to protect athletes from online abuse during Paris 2024

A new AI-powered monitoring service will protect athletes and officials from online abuse at both the Paris 2024 Olympic and Paralympic Games. This will mark the first time that AI has been used to provide safe online spaces for such a large number of athletes competing in so many sports at the same time.

The AI-powered system will monitor thousands of accounts on all major social media platforms and in 35+ languages in real time. Any identified threats will be flagged, so that abusive messages can be dealt with effectively by the relevant social media platforms – in many cases before the athlete has even had the chance to see the abuse.
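The IOC has not published the system's internals, but the flag-and-escalate workflow described above can be sketched in miniature. This is a purely illustrative toy, assuming a trivial keyword blocklist in place of the real multilingual AI classifier; every name and rule here is hypothetical:

```python
# Illustrative sketch only: the IOC has not disclosed the system's design.
# The blocklist, dataclass fields and triage rule are all hypothetical.
from dataclasses import dataclass

# Toy blocklist standing in for a multilingual AI abuse classifier
ABUSIVE_TERMS = {"loser", "cheat"}  # hypothetical example terms

@dataclass
class Post:
    author: str
    target: str   # athlete or official account the post mentions
    text: str

def is_potentially_abusive(post: Post) -> bool:
    """Flag a post if it contains any blocklisted term (toy heuristic)."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    return bool(words & ABUSIVE_TERMS)

def triage(posts: list[Post]) -> list[Post]:
    """Return posts to escalate to the relevant platform for review."""
    return [p for p in posts if is_potentially_abusive(p)]

posts = [
    Post("fan1", "@athlete", "Amazing race, congratulations!"),
    Post("troll9", "@athlete", "You are a cheat and a loser."),
]
flagged = triage(posts)  # only the second post is escalated
```

In the real deployment, the classification step runs continuously across platforms and languages, and escalation happens before the athlete sees the message.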

“The athletes are at the heart of everything we do at the IOC,” said IOC President Thomas Bach following the announcement of the project. “I know that the athletes have a unique and valuable perspective on how the Games should be organised and on the issues that affect them while competing. I am therefore delighted that the Athletes’ Commission and Medical and Scientific Commission are responding to this feedback through initiatives like the AI system to protect athletes at Paris 2024 from online abuse.”

A joint project developed by the IOC Athletes’ Commission and the IOC Medical and Scientific Commission, the system will not only prioritise the safety and well-being of athletes across the Games, but will also help the IOC better understand the challenges that athletes face in relation to online abuse, enabling it to further enhance athlete protection at future events.

“Athletes can focus on their performance”

The introduction of this new monitoring platform at Paris 2024 is part of the IOC’s ongoing commitment to safe sport and, according to Kirsty Burrows, Head of the Safe Sport Unit at the IOC, highlights how online abuse has become a key challenge affecting society and sport today.

“Sport and social media are inextricably linked. At Paris 2024, we are expecting around half a billion social media posts,” explained Burrows at the launch of the Olympic AI Agenda on 19 April.

“There are so many fantastic opportunities for athlete engagement, but unfortunately online violence is inescapable, particularly when athletes rely on social media for their profile. This is a critical challenge for us because safe sporting environments also have to mean safe digital environments.”

The online monitoring system will be available to cover 15,000 athletes and more than 2,000 officials across the Olympic and Paralympic Games. It will be a key part of what Burrows calls “a package of safeguarding systems” to ensure safe online and offline environments during the Games, with the ultimate goal being to support and promote athletes’ physical and mental health and well-being, so that they can focus on competing in the biggest event of their sporting careers.

“This package of initiatives is designed to try and ensure that the Olympic Games are a safe space,” explains Burrows. “These systems are in place so that the athletes can really focus on their performance, and they know that everything else is taken care of.”

Not only will the platform provide safe online spaces for participants; it will also help the IOC further understand online abuse, and in turn aid the development of strategies to address the issue.
“This is the first time this solution will be used to protect so many people in so many sports,” said Burrows. “By utilising AI, we’ll be able to better understand online violence in sport and develop data-driven policies and interventions to help create physically and psychologically safe environments for athletes.”

By successfully protecting athletes at Paris 2024, Burrows hopes this can drive positive change on an even bigger scale.

“Creating a safe space inside a bubble at the Games is one thing, but the whole world is watching, and therefore there can be very high exposure to abuse. The use of AI to protect people from online harm is a great step, but the challenge in this space is that it’s not just an issue that affects sport. We all have to work together to really make a change and to create safer online spaces.”

Successfully piloted during Olympic Esports Week

The AI-powered tool was successfully piloted during Olympic Esports Week, where it monitored targeted, abusive content posted on the social media accounts of players participating in the event. This included identifying slurs, offensive images and emojis or other phrases that could indicate abuse.

It subsequently analysed more than 17,000 public posts, flagging 199 potentially abusive messages from 48 authors targeting accounts from a study set of 122 players and two official IOC accounts. A total of 49 posts were then verified as abusive by a team of experts against an agreed definition of discriminatory abuse and flagged for action via the relevant social media platforms.
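Taken together, the pilot figures imply that flagging and verification both operated at small rates. As a quick check, computed directly from the numbers quoted above:

```python
# Quick arithmetic on the Olympic Esports Week pilot figures quoted above.
posts_analysed = 17_000   # public posts analysed ("more than 17,000")
flagged = 199             # potentially abusive messages flagged by the AI
verified = 49             # posts confirmed abusive by the expert team

flag_rate = flagged / posts_analysed   # share of posts flagged
precision = verified / flagged         # share of flags verified as abusive

print(f"{flag_rate:.2%} of posts flagged")    # ~1.17%
print(f"{precision:.1%} of flags verified")   # ~24.6%
```

In other words, roughly 1 in 85 posts was flagged for review, and about a quarter of those flags were confirmed against the agreed definition of discriminatory abuse.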

The pilot study helped the IOC to understand the size, scale and gravity of the issue of online discriminatory abuse and threats targeted at athletes publicly on social platforms, and provided a blueprint for ongoing monitoring, analysis, investigation and action.
