Curbing toxic players with data and machine learning
A Unity study paints a grim picture of the gaming community's propensity for toxicity. It found that seven out of 10 players have experienced some form of toxic behavior: nearly half of players say they at least "sometimes" experience it, and 21% experience it "every time" or "often."
Across multiplayer video games, online battle arena games, and other forms of online gaming, players continuously interact in real time to coordinate or compete. Interactivity is integral to gameplay dynamics, but it is also a prime opening for toxic behavior, which manifests in many forms. Another study, by the Anti-Defamation League (ADL), found that toxic behavior has been consistent over the years: for the third consecutive year, harassment experienced by adult gamers remains at an alarmingly high level, with five out of six adults (83%) ages 18-45 experiencing it in online multiplayer games.
As toxicity has become more widespread across the gaming community, so have its consequences. Data can play a critical role in curbing toxic behavior in gaming: organizations can better address toxicity by integrating toxicity detection into games on a real-time, scalable architecture. It's not just about collecting the data; it's about implementing tools that use machine learning to find and remediate the behavior that has become a key indicator of player churn.
The harsh consequences of toxic behavior
Toxicity in gaming has many concerning effects. It negatively impacts players' mental health over the long term, and the personal toll that toxic behavior like griefing, cyberbullying, and sexual harassment takes on gamers and the community cannot be overstated. It also has an acute effect on women. The Unity study mentioned above found that men were more willing to engage in voice chat and other communication tools while playing games, and therefore experienced toxic behavior more often. Women, however, are more likely to avoid online communication tools because of gendered abuse, and those who do participate are more likely to stop playing the game altogether due to toxic behavior.
Leaving the door open for toxic behavior not only takes a personal toll on gamers and the community but also affects brand image: both long-term and short-term player retention suffer when toxic events occur. The 2020 ADL study showed that over 80% of players had recently experienced toxicity, and of those, 20% reported leaving the game because of these interactions. Players thrive on personalized in-game experiences, and toxic behavior undermines the positive 1:1 experience that keeps them entertained and satisfied with gameplay.
Simply put, letting toxic behavior fall by the wayside instead of taking a proactive approach to combat it can send your audience elsewhere to find an experience that matches their needs.
How data and machine learning can help curb it
Gaming companies can now more easily integrate toxicity detection into their own games through chat data. While chat data is a natural place to evaluate toxic language used within a game, more and more gaming companies are finding that unstructured data sources like voice and images are an increasingly prevalent channel for bullying behavior. Deriving insights from unstructured data is particularly complex and an area where many leading gaming platforms struggle. Platforms that can ingest any type of data and apply ML/AI on top of it to find and remediate troubling behavior at scale are now one of the key big data investment areas across the gaming ecosystem.
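As an illustration, here is a minimal Python sketch of scoring chat messages for toxicity, assuming a small labeled set of moderated messages. The example messages, the TF-IDF-plus-logistic-regression model choice, and the 0.7 flagging threshold are illustrative assumptions, not any specific vendor's pipeline.

```python
# Minimal sketch of a chat-toxicity classifier, assuming a labeled dataset of
# chat messages (1 = toxic, 0 = not toxic). Messages and labels are
# illustrative placeholders, not real game data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; a real system would use thousands of moderated messages.
messages = [
    "gg well played everyone",
    "nice shot, thanks for the heal",
    "uninstall the game you are worthless",
    "report this idiot, total trash player",
]
labels = [0, 0, 1, 1]

# Character n-grams help catch obfuscated insults (swapped letters, symbols).
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

# Score incoming chat lines; anything above a tuned threshold gets flagged for review.
incoming = ["you are trash, uninstall"]
toxicity_score = model.predict_proba(incoming)[0][1]
if toxicity_score > 0.7:  # threshold is an assumption, tuned per game and community
    print(f"Flag for moderation (score={toxicity_score:.2f})")
```

In practice the model, features, and threshold would be tuned against each game's own chat data and moderation history.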
Implementing a real-time, scalable architecture that detects toxicity makes it possible to filter millions of events and simplify workflows for community relationship managers. Severely toxic events can be flagged and addressed in real time, and leaders can choose an automatic response for such events, like muting players or quickly alerting a customer relationship management (CRM) tool to an incident that is impacting player retention. A data platform also serves to monitor brand perception by processing large data sets from disparate sources and turning them into digestible reports and dashboards.
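A minimal sketch of that flag-and-respond loop might look like the following, assuming each chat event already carries a toxicity score from a model like the one above. The thresholds and the mute_player and alert_crm hooks are hypothetical placeholders for whatever moderation and CRM integrations a studio actually runs.

```python
# Illustrative routing of scored chat events, assuming scores arrive with each event.
from dataclasses import dataclass

@dataclass
class ChatEvent:
    player_id: str
    message: str
    toxicity_score: float  # 0.0 (clean) to 1.0 (severely toxic)

def mute_player(player_id: str) -> None:
    # Placeholder for the game's moderation API.
    print(f"[auto-action] muting {player_id}")

def alert_crm(event: ChatEvent) -> None:
    # Placeholder for opening an incident in the CRM tool.
    print(f"[crm] opening incident for {event.player_id}: {event.message!r}")

def route(event: ChatEvent) -> None:
    # Severe events get an immediate automated response plus a CRM incident;
    # borderline events go to a community manager's review queue instead.
    if event.toxicity_score >= 0.9:
        mute_player(event.player_id)
        alert_crm(event)
    elif event.toxicity_score >= 0.6:
        print(f"[review-queue] {event.player_id}: {event.message!r}")
    # Everything below the lower threshold passes through untouched.

route(ChatEvent("player_42", "you are trash, uninstall", 0.95))
route(ChatEvent("player_7", "that play was questionable", 0.65))
```

In production this routing would sit inside a streaming job rather than a simple function call, but the decision logic stays the same.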
Riot Games turned to Apache Spark and Google's TensorFlow to improve its ability to identify and punish players who use abusive or "toxic" language in in-game chat. Its team used a neural model called Word2Vec to "dig out the language used by players" and understand its meaning based on the context in which it was used. This was a critical first step in building a list of the language it did not want to see in chat. Intel has also worked with Spirit AI's Ally technology to process human speech, enabling it to flag toxic behavior in both voice chat and live streams. We're seeing more use cases like these as companies continue to realize the severe implications of toxic behavior.
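That Word2Vec step can be sketched with Spark MLlib. This is not Riot's actual pipeline, just an illustration of how embeddings learned from tokenized chat logs let similar insults cluster together, so a word list can be seeded from a few known examples; the sample messages and hyperparameters below are illustrative assumptions.

```python
# Hedged sketch: learn word embeddings from tokenized chat logs with Spark MLlib,
# then look up words that sit near a known insult to expand a blocklist.
from pyspark.sql import SparkSession
from pyspark.ml.feature import Word2Vec

spark = SparkSession.builder.appName("chat-embeddings").getOrCreate()

# Placeholder chat logs; a production job would read tokenized messages from storage.
chat = spark.createDataFrame(
    [
        ("gg well played everyone".split(),),
        ("report this idiot total trash".split(),),
        ("you are worthless uninstall".split(),),
    ],
    ["tokens"],
)

# Small embeddings for illustration; vectorSize and minCount would be tuned on real data.
w2v = Word2Vec(vectorSize=50, minCount=1, inputCol="tokens", outputCol="embedding")
model = w2v.fit(chat)

# Words near a known insult can seed the list of language to act on in chat.
model.findSynonyms("trash", 3).show()
```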
The move to real-time in gaming
Beyond toxicity, gaming leaders need to maintain a strong focus on real-time tools to combat other problems, like cheating. Cheating in games has evolved and is now very advanced, and gaming community managers need to act immediately to catch cheaters.
To maintain the best gaming experience, interventions need to happen in the moment, not after the fact. A real-time platform addresses toxicity, but it also extends to use cases like cheating and others. At the end of the day, an agile platform drives solutions for all of these use cases. The right tool saves time, meaning whoever manages the community, whether data scientists, business owners, or your head of community management, can spend more hours on creative game innovation.