
Even with increasingly effective moderation tools, the reach of player content and communication expands every day, and online discourse has never been more heated. In this GamesBeat Next panel, Hank Howie, gaming industry evangelist at Modulate, was joined by Alexis Miller, director of product management at Schell Games, and Tomer Poran, vice president of solutions strategy at ActiveFence, to talk about best practices for moderating gaming communities of all sizes and demographics, from codes of conduct to strategy, technology, and more.
It’s an especially critical conversation as privacy, security and trust regulations become a bigger part of the discussion, Poran said. AI is growing more powerful as a tool not only to detect but also to create harmful or toxic content, and it has become crucial to the evolution of content moderation strategies in gaming spaces, which have lagged behind other online spheres.
“While child and user safety have been incredibly important pillars in gaming, even before social media, [gaming] is a phase behind social media in its approach to content moderation,” Poran said. “One thing we’re seeing in games, that maybe we saw several years ago on social media, is this move toward proactivity. We are seeing more sophisticated, proactive content moderation tools like ActiveFence and Modulate and many other vendors. We are seeing more investment from companies in these technologies.”
Until just a few years ago, it was nearly impossible to moderate voice content, Howie added, and even when the technology was developed, it was too expensive to implement. But once Modulate made it affordable, developers suddenly had access to everything that was said in their games.
“Every company we’ve ever worked with has said, we knew it was bad, but we didn’t think it was that bad,” Howie said. “The things that are said online, the damage that can be caused. And now the technology is there. We can stop it.”
And as the technology becomes more sophisticated, developers will be able to fine-tune their moderation strategies, something that has been lacking, Miller said.
“It’s an area where there are opportunities in the industry, recognizing that there are very different audiences between games,” Miller said. “Our audience is very young. That has a lot of different implications for what should be flagged versus, say, a casino gambling game.”
Safety by design
Safety by design makes these tools and strategies a priority from the beginning, determining what product features will be necessary and what safety guardrails should be established, from monitoring to compliance guidelines.
“It’s about asking not only what can go wrong, but also what someone looking to do harm would think of this feature, of this product,” Poran said. “It’s the practice of asking those questions and putting the right mechanisms in place. It looks very different between different products and different games. You ask yourself, what is unique to our game? What can and will go wrong in this unique environment?”
One of the solutions ActiveFence offers as part of safety by design is what it calls safety red teams, which draw on the company’s knowledge of the bad actors it monitors around the world to mimic their behaviors. It’s used as a tool to safely test a game before release and uncover any features that could be abused.
Implementing codes of conduct
“Once your community understands that certain behaviors will no longer be tolerated, that certain things can no longer be said, you will be surprised how quickly they will fall into line,” Howie said.
That means engaging your gaming community in moderation efforts to remove trolls who are harming your game and your community.
“We’ve seen statistics on our side where, if you can eliminate the toxicity, we’ve been able to reduce attrition by 15 to 20 percent with new players and with returning players,” he said. “Once they see that they can participate in a game and not be yelled at, not made to feel bad, they stay. They play more. I would venture to guess that they also spend more.”
A code of conduct evolves over time and must adapt to unexpected situations, Miller said. The Schell Games team puts a lot of thought into how it would handle moderation challenges such as cheating, toxic behavior, and child endangerment, and how to protect its users, who are quite young. The team tested tools like ToxMod with its beta channel, but things didn’t go as planned, she said.
“It was not a good predictor of what was going to happen once the game was live. We learned that our beta community was much nicer to each other than the live game community,” she said. “We actually had to revise some of our moderation policies, because we made our first draft based on the beta. Then when we went live, it was like, wow, this is… you’re right. It’s worse than you think it will be.”
A code of conduct cannot be developed in a vacuum, Poran agreed.
“It doesn’t make sense to write a 50-page code of conduct and then launch and see that half of it doesn’t apply to the community and isn’t the real problem,” he said. “The policy and the code of conduct are evolving; they continually evolve based on feedback from the community, from your agents, from your moderators coming back and saying, hey, this is happening frequently, and the code of conduct isn’t really clear on what I need to do here. So that needs to evolve. But it is something that has to be done intentionally and constantly.”
“Things like radicalization, recruitment, and then there’s always racism and misogyny; it all comes together at the same time,” Howie said. “The technology is there to deal with it. It’s a great time to take a look at this area and say, we just need to clean it up and make it better for our players.”