Gaming giant Activision uses AI to eavesdrop on players for ‘toxicity’
Posted by freedomforall 1 year, 1 month ago to Technology
Excerpt:
"Activision last week began listening in to gamer chatter using an AI program which scans conversations for “toxicity.”
The gaming giant has partnered with tech firm Modulate to develop ToxMod, a program which searches both in-game text and voice chats for “hate speech, discriminatory language, sexism, harassment, and more.” As of Wednesday, ToxMod now eavesdrops on North American players of Call of Duty: Warzone™ and Call of Duty: Modern Warfare II. In November, the program will be rolled out for all Call of Duty: Modern Warfare III players worldwide, excluding Asia.
Over one million gaming accounts have so far had their chats restricted by Call of Duty’s “anti-toxicity team,” the company boasted in a blog post Wednesday. Offenders first receive a warning and then penalties if they re-offend.
...
Tech corporations are making increasing use of AI technology to censor content, a practice they call “content moderation.”
Last month OpenAI, the company behind popular chatbot ChatGPT, published guidance on how to use artificial intelligence to streamline censorship. Its proposed method involves feeding the AI program a policy outlining which content should be suppressed. The program is then tested with examples and the prompts are adjusted as necessary.
Although OpenAI’s method would no longer require human “moderators,” the censorship guidelines would still be created by human censors, or “policy experts.” Examples include a user asking the program where to buy ammunition or how to make a machete."
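For the curious, the "method" described above is essentially prompt-driven classification: hand a model a written policy, feed it messages, and ask for a verdict. Here is a minimal sketch of that idea in Python, using the OpenAI chat API. This is my own illustration, not Activision's ToxMod or OpenAI's actual pipeline; the policy wording, model name, and labels are assumptions for demonstration only.

```python
# Minimal sketch of policy-driven "content moderation" with an LLM.
# Not ToxMod and not OpenAI's production system; the policy text,
# model name, and label set below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """You are a moderation classifier.
Reply with exactly one word: VIOLATION if the message contains hate speech,
harassment, or threats; otherwise reply OK."""

def classify(message: str) -> str:
    """Return 'VIOLATION' or 'OK' for a single chat message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify("gg everyone, nice match"))  # expected: OK
```

The "adjusting the prompts" step the article mentions would amount to rewording POLICY and re-running it against a set of example messages until the labels match what the policy authors intended.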
"Activision last week began listening in to gamer chatter using an AI program which scans conversations for “toxicity.”
The gaming giant has partnered with tech firm Modulate to develop ToxMod, a program which searches both in-game text and voice chats for “hate speech, discriminatory language, sexism, harassment, and more.” As of Wednesday, ToxMod now eavesdrops on North American players of Call of Duty: Warzone™ and Call of Duty: Modern Warfare II. In November, the program will be rolled out for all Call of Duty: Modern Warfare III players worldwide, excluding Asia.
Over one million gaming accounts have so far had their chats restricted by Call of Duty’s “anti-toxicity team,” the company boasted in a blog post Wednesday. Offenders first receive a warning and then penalties if they re-offend.
...
Tech corporations are making increasing use of AI technology to censor content, a practice they call “content moderation.”
Last month OpenAI, the company behind popular chatbot ChatGPT, published guidance on how to use artificial intelligence to streamline censorship. Its proposed method involves feeding the AI program a policy outlining which content should be suppressed. The program is then tested with examples and the prompts are adjusted as necessary.
Although OpenAI’s method would no longer require human “moderators,” the censorship guidelines would still be created by human censors, or “policy experts.” Examples include a user asking the program where to buy ammunition or how to make a machete."
I remember just a few years ago somebody in a regional chat on World of Warcraft trying to bait Christians into an argument. They kept emphasizing how “gay” they were and how God was wrong and this and that and blah blah blah. Didn’t shut up about it for about ten minutes. Finally, I was the only one who responded. My response was: “OK, we get it… you’re gay. Nobody cares. Now shut up and come down off your cross. We need the wood.”
I got no response back. But they did finally shut up.