The ADL to eavesdrop on Call of Duty gamers

Started by yankeedoodle, September 04, 2023, 06:37:02 PM


yankeedoodle

From Information Liberation
Call of Duty Eavesdrops on In-Game Voice Chat With ADL-Trained AI to Help Ban Players for 'Toxic Speech'
https://www.informationliberation.com/?id=63946

Call of Duty has begun eavesdropping on in-game voice chat using AI trained by the Anti-Defamation League to help ban gamers for using "toxic speech," "hate speech," "discriminatory language, harassment and more."

https://twitter.com/CallofDuty/status/1696916952210378946

From PC Gamer, "Call of Duty enlists AI to eavesdrop on voice chat and help ban toxic players starting today":  https://www.pcgamer.com/call-of-duty-enlists-ai-to-eavesdrop-on-voice-chat-and-help-ban-toxic-players-starting-today/
Activision announced a partnership with AI outfit Modulate to integrate its proprietary voice moderation tool—ToxMod—into Modern Warfare 2, Warzone 2, and the upcoming Modern Warfare 3.

Activision says ToxMod, which begins beta testing in North American servers today, is able to "identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more."

[...] Call of Duty's ToxMod AI will not have free rein to issue player bans. A voice chat moderation Q&A published today specifies that the AI's only job is to observe and report, not punish.

"Call of Duty's Voice Chat Moderation system only submits reports about toxic behavior, categorized by its type of behavior and a rated level of severity based on an evolving model," the answer reads. "Activision determines how it will enforce voice chat moderation violations."
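The "observe and report" split described in that Q&A — the AI only files categorized, severity-rated reports, and the publisher decides enforcement — can be sketched roughly as follows. This is a minimal illustration only; every name here (`VoiceReport`, `Category`, the 0-to-1 severity scale, the review thresholds) is invented, not Activision's or Modulate's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    # Behavior types named in Activision's announcement
    HATE_SPEECH = "hate_speech"
    DISCRIMINATORY = "discriminatory_language"
    HARASSMENT = "harassment"

@dataclass
class VoiceReport:
    """A report submitted by the AI -- its only output. The AI never bans."""
    player_id: str
    category: Category
    severity: float  # rated level from an "evolving model"; scale invented here

def human_review(report: VoiceReport) -> str:
    """Enforcement is decided by the publisher, not the model."""
    if report.severity >= 0.8:
        return "escalate"  # hand to a human enforcement team
    return "log_only"
```

The point of the design, as described, is that the model's job ends at the report boundary; whatever happens to the player afterward is a separate, human-controlled decision.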

So while voice chat complaints against you will, in theory, be judged by a human before any action is taken, ToxMod looks at more than just keywords when flagging potential offenses. Modulate says its tool is unique for its ability to analyze tone and intent in speech to determine what is and isn't toxic. If you're naturally curious how that's achieved, you won't find a crystal-clear answer but you will find a lot of impressive-sounding claims (as we're used to from AI companies).

The company says its language model has put in the hours listening to speech from people with a variety of backgrounds and can accurately distinguish between malice and friendly riffing. Interestingly, Modulate's ethics policy states ToxMod "does not detect or identify the ethnicity of individual speakers," but it does "listen to conversational cues to determine how others in the conversation are reacting to the use of [certain] terms."

Terms like the n-word: "While the n-word is typically considered a vile slur, many players who identify as black or brown have reclaimed it and use it positively within their communities... If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation."
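Modulate's description above amounts to context-dependent severity: the same term is rated differently depending on how the rest of the chat reacts. A toy sketch of that logic, with an invented 0-3 severity scale and the reaction detection itself stubbed out as a boolean, might look like this:

```python
def rate_severity(term_is_slur: bool, others_offended: bool) -> int:
    """Rate a flagged term 0-3 (scale invented for illustration).

    Per Modulate's stated approach, a slur that visibly offends others
    in the chat is rated 'much more severely' than apparently
    reclaimed, conversational usage.
    """
    if not term_is_slur:
        return 0
    return 3 if others_offended else 1
```

The hard part, which this sketch deliberately omits, is the `others_offended` signal: inferring offense from conversational cues in live audio is exactly the claim Modulate makes but does not explain.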

[...]

https://www.youtube.com/watch?v=NudPZOafB14

In recent months, ToxMod's flagging categories have gotten even more granular. In June, Modulate introduced a "violent radicalization" category to its voice chat moderation that can flag "terms and phrases relating to white supremacist groups, radicalization, and extremism—in real-time."

The list of what ToxMod claims to be detecting here includes:
- Promotion or sharing ideology
- Recruitment or convincing others to join a group or movement
- Targeted grooming or convincing vulnerable individuals (i.e., children and teens) to join a group or movement
- Planning violent actions or actively planning to commit physical violence

"Using research from groups like ADL, studies like the one conducted by NYU, current thought leadership, and conversations with folks in the gaming industry," says the company, "we've developed the category to identify signals that have a high correlation with extremist movements, even if the language itself isn't violent. (For example, 'let's take this to Discord' could be innocent, or it could be a recruiting tactic.)"
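Modulate's own example — "let's take this to Discord" being innocent alone but suspicious in combination — suggests a weighted-signal model where no single phrase triggers a flag, only the co-occurrence of several weak signals. A hedged sketch of that idea (phrase list, weights, and threshold all invented for illustration, not Modulate's actual signals):

```python
# Invented weak signals: each is innocuous alone, per the company's example.
RECRUITMENT_SIGNALS = {
    "let's take this to discord": 0.3,
    "join our group": 0.4,
    "our movement": 0.2,
}

def radicalization_score(utterances: list[str]) -> float:
    """Sum weighted signals across a conversation."""
    score = 0.0
    for u in utterances:
        text = u.lower()
        for phrase, weight in RECRUITMENT_SIGNALS.items():
            if phrase in text:
                score += weight
    return score

def should_flag(utterances: list[str], threshold: float = 0.5) -> bool:
    """Flag only when multiple weak signals co-occur."""
    return radicalization_score(utterances) >= threshold
```

So `["Let's take this to Discord"]` alone would score 0.3 and pass, while adding "join our group" to the same conversation would cross the 0.5 threshold — which is also where the obvious criticism lands: the flag fires on correlation, not on anything violent actually being said.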

[...] ToxMod will roll out worldwide in Call of Duty at the launch of Modern Warfare 3 on November 10, starting with English-only moderation and expanding to more languages at a later date.

How nice that the ADL -- which has been teaching schoolchildren for years that only white people can be racist -- is helping train AI to ban gamers for their speech.  https://www.informationliberation.com/?id=62856



The ADL threatened Steam three years ago to get them to censor gamers' speech.  https://www.informationliberation.com/?id=61421

https://twitter.com/KeithWoodsYT/status/1697427439775850880

Earlier this year, Call of Duty was engulfed in controversy after they "canceled" their top streamer for saying pro-LGBT activists should "leave little children alone."

https://twitter.com/stclairashley/status/1667006629814861824

According to CoD and the ADL's standards, pro-LGBT activists must be allowed to groom children to become transgender but gamers must be banned for saying "it's okay to be white."