Call of Duty uses AI to detect 2 million toxic voice chats

Suswati Basu | Readwrite

Activision, the publisher of the Call of Duty franchise, has implemented AI-powered voice moderation software that has detected more than two million toxic voice chats since its introduction in August 2023. The moderation system, which targets hate speech and harassment, has been rolled out globally in Call of Duty: Modern Warfare II, Modern Warfare III, and Warzone, with support for English, Spanish, and Portuguese. According to Activision, the system has led to a significant reduction in repeat offences and in players' exposure to severe online abuse, and the company plans to expand it to more languages and to combat in-game toxicity more effectively.
