Google is stepping up its efforts to block “extremist and terrorism-related videos” on its platforms, using a combination of technology and human monitors.
The measures announced Sunday come on the heels of similar efforts unveiled by Facebook last week, and follow a call by the Group of Seven leaders last month for the online giants to do more to curb online extremist content.
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done,” said a blog post by Google general counsel Kent Walker.
Walker said Google would devote more resources to applying artificial intelligence to suppress YouTube videos used in support of extremist actions.
“This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user,” he said.
“We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content.”
Google acknowledged that technology alone cannot solve the problem, and said that it would “greatly increase the number of independent experts” on the watch for videos that violate its guidelines.
“Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech,” Walker said.
Google plans to add 50 non-governmental organizations to the 63 it already works with to filter inappropriate content.
“This allows us to benefit from the expertise of specialized organizations working on issues like hate speech, self-harm, and terrorism,” Walker wrote.
“We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalize and recruit extremists.”
A similar initiative was announced last week by Facebook, which earlier this year said it was adding 3,000 staff to track and remove violent video content.
Google’s Walker said the online giant would start taking “a tougher stance on videos that do not clearly violate our policies,” including videos that “contain inflammatory religious or supremacist content.”
He said YouTube would expand its role in counter-radicalization efforts using an approach that “harnesses the power of targeted online advertising” to reach potential recruits for extremist groups and offers “video content that debunks terrorist recruiting messages.”
