Facebook Claims 99% of Extremist Content Removed Without Users’ Help

Facebook claims growing success in fight against extremist content

At this week’s International Homeland Security Forum (IHSF) hosted in Jerusalem by Israel’s minister of public security, Gilad Erdan, Facebook claimed growing success in its battle to remove extremist content from the network.

Dr. Erin Marie Saltman, Facebook counterterrorism policy lead for EMEA, said, “On Terrorism content, 99% of terrorist content from ISIS and al-Qaida we take down ourselves, without a single user flagging it to us. In the first quarter of 2018 we took down 1.9 million pieces of this type of terrorist content.”

This was achieved by a combination of Facebook staff and machine learning algorithms. “Focusing our machine learning tools on the most egregious terrorist content we are able to speak to scale and speed of efforts much more openly. But human review and operations is also always needed.”

However, the implication that Facebook is winning the war against extremism is countered by a report (‘Spiders of the Caliphate: Mapping the Islamic State’s Global Support Network on Facebook’ PDF) published in May 2018 by the Counter Extremism Project (CEP).

CEP was launched in 2014 by former U.S. government officials, including former Homeland Security adviser Frances Townsend, former Connecticut Senator Joseph Lieberman, and Mark Wallace, a former U.S. Ambassador to the United Nations.

Its report mapped 1,000 Facebook profiles explicitly supporting IS between October 2017 and March 2018. Using Gephi, an open-source network analysis and visualization program, it found that visible ‘friends’ linked the 1,000 profile nodes with 5,347 edges. Facebook’s friending mechanism is particularly criticized as a means by which IS accounts find new targets to recruit.
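For a concrete sense of the method, here is a minimal sketch of that kind of graph analysis, written in Python with the networkx library standing in for Gephi, and with purely hypothetical profile IDs and friendships:

```python
import networkx as nx

# Hypothetical edge list standing in for the report's data: each
# pair is a visible friendship between two pro-IS profiles.
edges = [
    ("ID_551", "ID_548"),
    ("ID_551", "ID_202"),
    ("ID_202", "ID_548"),
    ("ID_733", "ID_551"),
]

G = nx.Graph()
G.add_edges_from(edges)
print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges")

# Sorting profiles by degree surfaces the 'friending' hubs that,
# per the report, connect recruiters to new targets.
for profile, degree in sorted(G.degree, key=lambda p: p[1], reverse=True):
    print(profile, degree)
```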

The report actually refers to the 99% claim, implying that Saltman’s claim is not a new development superseding the findings of CEP: “Given IS’s ongoing presence on the platform, it is clear that Facebook’s current content moderation systems are inadequate, contrary to the company’s public statements. Facebook has said that they remove 99% of IS and Al Qaeda content using automated systems…”

In fact, CEP fears that Facebook relies too heavily on its algorithms for finding and removing terrorist content. “This reliance on automated systems means IS supporters’ profiles often go unremoved by Facebook and can remain on the platform for extended periods of time.” It gives the example of a video from the IS Amaq news agency that was posted in September 2016 and remained available when the report was written in April 2018.


“The video depicts combat footage from the Battle of Mosul and shows how IS produced a variety of weapon systems including car bombs and rocket launchers,” notes the report.

Another example describes an ISIS supporter friending a non-Muslim and then gradually radicalizing him over the study’s six-month period. “ID 551 played a clear role in radicalizing ID 548 and recruiting him as an IS supporter,” says the report. “Facebook was the platform that facilitated the process, and it also functioned as an IS news source for him. Furthermore, given his connections with existing IS networks on Facebook, the moment that ID 548 wishes to become more than an online supporter he has the necessary contacts available to him. These are individuals who can assist with traveling to fight for the group or staging an attack in America. This case provides a detailed insight into the scope to which IS has taken advantage of Facebook’s half-measures to combating extremism online.”

This is not a simple problem. Taking down suspected terrorist content that is in fact posted and used legitimately is a direct infringement of U.S. users’ First Amendment rights. Dr. Saltman described this issue at the IHSF conference. “We see,” she said, “that pieces of terrorist content and imagery are used by legitimate voices as well; activists and civil society voices who share the content to condemn it, mainstream media using imagery to discuss it within news segments; so, we need specialized operations teams with local language knowledge to understand the nuance of how some of this content is shared.”

To help avoid freedom of speech issues, Facebook has made its enforcement process more transparent. “I am pleased to say,” said Saltman, “that just last month we made the choice to proactively be more transparent about our policies, releasing much more information about how we define our global policies through our Comprehensive Community Standards. These standards cover everything from keeping safe online to how we define dangerous organizations and terrorism.”

At the same time, Facebook has made it easier to appeal removal decisions, with appeals adjudicated by a human. This can be problematic. According to a January 2018 report in the Telegraph, an IS supporter in the UK who shared large amounts of IS propaganda had his account reactivated nine times after he complained to the moderators that Facebook was stifling his free speech.

Where clearly illegal material is visible, Facebook cooperates proactively with law enforcement. Waheba Issa Dais, a 45-year-old Wisconsin mother of two, is in federal custody after being charged this week with providing ‘material support or resources to a foreign terrorist organization.’

The Milwaukee Journal Sentinel reports, “The investigation appears to have started in January after Facebook security told the FBI that there was a ‘Wisconsin-based user posting detailed instructions on how to make explosive vest bombs in support of ISIS,’ the affidavit states. The person behind the Facebook posts, who the FBI said they determined was Dais, ‘also appeared to be engaged in detailed question and answer sessions discussing substances used to make bombs’.”

Ricin is mentioned, and it would be easy enough for a word like ‘ricin’ to trigger an alert. It is the less obvious extremist content that machine learning algorithms must catch; and machine learning remains a technology of great promise and partial delivery. “The real message is that Facebook has made it more difficult for ISIS and Al-Qaida to use their platform for recruiting,” Ron Gula, president and co-founder of Gula Tech Adventures, told SecurityWeek.

“Machine learning is great at recognizing patterns. Unfortunately, if the terrorists change their content and recruiting methods, they may still be able to leverage Facebook. This type of detection could turn into a cat and mouse game where terror organizations continuously change their tactics, causing Facebook to constantly have to update their rules and intelligence about what should be filtered.”
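The gap Gula describes is easy to illustrate. A naive keyword alert (the term list below is invented for illustration) catches an explicit word like ‘ricin’ trivially, but a rephrased post sails straight through; that is the cat-and-mouse dynamic in miniature:

```python
# Invented term list, for illustration only.
ALERT_TERMS = {"ricin", "explosive vest"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any explicit alert term."""
    lowered = text.lower()
    return any(term in lowered for term in ALERT_TERMS)

print(flag_post("Q&A session discussing ricin"))         # True: obvious term
print(flag_post("obliquely worded recruiting message"))  # False: slips through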

The extremists won’t make it easy. “They have become very good at putting a reasonable ‘face’ on much of their online recruiting material,” explains John Dickson, Principal at the Denim Group. “Once they have someone interested is when they fully expose their intent. Given this situation, I’m not sure how [the algorithms] don’t create a ton of false positives and start taking down legitimate Islamic content.”

Nearly every security activity creates false positives. “I suspect this will be no different,” he continued. “Machine learning or more specifically supervised learning likely will help aid security analysts attempting to distinguish between legitimate jihadist recruiting material and generic Islamic content. But it will still need a human to make the final decisions – and that human is likely to be biased by the American attitude towards freedom of speech.”
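A hedged sketch of what Dickson describes: a supervised text classifier (scikit-learn here, trained on an invented four-post corpus) produces a probability rather than a verdict, leaving the final call to a human analyst:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented four-post training set; a real system needs a large,
# carefully labeled corpus and constant retraining as tactics shift.
posts = [
    "join us and travel to fight for the caliphate",    # 1 = recruiting
    "news segment condemns the latest terror attack",   # 0 = legitimate
    "support the fighters, the reward is great",        # 1 = recruiting
    "scholars discuss the history of the region",       # 0 = legitimate
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model outputs a probability, not a verdict: borderline scores
# are routed to a human reviewer instead of being auto-removed.
score = model.predict_proba(["video condemning the attack"])[0][1]
print(f"recruiting likelihood: {score:.2f}")
```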

In the final analysis, Facebook is caught between competing demands: a very successful business model built on making ‘friending’ and posting easy; the First Amendment’s protection of free speech; and moral and legal demands to find and exclude disguised extremist needles hidden in a very large haystack of 2.2 billion monthly active users.

Related: Online – The Other Side of Terrorism 

Related: Should Social Media be Considered Part of Critical Infrastructure? 

Related: Twitter Suspends Over 100K Accounts Related to Terrorism 

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
