Threat Intelligence

Meta Takes Action Against Multiple Foreign Influence Campaigns


[Image: Disinformation campaigns on Facebook removed by Meta]

Social media giant Meta removed three foreign influence operations from the Facebook platform during Q3 2023. Two were Chinese in origin and one was Russian, the company says. Meta designates such operations as coordinated inauthentic behavior (CIB).

In each case, the purpose of the CIB was to influence public opinion by spreading false and/or misleading information. Overall, Russia, Iran, and China are the most prolific sources of foreign influence campaigns.

For one Chinese CIB, Meta removed 13 Facebook accounts and seven Groups. One of these Groups had attracted about 1,400 followers. False personas on both Facebook and X (formerly Twitter) posed as journalists, lawyers, and human rights activists.

Two clusters of fictitious personas targeted Tibet and the Arunachal Pradesh region of India. The Tibet cluster accused the Dalai Lama and his followers of corruption and pedophilia, while the second cluster accused the Indian government of corruption and supporting ethnic violence.

For the second Chinese CIB, Meta removed 4,789 Facebook accounts posing as Americans and attempting to influence US politics and US-China relations. This CIB was removed before it could effectively engage with authentic communities on the Meta platforms.

This CIB criticized both sides of the US political divide. Its methods included copy-pasting legitimate X posts, re-posting Elon Musk's posts, resharing legitimate Facebook posts, and linking to genuine mainstream media articles, most likely in an effort to appear more authentic.

[Image: Example of a fake account used in a foreign influence operation]

For the Russian CIB, Meta removed six Facebook accounts, one Page, and three Instagram accounts. This CIB targeted English-speaking audiences, and primarily talked about the war in Ukraine. It was supported by fictitious ‘media’ brands on Telegram, which were in turn promoted by Russian embassies and diplomatic missions on Facebook, X, and YouTube.

The Russian CIB accused Ukraine of war crimes, and Ukraine's supporters of 'Russophobia'. It also made critical comments about transgender and human rights, and criticized Biden and Macron while praising Russian activity and criticizing French activity in West Africa.

The group garnered around 1,000 followers on Facebook, and 1,000 followers on Instagram.


The primary common factor in these operations is that they seek to influence opinion on current geopolitical situations. Meta expects such activity to increase in 2024 in response to upcoming elections in America and Europe. Russian and Iranian campaigns were found in previous US election cycles (although they were not considered sufficient to affect election results). In 2024, Meta expects the volume of elections-related content, including influence campaigns, to scale dramatically with gen-AI.

Gen-AI is a double-edged sword. While it will increase adversarial activity, it also helps detect such activity. “At this time,” comments Meta, “we have not seen evidence that it will upend our industry’s efforts to counter covert influence operations – and it’s simultaneously helping to detect and stop the spread of potentially harmful content.”

CIBs attempt to maintain or rebuild their networks after removal. Meta calls the latter recidivist behavior. “Some of these networks may attempt to create new off-platform entities,” reports Meta (PDF), “such as websites or social media accounts, as part of their recidivist activity.”

There is also a growing practice of decentralizing online activities to increase resilience to takedowns. Meta gives the Chinese ‘Spamouflage’ operation as an example. “It was seen running on 50+ platforms, and it primarily seeded content on blogging platforms and forums like Medium, Reddit and Quora before sharing links to that content on ours.”

Meta suspects that this decentralization of activity is a response to increasing pressure from the major platforms. Long-running misinformation campaigns are becoming harder to maintain and more costly to operate. However, with operations spreading across multiple platforms, Meta suggests that cross-industry information sharing is increasingly important.

It is critical, says Meta, “to continue threat sharing across our industry and with the public so that all apps – big or small – can benefit from threat research by others in identifying potential adversarial threats.” Information sharing with and from government is also important. In 2020, Meta took down influence operations originating in Russia, Mexico, and Iran following a tip from law enforcement.

Related: WhatsApp Tightens Sharing Limits to Curb Virus Misinformation

Related: Russia’s Disinformation Efforts Hit 39 Countries: Researchers

Related: US Seizes Domain Names Used by Iran for Disinformation

Related: US Takes Down Iran-linked News Sites, Alleges Disinformation

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines, from The Times and the Financial Times to current and long-gone computer magazines.
