Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol

Deepfakes, left unchecked, are set to become the cybercriminals’ next big weapon

Deepfake technology uses artificial intelligence techniques to alter existing audio or audio-visual content, or to create new content altogether. It has legitimate uses, such as satire and gaming, but is increasingly turned to malicious ends. And yet, in 2019, research from iProov showed that 72% of people were still unaware of deepfakes.

Deepfakes are used to create a false narrative that appears to originate from trusted sources. The two primary threats are to civil society (spreading disinformation to manipulate opinion toward a desired effect, such as a particular election outcome) and to individuals or companies (for financial return). The danger to civil society is that, left unchecked, entire populations could have their views and opinions swayed by deepfake-delivered disinformation campaigns distorting the truth of events, until people can no longer distinguish truth from falsehood.

The cybersecurity threat to companies is that deepfakes could increase the effectiveness of phishing and business email compromise (BEC) attacks, make identity fraud easier, and manipulate company reputations to cause an unjustified collapse in share value.

Deepfake technology

A deepfake is developed by training a neural network on existing images or footage until it learns the patterns needed to produce a convincing likeness. As with all machine learning, the quantity of data available for training is critical: the larger the dataset, the more convincing the output. Large training datasets are already freely available on the internet.

Two recent developments have increased both the quality of deepfakes and the threat they pose. The first is the adaptation and use of generative adversarial networks (GANs). A GAN pits two models against each other: a generative model and a discriminative model. The discriminative model repeatedly tests the generative model’s output against the original dataset. “With the results from these tests,” writes Europol (Law Enforcement and the Challenge of Deepfakes, PDF), “the models continuously improve until the generated content is just as likely to come from the generative model as the training data.” The result is a false image that the human eye cannot detect as false, but that remains under the control of an attacker.
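
To make that generator/discriminator dynamic concrete, here is a minimal, hypothetical GAN training loop in PyTorch. It trains two small fully connected networks on random stand-in data rather than images; all dimensions, architectures and hyperparameters are illustrative assumptions, not anything described in the Europol report.

```python
# Toy GAN training loop (sketch, not a deepfake system): the discriminator
# repeatedly tests generator output against real data, and both models
# improve until fakes become hard to distinguish from the training set.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, data_dim)  # stand-in for a real training set

for step in range(1000):
    # Discriminator step: label real samples 1, generated samples 0.
    z = torch.randn(256, latent_dim)
    fake = generator(z).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(256, 1)) +
              loss_fn(discriminator(fake), torch.zeros(256, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    z = torch.randn(256, latent_dim)
    g_loss = loss_fn(discriminator(generator(z)), torch.ones(256, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In a real deepfake pipeline both models would be deep convolutional networks trained on face imagery, but the adversarial loop is the same.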

The second is the combination of 5G bandwidth and cloud compute power, which allows video streams to be manipulated in real time. “Deepfake technologies can therefore be applied in videoconferencing settings, live-streaming video services and television,” writes Europol.


Cybersecurity threats

Few criminals have the necessary expertise to develop and use compelling deepfakes – but this is unlikely to delay their use. The continuing evolution and development of crime-as-a-service (CaaS) “is expected to evolve in parallel with current technologies, resulting in the automation of crimes such as hacking and adversarial machine learning and deepfakes,” says Europol.

Deepfake threats fall into four main categories: societal (stoking social unrest and political polarization); legal (falsifying electronic evidence); personal (harassment and bullying, non-consensual pornography and online child exploitation); and traditional cybersecurity (extortion, fraud and the manipulation of financial markets).

Forged passports with a deepfake photograph will be difficult to detect. These could then be used to facilitate multiple other crimes, from identity theft and trafficking to illegal immigration and terrorist travel.

Deepfakes of embarrassing or illegal activity could be used for extortion. Phishing could move to a new level if the lure includes video or voice of a trusted friend. BEC attacks could be supported by a video message and voice identical to the genuine CEO. But the really serious threat could come from market manipulation.

VMware’s Tom Kellermann recently told SecurityWeek that market manipulation already exceeds the value of ransomware to the criminals. This is currently achieved through the use of stolen information that allows the criminal to benefit from what is essentially insider trading. However, the use of deepfakes could give the criminals a more direct approach. False information, embarrassing revelations, accusations of illegal exports and much more could cause a dramatic collapse in the share value of a company. Criminal gangs with deep pockets, or even rogue nation states seeking to offset sanctions, could buy the shares when low, and make a massive ‘killing’ when the value inevitably rises again.

Security is based on trust. Deepfakes provide trust where none should exist.

Detection of deepfakes

The quality of deepfakes already exceeds the ability of the human eye to detect a forgery. A partial solution is to establish the provenance of original source material, but this serves law enforcement’s need to keep deepfakes out of criminal evidence proceedings more than it prevents deepfake cybercrime.
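
As an illustration of the provenance principle, the sketch below checks a media file’s SHA-256 hash against a hash recorded in a trusted manifest at capture time. Real provenance schemes such as C2PA embed cryptographically signed manifests in the media itself; the file names and manifest format here are hypothetical.

```python
# Minimal provenance check (sketch): compare a media file's SHA-256
# against a hash recorded in a trusted manifest when the file was captured.
# File names and the JSON manifest format are hypothetical examples.
import hashlib
import json

def file_sha256(path: str) -> str:
    """Hash the file in chunks so large videos don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_provenance(media_path: str, manifest_path: str) -> bool:
    """True if the file's hash matches the entry recorded at capture time."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # e.g. {"video.mp4": "ab12..."}
    return manifest.get(media_path) == file_sha256(media_path)
```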

Technology offers another potential method. Examples include biological signals (imperfections in the natural changes of skin tone caused by blood flow); phoneme-viseme mismatches (an imperfect correlation between the spoken word and mouth movement); facial movements (where facial and head movements do not correctly correlate); and recurrent convolutional models, which look for inconsistencies between the individual frames that comprise a video.
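
A minimal sketch of the recurrent convolutional idea, assuming PyTorch: a small CNN embeds each frame and an LSTM scans the sequence for frame-to-frame inconsistencies. The architecture and tensor shapes are illustrative assumptions, and this is an untrained skeleton, not a working detector.

```python
# Skeleton of a recurrent convolutional deepfake detector (untrained sketch):
# a small CNN embeds each frame, an LSTM scans the sequence for
# frame-to-frame inconsistencies, and a linear head scores real vs. fake.
import torch
import torch.nn as nn

class FrameSequenceDetector(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(              # per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.rnn = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, 1)           # logit: fake vs. real

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, height, width)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])           # score from last time step

# Shape check on random data standing in for 8-frame 112x112 clips:
scores = FrameSequenceDetector()(torch.randn(2, 8, 3, 112, 112))
```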

But there are difficulties. Just as a slight variation to malware may be enough to fool signature-based detection engines, a slight alteration to the method used to generate a deepfake may fool existing detection. This could be as simple as updating the discriminative model within the GAN used to produce the deepfake.

A further problem could be caused by compressing the deepfake video, which would reduce the number of pixels available to the detection algorithm.

Europol suggests that avoiding deepfakes may be more effective than trying to detect them. The first recommendation is to rely on audio-visual authorization rather than audio alone. This may be only a short-term fix, lasting until deepfake technology, cloud compute power and 5G bandwidth render it ineffective. The same developments will also negate the second recommendation: to demand a live video connection.

The final recommendation is a form of captcha; that is, says Europol, “Requiring random complicated acts to be performed live in front of the camera, e.g. move hands across the face.”
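
A toy sketch of that captcha-style idea in Python: the verifier issues a random, short-lived physical challenge that a pre-rendered deepfake stream is unlikely to satisfy. The challenge list, nonce and timeout are assumptions made for illustration, not details from Europol’s recommendation.

```python
# Sketch of a captcha-style liveness check: issue a random, short-lived
# physical challenge; a pre-rendered deepfake stream is unlikely to
# perform the right act within the deadline.
import secrets
import time

CHALLENGES = [               # hypothetical example actions
    "move a hand across your face",
    "turn your head to the left, then right",
    "cover one eye with two fingers",
    "hold a pen against your cheek",
]

def issue_challenge(ttl_seconds: int = 10) -> dict:
    """Pick an unpredictable challenge and stamp it with a deadline."""
    return {
        "action": secrets.choice(CHALLENGES),
        "nonce": secrets.token_hex(8),   # ties the response to this request
        "deadline": time.monotonic() + ttl_seconds,
    }

def response_in_time(challenge: dict) -> bool:
    """The response must arrive before real-time re-rendering could keep up."""
    return time.monotonic() <= challenge["deadline"]
```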

The way forward

The simple reality is that deepfake production technology is currently improving faster than deepfake detection technology. The threat is to both society and corporations.

For society, Europol warns, “Experts fear this may lead to a situation where citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy’.”

Corporations are in a slightly stronger position since they can include context in any decision on whether to accept or reject an audio/visual approach. They could also insist on machine-to-machine communications rather than person-to-person, using zero-trust principles to verify the machine owner rather than the communication.
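
A minimal sketch of that machine-to-machine idea, assuming the Python cryptography package: an instruction is accepted only if it carries a valid signature from the counterparty machine’s registered key, regardless of what the accompanying audio or video shows. The key names, the instruction and the enrollment step are hypothetical.

```python
# Sketch of machine-to-machine verification: trust a signature made with
# the counterparty machine's registered key, not the voice or video itself.
# Uses the 'cryptography' package; key enrollment is assumed out of band.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# Enrollment (once): the executive's machine generates a key pair and
# registers the public key with the company's zero-trust directory.
machine_key = Ed25519PrivateKey.generate()
registered_public_key = machine_key.public_key()

# A payment instruction is signed by the machine, whatever the video shows.
instruction = b"transfer 50000 EUR to account NL00BANK0123456789"
signature = machine_key.sign(instruction)

def verify_instruction(pub: Ed25519PublicKey, msg: bytes, sig: bytes) -> bool:
    """True only if the registered machine key produced this signature."""
    try:
        pub.verify(sig, msg)
        return True
    except InvalidSignature:
        return False

assert verify_instruction(registered_public_key, instruction, signature)
```

The design point is that the signature authenticates the machine and the exact message, so a perfect deepfake of the CEO’s face and voice carries no authority on its own.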

Where it becomes particularly difficult, however, is when deepfakes are used against society (or at least the stock-holding part of society) to manipulate a crash in share value for the corporation. “This process,” warns Europol, “is further complicated by the human predisposition to believe audio-visual content and work from a truth default perspective.” The public is not likely to immediately believe the corporation’s insistence that it is all just fake news – at least not in time to prevent the share crash.

Deepfakes are already a problem, and they are likely to become a greater one over the next couple of years.

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: The Art Exhibition That Fools Facial Recognition Systems

Related: Cyber Insights 2022: Adversarial AI

Related: Misinformation Woes Could Multiply With ‘Deepfake’ Videos

Related: Coming to a Conference Room Near You: Deepfakes

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines, from The Times and the Financial Times to current and long-gone computer magazines.
