
The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks

When it comes to adversarial use of AI, the real question is whether the AI threat is a deep fake, or whether the deepfake is the AI threat.


The big unknown in the security landscape is the adversarial use of AI. It has the potential to upend the status quo, handing attackers a new advantage. What we still don’t know is if, when, or how this will happen.

The real question is whether the AI threat is a deep fake, or whether the deepfake is the AI threat.

Understanding AI

Just as we cannot defend where we cannot see, so we cannot defend against what we do not understand. There is still much confusion about the current state of AI. 

BlackBerry has sought to clarify this with what it describes as an educational whitepaper (PDF) on AI and its most visible threat – the deepfake.

The value of the BlackBerry paper for anyone needing a primer on AI and deepfakes is clear. It explains, for example, the difference between ML (often used within security products) and LLMs, with their far wider usage and potential. It is the availability of LLMs – typified by, but not limited to, OpenAI’s ChatGPT – and their huge and still expanding capabilities that makes it almost inevitable they will be harnessed for malicious purposes.

BlackBerry states: “One of the most surprising aspects of generative AI models is their versatility. When fed enough high-quality training data, mature models can produce output that approximates human creativity. For example, they can write poems or plays, discuss legal cases, brainstorm ideas on a given theme, write computer code or music, pen a graduate thesis, produce lesson plans for a high-school class, and so on. The only current limit to their use is (ironically) human imagination.”

That versatility now extends beyond text. “Midjourney, DALL·E and Stable Diffusion are examples of AI-based text-to-image generators,” continues BlackBerry. “The user can ask the generator to create an image using any number of different parameters used by a traditional artist, such as a particular style, mood or color palette.”

And video. In February 2024, OpenAI produced a sample video (YouTube) from its new Sora text-to-video AI model. The text prompt was 64 words beginning: “A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage…” 


New malware is another potential product of gen-AI. The BlackBerry paper is somewhat limited in this area, dwelling primarily on the likelihood of automated polymorphism at speed – improving evasion and stealth techniques and defeating signature-based detection. In the future, we can also expect gen-AI to develop new and possibly more advanced malware.
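To see why automated polymorphism undermines signature matching, consider a minimal Python sketch – the payload bytes are hypothetical, invented purely for illustration. Two functionally identical samples produce entirely different hashes, so a hash-based signature for one never matches the other:

import hashlib

# Two functionally identical "payloads": the second carries junk padding,
# the kind of no-op mutation a polymorphic engine can apply on every build.
payload_a = b"malicious_logic()"
payload_b = b"malicious_logic()" + b"\x90" * 16  # behavior unchanged

sig_a = hashlib.sha256(payload_a).hexdigest()
sig_b = hashlib.sha256(payload_b).hexdigest()

print(sig_a == sig_b)  # False: same behavior, different signature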

There is, however, one characteristic common to all the currently recognized cybersecurity threats we expect from AI: there is nothing new. Phishing, deepfakes, and malware all pre-date the ChatGPT revolution by decades, and we have evolved defenses against them. All that changes is the speed, scale, and quality of the attacks – but that is no small problem.

Understanding the AI threats

The malware threat is perhaps the easiest to understand and – in theory at least – to counter. We must use AI to defeat AI: reduce reliance on signature detection, detect malware by its behavior, and detect other indications of intrusion through the anomalies it creates. We’re already doing this, largely with ML.
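As an illustration of the behavioral approach, here is a minimal sketch using scikit-learn’s IsolationForest. The feature set and values are invented for the example and are not drawn from any product mentioned here:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process behavior features:
# [files written/min, registry edits/min, outbound connections/min]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5.0, 2.0, 1.0], scale=[2.0, 1.0, 0.5], size=(500, 3))

# Learn what "normal" process behavior looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# A process suddenly writing hundreds of files per minute (ransomware-like).
suspect = np.array([[900.0, 40.0, 2.0]])
print(model.predict(suspect))  # [-1] means the behavior is flagged as anomalous

No signature is involved: a polymorphic rebuild of the same malware would still exhibit the same anomalous behavior.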

The BlackBerry paper uses Cylance AI as an example. (BlackBerry announced its $1.4 billion acquisition of Cylance in 2018 and completed it in 2019.) “It has learned how to spot both commodity and unique malware based on its attributes — and shut it down, pre-execution, before it can cause harm.”

AI-produced deepfakes and AI-improved phishing are a bigger problem. Deepfakes come in two varieties, voice and image/video, both of which are now rapidly improving commodity outputs from readily available gen-AI models – and neither of which is easy for humans or technology to detect.

Phishing remains an unsolved problem, and it is likely to worsen as gen-AI delivers improved sophistication and increased volume on an industrial scale. Until recently, there was little evidence that criminals were extensively engaged in this area beyond a few isolated examples.

Abnormal’s email threat analysis for H1 2024 notes that the combination of phishing-as-a-service kits and gen-AI will lead to an escalation in the volume of attacks. It has already seen a 350% year-on-year increase in the number of file-sharing attacks.

More recently (August 29, 2024), Netcraft reported on the use of gen-AI to create fraudulent websites, “with a 5.2x increase over a 30-day period starting July 6, and a 2.75x increase in July alone.”

The implication from such reports is that the long-awaited upsurge in gen-AI generated phishing is imminent – the criminals seem to have been learning how best to use this new opportunity.

The real problem will come when criminals also master the ability to mass-produce AI-generated deepfake voice, image, and ultimately video, and to combine them all into a single deepfake phishing attack. The financial incentive exists in mass credential phishing and in more targeted BEC (business email compromise) and VEC (vendor email compromise) attacks. And then there’s the influence incentive – the ability to damage brands and sway elections. Frankly, we haven’t seen anything yet.

Left to their own devices, more people will be susceptible to deepfake phishing, because better text supported by a known voice – and possibly an image or video of a known person (friend, relative, or boss) – will subliminally reduce suspicion.

Mitigating the AI threat

The security industry is not waiting for the dam to break. Numerous startups launched in 2024 are each working on their own approach to detecting AI and deepfake attacks, while existing firms have refocused on deepfake detection. Pindrop is an example of the latter: in July 2024 it raised $100 million in debt financing, primarily to develop additional tools able to detect deepfake voice attacks.

Deepfake voice is the easiest deepfake to produce, the most widely employed, and the easiest to detect. This is because there are subtle audible clues that a voice is machine-generated, which can be picked up by technology even where the human ear misses them. The danger lies wherever that detection technology is not in use.
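A toy illustration of the principle follows – real detectors rely on trained models over far richer features, and the spectral-flatness statistic, 20 ms framing, and threshold here are arbitrary choices made for the example:

import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    # Ratio of geometric to arithmetic mean of the power spectrum;
    # synthetic speech can show atypical values in some regions.
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def flag_frames(audio: np.ndarray, sample_rate: int, threshold: float = 0.5):
    # Return start offsets of 20 ms frames whose flatness exceeds the threshold.
    frame_len = sample_rate // 50
    return [i for i in range(0, len(audio) - frame_len, frame_len)
            if spectral_flatness(audio[i:i + frame_len]) > threshold]

The point is not this particular statistic, but that generated audio leaves measurable artifacts a machine can test for – provided the test is actually deployed.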

The same can be said for the current generation of AI-enhanced polymorphic malware detection systems: they can work, but only where they are being used.

Other AI threat detection systems will also work to one degree or another – but only where they are being used. This is the nature of business: companies develop defenses and sell them to other companies. Where they work, they only work for the companies that buy those specific products.

There are many anti-phishing solutions, and many of them are effective – where they are used. But we haven’t eradicated the phishing threat. The primary reason is that phishing is a global, societal threat rather than simply a business threat that can be countered by individual business solutions. Business is now global, even in these times of geopolitical antipathy. Each business may have dozens, maybe hundreds, of supply chain partners. Each business may have defenses, but its supply chain might not. And each business may have remote workers who, for speed and efficiency, might occasionally bypass the company’s defensive procedures.

If we have not stopped phishing today – and we have not – there is little chance of stopping the automated, deepfake-enhanced mass phishing, at scale and on AI steroids, that is just around the corner. The answer is not individual solutions sold to individual businesses, but a global, societal solution. It will of necessity be based on the concept of verified trust – trust in the source of the communications we receive.

That will not be easy. It must be conceptual rather than solely technological. We have already made such attempts – zero trust, MFA, and user awareness training among them – and none has eliminated phishing, credential theft, or credential misuse. With gen-AI, this threat will get very much worse, very soon. What is needed is not so much a detection system – although detection will remain important – as a prevention system based on trust: if the trust cannot be accurately and immediately verified, block the communication. And it must work on a global, societal scale, not through a company-by-company approach, each with a different solution.
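One way to picture what verified trust could mean in practice is cryptographic sender verification. The sketch below uses the Python cryptography library’s Ed25519 primitives; the hard, societal part the article calls for – enrolling and distributing identities at global scale – is assumed away here:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def accept_message(sender_key: ed25519.Ed25519PublicKey,
                   message: bytes, signature: bytes) -> bool:
    # Verified trust: deliver only what provably came from a known sender;
    # anything that fails verification is blocked, not merely flagged.
    try:
        sender_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

# Hypothetical enrollment: in practice the public key would come from a
# shared identity layer, not be generated alongside the message.
key = ed25519.Ed25519PrivateKey.generate()
sig = key.sign(b"Please pay invoice #4411")
print(accept_message(key.public_key(), b"Please pay invoice #4411", sig))  # True
print(accept_message(key.public_key(), b"Please pay invoice #9999", sig))  # False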



Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high-tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security, and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
