Defeating the Deepfake Danger

Deepfakes are becoming increasingly popular with cybercriminals, and as these technologies become even easier to use, organizations must become even more vigilant.

As deepfakes rapidly advance in sophistication, they can be alarmingly convincing. What's more, they're becoming increasingly popular with cybercriminals as these technologies become easier to use. The introduction of VALL-E, for instance, has raised new concerns about the ability to make deepfake voices quickly and easily – in other words, quickfakes.

As these technologies become more accessible and easier to use, they are likely to open new opportunities for bad actors with limited resources and technical chops to mount new forms of cyberattack and fraud.

The next generation of attacks – weaponizing AI

Deepfakes are part of the ongoing trend of weaponized AI. They’re extremely effective in the context of social engineering because they use AI to mimic human communications so well. With tools like these, malicious actors can easily hoodwink people into giving them credentials or other sensitive information, or even transfer money for instant financial gain.

Deepfakes represent the next generation of fraud, by enabling bad actors to impersonate people more accurately and thus trick employees, friends, customers, etc., into doing things like turning over sensitive credentials or wiring money.

Here’s one real-world example: Bad actors used deepfake voice technology to defraud a company, mimicking the voice of its CEO to persuade an employee to transfer nearly $250,000 to a Hungarian supplier. Earlier this year, the FBI also warned of an uptick in the use of deepfakes and stolen PII to apply for remote work jobs – especially positions with access to large amounts of sensitive customer data.

A lowered barrier to entry

The commoditization of cutting-edge tools and AI has lowered the barrier to producing these deepfakes. In addition to technological advancements, turnkey, subscription-based “deepfake-as-a-service” is on the horizon, enabling cybercriminals of all skill levels to launch more sophisticated attacks without first devoting time and resources to creating a custom attack strategy. Cybercriminals could then employ deepfakes in an infinite number of ways, and as these technologies become more widely accessible in the future, the threat will only increase.


Protecting your environment in the era of deepfakes

To fight the onslaught of deepfakes and prevent these videos and images from being shared online, steps are being taken on many fronts. In the social media realm, Facebook is working with university researchers to create a deepfake detector to help enforce its ban. Twitter has a similar ban in place, and YouTube is working to block any deepfakes it finds related to politics.

Researchers have been developing data science solutions to detect deepfakes, but many of these become obsolete as attackers’ technology advances and produces more convincing fakes. Filtering programs from AI firms act like spam filters, tagging deepfakes before they can cause harm. And on the legislative front, several states have passed laws making deepfake pornography a crime and restricting deepfake use during election cycles.

Antivirus software, web filtering and endpoint detection and response (EDR) technologies can all help to safeguard an enterprise from the weaponization of AI. However, raising cybersecurity awareness through education is one of the best defensive strategies for stopping attacks related to AI. While many businesses provide staff with basic security training, businesses should think about introducing additional modules that teach employees how to recognize AI-focused risks.

For instance, a module on deepfakes could offer tips on spotting deepfake videos. Tip-offs include awkward head and body positioning, eyes that don’t blink, unnatural facial expressions and facial morphing. Other tell-tale signs are an unnatural body shape, unrealistic hair (too perfect or not matching the normal hairline), abnormal skin tones, odd lighting or discoloration, and bad lip-syncing.
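The “eyes that don’t blink” tell has been studied quantitatively: detection research often tracks the eye aspect ratio (EAR) across video frames, since early deepfakes blinked rarely or not at all. Below is a minimal sketch of that idea. It assumes some face-tracking library has already extracted six landmark points per eye; the point layout, the 0.2 closed-eye threshold, and the helper names are illustrative conventions, not a specific product’s API.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def eye_aspect_ratio(eye: List[Point]) -> float:
    """Eye aspect ratio over six eye landmarks p1..p6.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    It drops toward 0 when the eye closes and stays roughly
    constant while the eye is open.
    """
    def dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with EAR below threshold.

    A video clip whose subject never blinks over many seconds is a
    possible (not conclusive) deepfake indicator.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:
        blinks += 1
    return blinks
```

A blink count of zero over a long clip is only a weak signal on its own – newer generators have largely fixed blinking – which is why such heuristics are typically combined with the other visual cues listed above.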

Defeating deepfakes

Deepfakes are becoming increasingly popular with cybercriminals, and as these technologies become easier to use, organizations must become more vigilant. This is all part of the ongoing trend of weaponized AI. Deepfakes can be incredibly effective at social engineering, which is already an effective means of attack.

Legislators, social media giants and researchers are all working on ways to defeat this insidious new threat. There are some security technologies that organizations can deploy, and they will help to a degree. But as with most security issues, humans are often the first and best line of defense. At the risk of sounding like a broken record, it always comes down to cyber hygiene and training. Employees must receive training so they can spot these dangerous deepfakes and spare their organizations the loss of money and reputation. In addition, keep in mind that identity verification remains important. Just as with suspicious phishing emails, picking up the phone and calling the supposed requester after receiving a questionable instruction goes a long way.
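The callback advice above can be made systematic rather than left to individual judgment: a simple policy check can flag which requests must be confirmed over a separately sourced phone number before anyone acts. The sketch below is one illustrative way to encode such a policy; the `Request` fields, action names, and dollar threshold are all assumptions for the example, not any real product’s rules.

```python
from dataclasses import dataclass

# Illustrative policy values -- an organization would set its own.
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_share"}
CALLBACK_THRESHOLD_USD = 10_000

@dataclass
class Request:
    channel: str          # e.g. "email", "voice", "video"
    action: str           # e.g. "wire_transfer", "credential_share"
    amount_usd: float = 0.0

def requires_out_of_band_verification(req: Request) -> bool:
    """Return True when the request should be confirmed via a phone
    number looked up independently (never one supplied in the request
    itself, which an attacker controls)."""
    if req.action not in HIGH_RISK_ACTIONS:
        return False
    if req.action == "wire_transfer" and req.amount_usd < CALLBACK_THRESHOLD_USD:
        return False
    return True
```

The key design point is that the verification channel is chosen by the recipient, not the requester: a deepfaked voice call can sound exactly like the CEO, but it cannot answer the CEO’s real phone.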

Related: Deepfakes – Significant or Hyped Threat?

Related: Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol

Related: Coming to a Conference Room Near You: Deepfakes

Related: Misinformation Woes Could Multiply With ‘Deepfake’ Videos

Related: The Growing Threat of Deepfake Videos

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
