SecurityWeek

Defeating the Deepfake Danger

Deepfakes are becoming increasingly popular with cybercriminals, and as these technologies become even easier to use, organizations must become even more vigilant.

As deepfakes rapidly grow more sophisticated, they can be alarmingly convincing. What’s more, they are becoming increasingly popular with cybercriminals as the underlying technologies become easier to use. The introduction of VALL-E, for instance, has raised new concerns about the ability to create deepfake voices quickly and easily – in other words, quickfakes.

As these technologies become more accessible and easier to use, new opportunities are likely to open up for bad actors with limited resources and technical chops to use them for new forms of cyberattack and fraud.

The next generation of attacks – weaponizing AI

Deepfakes are part of the ongoing trend of weaponized AI. They’re extremely effective in the context of social engineering because they use AI to mimic human communication so well. With tools like these, malicious actors can easily hoodwink people into handing over credentials or other sensitive information, or even into transferring money for instant financial gain.

Deepfakes represent the next generation of fraud, by enabling bad actors to impersonate people more accurately and thus trick employees, friends, customers, etc., into doing things like turning over sensitive credentials or wiring money.

Here’s one real-world example: Bad actors used AI-driven deepfake voice technology to mimic the voice of a CEO and persuade an employee to transfer nearly $250,000 to a Hungarian supplier. Earlier this year, the FBI also warned of an uptick in the use of deepfakes and stolen PII to apply for remote work jobs – especially for positions with access to a lot of sensitive customer data.

A lowered barrier to entry


The commoditization of cutting-edge AI tools has lowered the barrier to producing deepfakes. In addition to these technological advancements, turnkey, subscription-based “deepfake-as-a-service” offerings are on the horizon, enabling cybercriminals of all skill levels to launch sophisticated attacks without first devoting time and resources to creating a custom attack strategy. Cybercriminals could then employ deepfakes in countless ways, and as these technologies become more widely accessible, the threat will only increase.

Protecting your environment in the era of deepfakes

To fight the onslaught of deepfakes and prevent these videos and images from being shared online, steps are being taken on many fronts. In the social media realm, Facebook is working with university researchers to create a deepfake detector to help enforce its ban. Twitter has a similar ban in place, and YouTube is working to block any deepfakes it finds related to politics.

Researchers have been developing data science solutions to detect deepfakes, but many of these become obsolete as attackers’ technology advances and produces more convincing output. Filtering programs from AI firms act like spam filters, tagging deepfakes before they can cause harm. And on the legislative front, several states have passed laws to make deepfake pornography a crime and prevent deepfake use during election cycles.

Antivirus software, web filtering and endpoint detection and response (EDR) technologies can all help to safeguard an enterprise from the weaponization of AI. However, raising cybersecurity awareness through education is one of the best defensive strategies for stopping AI-related attacks. While many businesses provide staff with basic security training, they should consider adding modules that teach employees how to recognize AI-focused risks.

For instance, a module on deepfakes could offer tips on spotting deepfake videos. Tip-offs include awkward or inconsistent head and body positioning, eyes that don’t blink, unnatural facial expressions and facial morphing. Other tell-tale signs are an unnatural body shape, unrealistic hair (too perfect or not matching the natural hairline), abnormal skin tones, odd lighting or discoloration, and bad lip-syncing.
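Some of these tip-offs can even be checked automatically. As an illustration only (not a method described in this article), the widely used eye-aspect-ratio heuristic flags video subjects who blink too rarely: the ratio of eye height to eye width drops sharply when an eye closes, so counting those dips gives a blink rate. The sketch below assumes six (x, y) eye landmarks per frame have already been extracted by some face-landmark detector (not shown); the 0.21 threshold and 30 fps frame rate are illustrative defaults, not authoritative values.

```python
import math

def euclidean(p, q):
    """Distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Ratio of eye height to width for six eye landmarks.

    p1/p4 are the horizontal eye corners; p2, p3 (top) and
    p6, p5 (bottom) are the lid points. The ratio falls toward
    zero as the eye closes.
    """
    return (euclidean(p2, p6) + euclidean(p3, p5)) / (2.0 * euclidean(p1, p4))

def blinks_per_minute(ear_series, threshold=0.21, fps=30):
    """Count dips of the eye aspect ratio below the threshold.

    Each contiguous run of below-threshold frames counts as one blink;
    an unusually low rate is one possible deepfake tip-off.
    """
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold:
            if not closed:
                blinks += 1
                closed = True
        else:
            closed = False
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

A real detector would feed per-frame landmarks from a face-tracking library into `blinks_per_minute` and compare the result against typical human blink rates; this sketch only shows the arithmetic behind the “eyes that don’t blink” tell.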

Defeating deepfakes

Deepfakes are becoming increasingly popular with cybercriminals, and as these technologies become even easier to use, organizations must become even more vigilant. This is all part of what we see as the ongoing trend of weaponized AI. Deepfakes can be incredibly effective at social engineering, which is already an effective means of attack.

Legislators, social media giants and researchers are all working on ways to defeat this insidious new threat. There are security technologies that organizations can deploy, and they will help to a degree. But as with most security issues, humans are often the first and best line of defense. At the risk of sounding like a broken record, it always comes down to cyber hygiene and training. Employees must receive training so they can spot these dangerous deepfakes and spare their organizations the loss of money and reputation. In addition, keep in mind that identity verification remains important. Just as with phishing emails, picking up the phone and calling to verify a questionable instruction goes a long way.

Related: Deepfakes – Significant or Hyped Threat?

Related: Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol

Related: Coming to a Conference Room Near You: Deepfakes

Related: Misinformation Woes Could Multiply With ‘Deepfake’ Videos

Related: The Growing Threat of Deepfake Videos

Written By

Derek Manky is chief security strategist and global vice president of threat intelligence at FortiGuard Labs. Derek formulates security strategy with more than 15 years of cybersecurity experience behind him. His ultimate goal is to make a positive impact in the global war on cybercrime. He provides thought leadership to industry and has presented research and strategy worldwide at premier security conferences. As a cybersecurity expert, his work includes meetings with leading political figures and key policy stakeholders, including law enforcement. He is actively involved with several global threat intelligence initiatives, including NATO NICP, the INTERPOL Expert Working Group, the Cyber Threat Alliance (CTA) working committee and FIRST – all in an effort to shape the future of actionable threat intelligence and proactive security strategy.
