Deepfakes are a growing threat. They are primarily a social engineering tool, which means they will increasingly be used in phishing attacks, BEC attacks, reputation attacks, and public opinion attacks (such as election meddling). Existing methods in all these areas are already successful, but the arrival of deepfake videos will take them to a different level.
Deepfake videos use artificial intelligence to map an almost flawless video image of one person (the target, say, a company CFO) onto another (the fake, say, an actor). The video image of the target can be constructed from still photos and is then mapped to the moving image of the fake. Deepfaked audio, similarly constructed via artificial intelligence from existing recordings of the target, is then added. As a result, it is possible to manufacture a video of almost anyone saying almost anything. Where text-based BEC succeeded in stealing $1.3 billion in 2018, the arrival of a video message from the CEO asking for the rapid transfer of funds to a new ‘supplier’ will be even more compelling.
There is evidence suggesting that this has already started. In July, Symantec said that it had seen three cases of audio-only deepfake scams. “In each,” reported Axios, “a company’s ‘CEO’ called a senior financial officer to request an urgent money transfer… Millions of dollars were stolen from each company, whose names were not revealed.”
“Deepfakes represent real risks,” blogged Matt Price, principal research engineer at ZeroFOX. “The consequences of a fake merger announcement, an ethical transgression, a racist expression, etc. can virally explode before the falsehood is identified. One incident can harm a company’s valuation, brand, and goodwill in a heartbeat or sabotage a political candidate’s good name.”
For example, in June 2019, a fake video of Facebook CEO Mark Zuckerberg was posted to Instagram that showed him giving a speech about the power of Facebook and saying things that he never actually said.
The problem is aggravated by the asymmetry of cost to impact. “You don’t need to be a nation state or have deep resources to take advantage of technologies and tools for generating fake media,” Ben Lorica, chief data scientist at O’Reilly Media, told SecurityWeek. “Furthermore, many platforms have tools that enable virality. So not only is it easier to create fake content, it is much easier for it to spread.”
Price added, “Creating a deepfake video is already inexpensive using cloud services — perhaps $100 to $500. And the vast majority of the code that you need is already open-sourced and fairly well-packaged.”
To make the asymmetry worse, Lorica added, “The father of digital forensics, Hany Farid, notes: ‘The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.’”
Price and ZeroFOX are among those working on deepfake detection. The company announced the addition of deepfake detection to its security platform in August 2019. Price explained the existing technological difficulties in creating compelling deepfake videos, and how the company’s detection capabilities look for authenticity clues.
“On the offensive side,” he told SecurityWeek, “one of the key things you have to get right in order to create a high quality deepfake video is to get your AI training dataset right. So, for whoever you’re trying to deepfake, that would mean making sure you grab the right images.”
People blink while they are talking, he explained. But the photos people post on the internet rarely catch them blinking. To cover that, he continued, the attacker needs some other source for the target, one that includes blinking. “Depending on how public the target is, this could be easy or difficult. Generally, to find an image of the target with eyes closed, you have to find at least one video of that person, and then extract the frames where they are actually blinking. If the target only has still photos, they’re not likely to be blinking or have their eyes closed.” It follows, then, that the higher the profile of the target, the easier it will be to create a deepfake video. The more private the target, the harder it will be.
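The blink cue Price describes can be checked programmatically. One common approach (a generic illustration, not necessarily ZeroFOX’s method) is the eye aspect ratio (EAR): the ratio of an eye’s vertical landmark distances to its horizontal span, which collapses toward zero when the eye closes. The sketch below assumes per-frame eye landmarks are already available from an external face-landmark detector (the widely used 68-point layout orders them p1–p6 around each eye); the 0.2 threshold and the toy coordinates are illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points ordered around the eye (p1..p6).
    The ratio is high for an open eye and drops sharply on a blink."""
    p1, p2, p3, p4, p5, p6 = [np.asarray(p, dtype=float) for p in eye]
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def blink_frames(per_frame_eyes, threshold=0.2):
    """Indices of video frames where the eye appears closed --
    the frames an attacker would need to harvest for training data."""
    return [i for i, eye in enumerate(per_frame_eyes)
            if eye_aspect_ratio(eye) < threshold]

# toy landmarks: an open eye, then the same eye nearly shut
open_eye   = [(0, 0), (3, 3), (6, 3), (9, 0), (6, -3), (3, -3)]
closed_eye = [(0, 0), (3, 0.4), (6, 0.4), (9, 0), (6, -0.4), (3, -0.4)]
frames = [open_eye, open_eye, closed_eye, open_eye]
print(blink_frames(frames))  # frame 2 is the blink
```

Run over a whole clip, this yields exactly the frame list Price describes an attacker extracting; a detector can use the same signal in reverse, flagging talking-head footage in which the EAR never drops.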
“A second difficulty,” he continued, “is that you need to make sure that the lighting conditions and the background are at least somewhat similar to the target video. If lighting conditions are off, or the background scenes don’t line up, it tends to make it somewhat obvious that the resulting video is a fake because things simply don’t look right.”
Thirdly, he said, when you map specific areas of the target’s face to the fake video, it requires scaling and rotation. This, along with clues around superimposed blinking, is one of the key areas for detection. “Take the mouth,” he said. “The attacker will need to rotate and scale the mouth to fit the video image. When you do operations like scaling and rotation, there is evidence left behind within the pixels of the image that can indicate that the rotation has been done. This is the sort of thing we try to detect.”
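The resampling traces Price mentions can be illustrated in miniature. The sketch below is a generic forensics illustration, not ZeroFOX’s detector: after linear interpolation, many samples become exact combinations of their neighbors, so a simple linear-predictor residual turns periodic, and that periodicity shows up as a spike in its Fourier spectrum. A 1-D signal stands in for an image row; the contrast between the two scores is the point, not the exact numbers.

```python
import numpy as np

def resampling_score(signal):
    """Periodicity score of the linear-predictor residual.
    Resampled (interpolated) signals predict unusually well at regular
    positions, leaving a periodic residual that spikes in the spectrum."""
    # residual of each sample vs. the average of its two neighbors
    resid = np.abs(signal[1:-1] - 0.5 * (signal[:-2] + signal[2:]))
    spectrum = np.abs(np.fft.rfft(resid - resid.mean()))
    # a strong isolated spike relative to the median suggests resampling
    return spectrum.max() / (np.median(spectrum) + 1e-9)

rng = np.random.default_rng(0)
original = rng.normal(size=500)

# scale up ~2x with linear interpolation, as a face-swap rescale would
positions = np.linspace(0, original.size - 1, 2 * original.size - 1)
upscaled = np.interp(positions, np.arange(original.size), original)

print(resampling_score(original))  # low: residual looks like noise
print(resampling_score(upscaled))  # much higher: periodic residual
```

Real detectors work on 2-D pixel blocks and must cope with compression and post-processing, but the underlying evidence is the same kind of statistical fingerprint left behind by the scaling and rotation Price describes.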
But deepfakes are a new attack technique, and like everything else in cybersecurity, they have created a new game of leapfrog. The attackers have made the first move. Defenders, like ZeroFOX, are responding with detection capabilities. The attackers will in turn try to defeat that detection. Nor do we yet know how deepfakes will be used in the future. The various applications to social engineering are obvious. But what about spin-offs? Could deepfake technology ultimately provide an ability to defeat facial biometric authentication?
“I could see it headed that way,” said Price. “I would say that right now, deepfake technology isn’t going to pass any biometric standards, but as we keep moving forwards and the technology continues to improve, that could become an issue.”
“While I have not seen tools or research papers that come close to hinting at deepfakes that excel at evading biometric authentication,” said Lorica, “there are a lot of organizations working on and investing in deepfake technology. So, it would be natural to expect that some groups are already working on this. I believe that the intelligence and security communities are alert to this possibility and are right to call for resources. We are in an arms race, and the resources and people devoted to deepfakes dwarf their counterparts on the detection side.”
The one thing that is certain is that the deepfake threat is here for the foreseeable future, and will worsen in the short term.
Related: Misinformation Woes Could Multiply With ‘Deepfake’ Videos
Related: Black Hat 2019: Bounties, Breaches and Deepfakes, Oh My!
Related: Are AI and Machine Learning Just a Temporary Advantage to Defenders?
Related: Scammers Grab $2.5 Million From North Carolina County in BEC Scam