There have been many warnings of the rising cybersecurity threat from deepfakes, but little hard evidence that the threat is current. SecurityWeek spoke to Nasir Memon, an IEEE Fellow and NYU professor, to understand the current state and future significance of deepfakes.
‘Deepfake’ here refers to the synthetic generation of a human being that is able to interact live with a real human, probably over a video communication channel. By this definition, true deepfakes do not yet exist.
The current state of deepfakes
Memon believes the quality of deepfakes is growing but is not good enough to be a significant threat today. “But we’re getting there,” he said. “We could be talking on Zoom, but the technology is making me look far more presentable than I am – so you don’t see that I haven’t shaved for a couple of days and am wearing a ragged tee-shirt.”
The next level, he suggested, is where he is sitting in Hawaii while one of his graduate students, looking and sounding exactly like him, is actually conducting the interview. “We’re getting there, as well,” he said.
The reason that deepfakes are not yet being used by cybercriminals is threefold: the technology is still in development, the bad guys haven’t yet figured out a way to monetize the process, and criminals are lazy. “They just go and steal money from the bank that can be most easily broken into – but don’t underestimate their ingenuity for the future,” he added.
Driving forces behind development
The two driving forces behind any technological advance are nation state intelligence and defense agencies, and the businesses that do see a way to monetize the technology. Once the technology is adequate and works, crime follows.
“It’s hard to lock technology in a bottle,” said Memon. “The genie gets out. Technology spreads easily. Digital technology, especially when it’s in the form of code and data, spreads very, very rapidly.”
We don’t know what use the intelligence agencies have in mind, but private businesses and agencies are already coming together. Memon mentioned a presentation by Nvidia to DARPA, where Nvidia would use a tool to clean up images to provide better appearance in realtime videoconference calls.
The entertainment industry is another business driving the development of deepfake technology, although for now this is not primarily realtime, live deepfakes. “But, instead of editing I can just clean things up. I can do so many things that will make content creation so much easier if I don’t have to retake and retake and retake. I just clean up what I’ve got.”
So, with legitimate business driving the technology at the core of deepfakes, and with the inevitable leakage of that data, Memon is confident that criminals will get and use deepfake technology in the future.
The most common, shall we say non-legitimate, use of deepfakes is for parody and misinformation. In some jurisdictions this may be illegal, possibly even criminal, but it is not what we consider cybercriminal behavior. Nor does it represent the final evolution of deepfake technology – it tends to be pre-recorded rather than realtime.
A relatively short step from this is the use of celebrity deepfakes for scamming purposes. The deepfake could be a pre-recorded celebrity likeness, used with social engineering to fool victims into sending money to a fake charity bank account controlled by the scammer. The same process could be used in attempted business scams similar to BEC attacks. But neither case involves the full, eventual capability of realtime interaction.
Nevertheless, Memon doesn’t think we should be complacent. He recalled the time when he was a graduate student. It was the era of the Morris Worm and a few viruses that were developed to show off the hacker’s personal skills. A few people were concerned, but most people thought it was just kids playing. It’s not as if hacking will ever become mainstream…
Memon gives another example that will evolve as deepfake interactivity improves. “I’m an educator,” said Memon. “We give exams. Much of our teaching is done online. You could hire someone, do a deepfake of yourself and let that person take the exam for you.” Not a big deal, he suggested, but it opens the whole question of fake interviews as the technology improves. Foreign governments might use this technique to place a subversive insider into critical industries, or even attempt to place a sleeper inside government agencies.
This capability is not far off. Memon recently talked to a senior figure at a bank. The bank already has a full-time analyst working on what might be expected soon, and what needs to be done to keep both its customers and staff safe. Memon was given the following example:
“A deepfake of an important customer – say Tom Cruise – calls the bank and asks for $1 million to be transferred elsewhere. The bank (probably a relatively low-level clerk) asks for a password – at which point the deepfake gets a bit short tempered. ‘Come on, man. Don’t bullshit me. You can see who I am and I’m in a hurry. Make my transfer or I’ll take my account elsewhere.’” We already know how easily people tend to succumb to social engineering – and this quality of deepfake is not far away.
Detection, defense and the future
If high quality realtime deepfakes are close, the question then becomes one of defense – how can business detect and defend against such deepfakes? Memon believes there are some helpful tactics. One is to break the deepfake itself, using a captcha-like challenge/response mechanism.
“Captchas are based on the premise that certain tasks that humans can do very easily, are difficult if not impossible for an algorithm to do – at least without a huge amount of computation and sophistication. It’s hard because human vision is a miracle that cannot yet be matched by technology. Although AI is getting us closer to that, the way things stand, if I just do this…”
He waved his hand across his face.
“… all current deepfakes just totally break. So, there’s certain, what we call challenge response mechanisms, that we could develop. If you asked me to stand up and sit down again, the deepfake will die.” But going back to the bank’s Tom Cruise hypothetical, it would be difficult for the bank clerk to ask ‘Tom Cruise’ to jump up and down to prove himself.
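The protocol side of such a challenge/response check is straightforward to sketch. The hypothetical Python below issues a random liveness challenge with a single-use nonce and a time limit; the hard part – judging from the video stream whether the action was performed and whether the deepfake broke while doing it – would need a vision model and is stubbed out here. All names are illustrative, not any vendor’s actual API.

```python
import secrets
import time
import random

# Challenges chosen because current deepfakes tend to break on occlusion
# and large pose changes (per the examples in the article).
CHALLENGES = [
    "wave a hand across your face",
    "turn your head fully to the left",
    "stand up and sit down again",
]

class LivenessChallenge:
    """Issues captcha-style liveness challenges for a video call."""

    def __init__(self, ttl_seconds=15):
        self.ttl = ttl_seconds
        self.pending = {}  # nonce -> (challenge, issued_at)

    def issue(self):
        # Fresh nonce per challenge prevents replaying an old recording.
        nonce = secrets.token_hex(8)
        challenge = random.choice(CHALLENGES)
        self.pending[nonce] = (challenge, time.time())
        return nonce, challenge

    def verify(self, nonce, action_performed: bool) -> bool:
        entry = self.pending.pop(nonce, None)  # single use
        if entry is None:
            return False  # unknown or already-used nonce
        _, issued_at = entry
        if time.time() - issued_at > self.ttl:
            return False  # too slow: time enough to pre-render a fake
        return action_performed  # vision-model verdict, stubbed here
```

The time limit matters: a challenge that can be answered minutes later gives an attacker time to synthesize the requested action offline.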
There are other mechanisms that could be used. “Adobe,” said Memon, “is developing mechanisms for embedding digital signatures into content, attesting that it hasn’t been changed. Videoconferencing companies can develop proofs of source that might not tell you whether the image is genuine, but at least confirm it is coming from the location you expect. Cameras could embed some kind of watermark or fingerprint at the time of filming, which says, ‘Hey, I’m putting some secret in here. And if your end doesn’t receive it, you know there’s a problem.’”
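The “camera puts a secret in” idea can be illustrated with standard message authentication: the capture device tags each frame with an HMAC over its pixels and sequence number, and the receiving end checks the tag. This is a minimal sketch of the concept only – not Adobe’s or any vendor’s actual scheme – and it glosses over the genuinely hard problems of key distribution and making the mark survive video compression.

```python
import hmac
import hashlib

def tag_frame(key: bytes, seq: int, frame: bytes) -> bytes:
    """Compute an authentication tag for one frame.

    The sequence number is bound into the tag so a valid frame
    cannot be replayed at a different point in the stream.
    """
    msg = seq.to_bytes(8, "big") + frame
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_frame(key: bytes, seq: int, frame: bytes, tag: bytes) -> bool:
    """Receiver-side check: recompute the tag and compare in constant time.

    A missing or invalid tag signals that the frame did not come,
    unmodified, from the device holding the key.
    """
    return hmac.compare_digest(tag_frame(key, seq, frame), tag)
```

Note that this proves provenance, not authenticity of the person on camera: a valid tag only says the frames came unaltered from the tagging device, which is exactly the limited guarantee Memon describes.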
But there is no silver bullet. “I think we need a very holistic approach to address the problem of interactive deepfakes,” he continued. This will require a combination of technology, regulation and user awareness. “It will evolve over time. We’re not simply going to sit down and let deepfakes destroy our world. We will develop these techniques over time.”
But here we should remember the ingenuity of the criminals. With the current state of semi-static deepfakes, it’s not worth their time when there are so many easier targets and techniques. This will change as realtime deepfakery becomes feasible. So the big question is whether, when that time comes, we will be able to get ahead and stay ahead of deepfake criminality.
“No,” said Memon. “If you want a blunt answer, we will see greater deepfake crimes. I don’t think the good guys can stay ahead with just what they are doing today.” Remember the two things he said earlier – realtime sophisticated deepfake technology will just be another genie that escapes from the bottle; and never underestimate the ingenuity of the cybercriminals.
Related: Cyber Insights 2022: Adversarial AI