
Cyber Insights 2022: Adversarial AI


Adversarial AI is a bit like quantum computing – we know it’s coming, and we know it will be dramatic. The difference is that Adversarial AI is already happening and will increase in quantity and quality over the next couple of years.

Adversarial AI – or the use of artificial intelligence and machine learning within offensive cyber activity – comes in two flavors: attacks that use AI and attacks against AI. Both are already in use, albeit so far only in embryonic form.

An example of the former could be the use of deepfakes as part of a BEC scam. An example of the latter could be poisoning the data underlying AI decisions so that wrong conclusions are drawn. Neither will be as common as traditional software attacks – but when they occur, the effect will be severe.

“The biggest difference compared to attacks on software is that AI will be responsible for more advanced and expensive decisions just by its nature,” comments Alex Polyakov, CEO and founder of Adversa.AI. “In 2022, attacks on AI will be less common than traditional attacks on software – at least in the short term – but will definitely be responsible for higher losses. Every existing category of vulnerability in AI such as evasion, poisoning and extraction can lead to catastrophic effects. What if a self-driving car could be attacked by an evasion attack and cause death? Or what if financial models could be poisoned with the wrong data?”
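To make the evasion category concrete, the sketch below shows the fast gradient sign method (FGSM), one well-studied evasion technique – our choice for illustration, not one Polyakov names. The model, input and label are placeholders for a real image classifier.

```python
# Minimal sketch of an FGSM evasion attack (an assumed technique, chosen for
# illustration). Given a trained classifier, perturb an input just enough to
# change its prediction while staying visually near-identical.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarial copy of x that nudges the model away from label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction that *increases* the loss, bounded by epsilon;
    # clamp assumes image-like inputs scaled to [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Usage (model, x, label stand in for a real classifier, input and true class):
# x_adv = fgsm_attack(model, x, label)
# print(model(x).argmax(1), model(x_adv).argmax(1))  # often disagree
```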

Known Threats (Using AI)

Targeted malware

Attacks that use AI are already possible and in some cases in use. The potential for AI-based malware was demonstrated by IBM in the summer of 2018, when it published details of what it called DeepLocker.

“DeepLocker,” IBM told SecurityWeek, “uses AI to hide any malicious payload invisibly within a benign, popular application – for example, any popular web conferencing application.” But then it goes further. It could use facial recognition to trigger its payload if and only if a predefined target is recognized. The application is limited, but the principles are clear.


AI will also be used to generate new malware. “Researchers at both Google and OpenAI,” warns Sophos, “independently demonstrated that researchers can leverage neural networks to produce source code based on unstructured, natural language instructions. Such demonstrations suggest that it is only a matter of time before adversaries adopt neural networks to reduce the cost of generating novel or highly variable malware.”
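The capability Sophos describes – source code generated from natural-language instructions – is already reproducible with small open models. A harmless sketch, assuming the Hugging Face transformers library and the open CodeGen model (chosen here purely for illustration; these are not the models the researchers used):

```python
# Benign sketch of natural-language-to-code generation. The model name is an
# assumption: any small open code model would demonstrate the same point.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")
prompt = "# Python function that checks whether a string is a palindrome\ndef"
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```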

Deepfakes

Deepfakes are an example of adversarial AI already in use. They can be used for finely targeted spear phishing or, more likely, as a support for financially motivated business email compromise (BEC) frauds. The technology can be used to accurately alter the voice and appearance of an actor to sound and look like a specific CEO.

This has already occurred. In the summer of 2021, a bank manager received a call from a company director whose voice he (falsely) recognized. The caller explained that funds needed to be transferred for an acquisition, and the request was supported by apparently legitimate emails. The bank manager agreed and is thought to have transferred $35 million – of which only $400,000 is known to have been recovered.

[ READ: Becoming Elon Musk – the Danger of Artificial Intelligence ]

“As part of our threat intelligence research,” says Yotam Katz, product manager at IntSights, “we’ve been tracking ‘hacker chatter’ around deepfakes on the dark web. We’ve seen more discussions about deepfakes over the last few years.” That chatter has grown from around 40 posts in 2019 to around 100 in 2021.

“While not yet a widespread, established threat,” he continues, “this rise in activity indicates that more cybercriminals are becoming interested in deepfakes. The more we see deepfakes being talked about, the higher the chances are that we’ll see more deepfake attacks in 2022.”

His colleague at IntSights, Alon Arvatz, adds, “We predict that threat actors will look to monetize the use of deepfakes by starting to offer deepfake-as-a-service, providing less skilled or knowledgeable hackers with the tools to leverage these attacks through just the click of a button and a small payment.”

Generative Adversarial Networks

An evolution of the deepfake is the generative adversarial network (GAN) that can synthesize completely fabricated images. “While we have not yet seen widespread adversary adoption of these new technologies,” comments Sophos, “we can expect to in the coming years – for example, in the generation of watering-hole attack web content and phishing emails.”
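For readers unfamiliar with the mechanics, a GAN trains two networks against each other: a generator that fabricates samples and a discriminator that tries to tell them from real ones. A minimal PyTorch sketch of that loop, using toy one-dimensional data rather than images:

```python
# Toy GAN training loop (illustrative dimensions; real image GANs use
# convolutional networks and far more data).
import torch
import torch.nn as nn

latent, dim = 16, 64
G = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))
D = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, dim) * 0.5 + 2.0  # stand-in for real samples

for step in range(200):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, latent))
    # Discriminator learns to score real samples high and fakes low.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator learns to fool the discriminator into scoring fakes as real.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

As the two networks improve in lockstep, the generator's output drifts toward samples the discriminator cannot distinguish from the real distribution – the same dynamic that, at scale, yields fabricated faces.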

GANs can generate new animation using, for example, a selfie and a target video rather than overlaying the two. The technology is already available commercially in the app called ‘Reface’. “In our vision,” comments Reface co-founder Roman Mogylnyi, “we see the app as a personalization platform where people will be able to live different lives during their one lifetime. So, everyone can be anyone.”

The malicious use of such technology in fraud is obvious. Reface makes strenuous efforts to stop its product from being used maliciously – but the basic technology exists and is available to criminals.

As recently as December 2021, the video selfie, as a form of MFA identity verification, got a vote of confidence from identity firm Onfido. “Biometric verification provides more protection against fraud than document verification alone — and a video selfie check provides superior protection over a photo selfie check,” said Sarah Munro, director of biometrics.

If video selfies become increasingly used for identity verification, we will see a commensurate rise in the use of GANs to facilitate fraud. Other uses are likely to include disinformation campaigns and spoofed social media accounts.

Expected Threats (Abusing AI)

Cybersecurity disruption

Although AI is a powerful cyber defense mechanism with its ability to detect and connect minor network anomalies, it comes with its own pitfalls: it can behave in unexpected ways, a weakness known as ‘the brittleness of AI’. “With small untraceable variations in the training data, it can behave erratically,” explains Hany Abdel-Khalik, an associate professor at Purdue University and a computational scientist specializing in data analytics. “This very brittleness,” he told SecurityWeek, “can be exploited by the attackers to mislead/disrupt the operation of the system.”

The danger of this type of attack against AI (rather than with AI) is that it is very difficult to detect once it is in place. The defensive AI might have been poisoned into accepting certain malicious activity as benign activity – but as far as the SOC is concerned, the AI defense is working normally.
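A toy demonstration of the poisoning idea – hypothetical data and a deliberately simple scikit-learn classifier standing in for a real detection model – shows how flipping a small fraction of training labels quietly degrades a ‘malicious vs. benign’ classifier:

```python
# Toy label-flipping poisoning demo; the data and model are placeholders for
# a real security telemetry pipeline and detection model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker relabels a slice of "malicious" (class 1) training samples as
# benign, teaching the model to wave similar activity through.
malicious_idx = np.where(y_train == 1)[0][:150]  # 10% of the training set
y_poisoned = y_train.copy()
y_poisoned[malicious_idx] = 0
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

mal = y_test == 1
print("malicious detection rate, clean model:   ",
      clean.score(X_test[mal], y_test[mal]))
print("malicious detection rate, poisoned model:",
      poisoned.score(X_test[mal], y_test[mal]))
```

The poisoned model still reports healthy-looking overall metrics while missing more of the malicious class – which is precisely why, as the experts below note, such attacks are hard to spot from inside the SOC.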

“It is a highly sophisticated attack method, and one cybercriminals are undoubtedly already using stealthily to target organizations,” warns Brooks Wallace, VP EMEA at Deep Instinct. “Due to the complexity of the attack, once the SOC team has identified a potential issue, it is often already too late. The more dwell time this attack gives to the threat actors, the more opportunity they have to move throughout the network, inflicting more and more damage as they go.”

Polyakov adds, “We expect an explosion in attacks on AI, but it may be there won’t be much news about it, because those issues are hard to detect. Currently, knowledge of such threats among cybersecurity experts is very limited,” he said. “Even if breaches do happen, most companies won’t be able to detect them until things become catastrophic because most companies don’t have even basic MLSecOps and AI trust and risk management processes.”

A variation on this type of attack will be used to fool some forms of MFA. “AI may be used to detect patterns in watermarks and fingerprints,” explains Abdel-Khalik. “Once the AI algorithm learns, the latter may be successfully forged to bypass authentication. Adversaries may exploit this to misguide systems by feeding false data to sensors that are inadvertently authenticated due to the forged fingerprint.”

National security disruption

Just as AI has application outside of cybersecurity, so will adversarial AI spread to other areas, including national security. “AI techniques may be used to reverse-engineer sensor data to find the underlying model and decipher sensitive parameters of systems such as nuclear reactors,” comments Abdel-Khalik.

“Analysis of the spectral data collected from nuclear debris can be used to back-trace the source of the nuclear weapons,” he continued. “Analysis of the spectral data of an existing pile of nuclear weapons can be used to reverse engineer the proprietary designs of the weapons. In stockpile weapons treaties, there is a need to inspect the weapons to ensure they have been dismantled; however, the same process used to perform the inspection can be used to reverse engineer the design of the weapon – which makes it difficult for countries to reach agreements on how to inspect the weapons.”

In mathematical and engineering applications, recent advances in AI can be used to uncover underlying physical laws of nature from data, via differential equations – giving a clue to the identity of the system. “This trend is expected to increase in the near future,” he added, “potentially allowing AI to uncover details about the system which are considered sensitive. Critical industry stakeholders – such as oil and gas, nuclear and chemical, electric power grid, water treatment and so on – aiming to explore the potential of AI to benefit their operation, will all be facing a major conundrum. On the one hand, AI can help improve/secure their system operation, but on the other hand, the very same AI can be turned against them.”
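The kind of equation discovery Abdel-Khalik describes can be sketched in a few lines. The example below uses a SINDy-style sparse regression – our choice of method for illustration, not one he names – to recover a simple decay law, dx/dt = -kx, from synthetic measurements:

```python
# Sketch of data-driven equation discovery (SINDy-style sparse regression,
# an assumed method): recover dx/dt = -k*x from sampled data alone.
import numpy as np

k = 0.7
t = np.linspace(0, 5, 200)
x = np.exp(-k * t)                   # synthetic sensor readings of a decay
dxdt = np.gradient(x, t)             # numerical derivative of the signal

# Candidate library of terms the hidden "law" might contain: [1, x, x^2].
library = np.column_stack([np.ones_like(x), x, x**2])
coeffs, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coeffs[np.abs(coeffs) < 0.05] = 0.0  # sparsify: drop negligible terms

print("recovered dx/dt = %.2f + %.2f*x + %.2f*x^2" % tuple(coeffs))
# Expect roughly 0 + (-0.70)*x + 0*x^2, matching the underlying physics.
```

Swap the decay curve for reactor sensor telemetry and the same fitting procedure becomes a way to infer parameters the operator considers sensitive – which is the dual-use conundrum he describes.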

Adversarial AI in 2022

The conundrum faced by the critical industries applies to all uses of artificial intelligence: it is a double-edged sword. It has become a marketing requirement for new cybersecurity products to claim the inclusion of machine learning and AI. But wherever AI is used for good, it can be subverted for bad. Like most areas of cybersecurity, there is an arms race between attackers and defenders.

For the last few years, AI has given defenders the edge. But nation state adversaries and the better-resourced criminal gangs are well able to fund their own research and development into adversarial AI.

2022 may be the first year that the adversaries have the AI edge.

About SecurityWeek Cyber Insights 2022

Cyber Insights 2022 is a series of articles examining the potential evolution of threats over the new year and beyond. Six primary threat areas are discussed:

• Ransomware

• Adversarial AI

• Supply Chain 

• Nation States

• Identity

• Improving Criminal Sophistication

Although the subjects have been separated, the attacks will rarely occur in isolation. Nation state and supply chain attacks will often be linked – as will supply chain and ransomware. Adversarial AI will likely be seen primarily in attacks against identity, at least in the short term. And underlying everything is the growing sophistication and professionalism of the cybercriminal.

SecurityWeek spoke with dozens of security experts and received almost a hundred suggestions for the series. 

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
