Artificial intelligence, usually in the form of machine learning (ML), is infosecurity’s current buzz. Many believe it will be the savior of the internet, able to defeat hackers and malware by learning and responding to their behavior in all but real time. But others counsel caution: it is a great aid, but not a silver bullet.
The basic problem is that if machine learning can learn to detect malware, machine learning can learn to avoid detection by machine learning. This is a problem that exercises Hyrum Anderson, technical director of data science at Endgame.
At BSides Las Vegas in August 2016 he presented ‘Deep Adversarial Architectures for Detecting (and Generating!) Maliciousness’. He described the concept of red team vs blue team gaming, in which a ‘malicious’ algorithm continually probes a defensive algorithm looking for weaknesses, and the defensive algorithm learns from those probes how to improve itself.
This week, at the Black Hat conference, Anderson takes the concept further in a presentation titled ‘Testing Machine Learning Malware Detection via Adversarial Probing’. The purpose is simple — to use machine learning to test and improve machine learning defenses. In reality, it is an important step in the continuing battle between attackers and defenders.
Omri Moyal, co-founder and VP of research at Minerva, explains. “Given the increased adoption of anti-malware products that use machine learning, most adversaries will soon arm themselves with the capabilities to evade it,” he told SecurityWeek. “The most sophisticated attackers will develop their own offensive models. Some will copy ideas and code from various publicly-available research papers and some will even use simple trial and error, or replicate the offensive efforts of another group. In this cat-and-mouse chase, the defenders should change their model to mark the evolved attack tool as malicious. A process which is the modern version of ‘malware signature’ but more complex.”
Anderson’s theories will help the defender stay ahead of the attacker by being both cat and mouse. His Black Hat presentation starts with the understanding that “all machine learning models have blind spots or hallucination spots (modeling error).” At the same time, an advanced attacker knows what models the defender uses, and can use his own ML to probe for those blind spots.
Moyal explained the implications for defenders. “Just like in previous generations of anti-virus software, attackers can constantly evaluate their malware against the machine learning model until a successful variant is created,” he told SecurityWeek. Malware authors have long tested new or repackaged malware against VirusTotal-like services to see whether it is likely to get past the defenders’ AV defenses. Now they will use ML to test their malware against the known ML defenses, seeking out the blind spots.
“The resulting specimen,” continued Moyal, “can be used against each victim whose protection relies on this model, offering the attacker a high degree of certainty the malicious program will not be detected. Attackers can also automate this process of generating malware that bypasses the model and even use offensive machine learning to improve this process.”
Anderson’s research is based on the idea of finding the blind spots and closing them before the attackers find them. Ironically, this can be achieved by doing exactly what the attackers will do — use machine learning to probe machine learning. This is nothing more than what security researchers have been doing for decades: probe software to find the weaknesses and get those weaknesses patched before they are found and exploited by the bad people.
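In miniature, that defensive loop can be sketched in code. The following is an illustrative toy, not Anderson’s actual system: the linear “detector,” the six-bit feature encoding, and the perceptron-style trainer are all invented stand-ins. The point it demonstrates is the cycle the article describes: probe the model to find an evasive variant (a blind spot), then retrain with that variant labeled malicious so the blind spot is closed.

```python
# Toy illustration (not Anderson's system): harden a detector by retraining
# on the evasive variants that an adversarial probe discovers.

def score(weights, features):
    """Linear 'detector': a positive score means 'flag as malware'."""
    return sum(w * f for w, f in zip(weights, features))

def train(samples, labels, epochs=50, lr=0.1):
    """Simple perceptron-style training loop over binary feature vectors."""
    weights = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if score(weights, x) > 0 else 0
            for i, f in enumerate(x):
                weights[i] += lr * (y - pred) * f
    return weights

def probe_for_evasion(weights, malware, flips=2):
    """Attacker side: zero out the features the detector weighs most heavily."""
    variant = list(malware)
    ranked = sorted(range(len(variant)), key=lambda i: -weights[i] * variant[i])
    for i in ranked[:flips]:
        variant[i] = 0
    return variant

# Tiny synthetic dataset: 6 binary features per 'file'.
malicious = [[1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0]]
benign    = [[0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 1, 1]]
X, y = malicious + benign, [1, 1, 0, 0]

weights = train(X, y)
evasive = probe_for_evasion(weights, malicious[0])
print("evasive variant detected?", score(weights, evasive) > 0)   # -> False

# Defender closes the blind spot: retrain with the variant labeled malicious.
weights = train(X + [evasive], y + [1])
print("after retraining, detected?", score(weights, evasive) > 0)  # -> True
```

In a real deployment the retraining step is of course far costlier, but the shape of the loop is the same: the probe that finds a weakness also supplies the labeled example that patches it.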
In today’s presentation, Anderson describes a scientific approach to evading malware detection: an AI agent that competes against the malware detector. Although this instance focuses on Windows PE files, the framework is generic and can be applied to other domains.
The agent examines a PE file and probes it to find a way to evade the malware detection model. The agent learns how to ‘beat’ the defense. However, as used by the defenders, this approach simply finds the blind spots that can then be fixed. Used solely by attackers, it finds the blind spots that can be exploited.
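That probing loop can be caricatured as a black-box hill climb. In the sketch below everything is invented for illustration (the real framework manipulates actual PE files, and its agent is far more sophisticated): a hidden-weight detector stands in for the ML model, a bit-flip stands in for a functionality-preserving file edit, and the agent, which can only query the detector’s score, keeps whichever mutations lower it.

```python
# Hypothetical sketch of black-box adversarial probing: the 'agent' only sees
# the detector's score, never its internals. A feature vector stands in for
# a PE file; all weights and features here are invented for illustration.
import math
import random

random.seed(1)

def detector(features):
    """Stand-in malware detector: score in [0, 1]; >= 0.5 means 'malware'."""
    weights = [0.9, 0.7, 0.4, -0.2, -0.5, -0.6]  # hidden from the agent
    z = sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def mutate(features):
    """One 'functionality-preserving' edit: flip a random feature bit.
    (In the real system: add a section, pad the overlay, rename imports...)"""
    variant = list(features)
    i = random.randrange(len(variant))
    variant[i] = 1 - variant[i]
    return variant

malware = [1, 1, 1, 0, 0, 0]               # starts out detected
best, best_score = malware, detector(malware)

for _ in range(200):                       # greedy hill climb on the score
    candidate = mutate(best)
    s = detector(candidate)
    if s < best_score:                     # keep mutations the detector likes less
        best, best_score = candidate, s

print("original score:", round(detector(malware), 3))
print("evasive score: ", round(best_score, 3), "-> evades?", best_score < 0.5)
```

Used by a defender, the variants this loop produces are exactly the blind-spot samples worth feeding back into training; used by an attacker, they are the payloads that slip past the model.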
Anderson’s key takeaway is that a machine learning anti-malware product, freshly bought and installed, will offer early success in malware protection, but will quickly become porous against advanced adversaries. Staying one jump ahead of the bad guys has always been, and remains, the key to infosecurity, even in the age of artificial intelligence.
Hyrum Anderson, Bobby Filar, and Phil Roth from Endgame, together with Anant Kharkar from the University of Virginia, have published an associated white paper: Evading Machine Learning Malware Detection (PDF).