Network Security

Using Machine Learning for Red Team vs. Blue Team Wargames

Deep in the DNA of threat detection firm Endgame are the twin concepts of stealth and offense. This is contrary to the usual view of information security, which often portrays itself as visible and defensive. Endgame, however, believes that defenders need to accept the methods of the modern adversary (the stealthy attack), and adapt as required.

That adversary looks for signs of defense and then alters its behavior to avoid or negate that defense. Examples include the many malware families that look for signs of virtualization and then either try to evade the sandbox or simply cut and run to avoid analysis. Another example is Casper, which would examine Windows' AntiVirusProduct WMI class to see which anti-malware products were registered, and would then take steps to bypass or avoid them. Endgame suggests that only by adopting defensive stealth of its own, hiding in the same security-control blind spots, can the defender stop the adversary from recognizing and evading detection.
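As an illustration of the kind of check involved (a sketch, not Casper's actual code), the adversary's decision logic might look like the following. On Windows the real lookup would be a WMI query (SELECT * FROM AntiVirusProduct in the root\SecurityCenter2 namespace); here that query is stubbed out, and the product names and tactics are invented for illustration:

```python
def query_registered_av():
    """Stand-in for the WMI AntiVirusProduct query (hypothetical data)."""
    return ["Windows Defender", "ExampleAV Endpoint"]

# Hypothetical mapping of detected products to evasion tactics.
EVASION_PLAYBOOK = {
    "Windows Defender": "delay-execution",
    "ExampleAV Endpoint": "abort",
}

def choose_behavior(products):
    """Pick the most conservative tactic given the detected products."""
    tactics = [EVASION_PLAYBOOK.get(p, "proceed") for p in products]
    if "abort" in tactics:
        return "abort"            # cut and run to avoid analysis
    if "delay-execution" in tactics:
        return "delay-execution"  # try to outwait sandbox timeouts
    return "proceed"
```

The point of the sketch is the branch: the malware's behavior is a function of the defenses it can see, which is exactly the blind spot a stealthy defense denies it.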

The offensive side of Endgame's approach is more familiar. Defenders should accept that traditional security defenses will not prevent targeted, stealthy, advanced attacks, and should therefore actively look for, or hunt out, subtle indicators of compromise on their networks. The purpose is to minimize the adversary's dwell time, or time-to-detection, which according to FireEye's Mandiant currently stands at 146 days. The way to find them is through big data anomaly detection, which is a hugely labor-intensive task. In fact, it is realistically best achieved by using machine learning (ML) techniques that allow the computer itself to detect and correlate those anomalies.
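As a toy illustration of the underlying idea (not Endgame's pipeline), even a minimal statistical detector can flag events that deviate sharply from a baseline; real hunt platforms use far richer features and models, but the principle of automated outlier detection is the same. The data below is invented:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations
    from the mean -- a minimal stand-in for the statistical anomaly
    detection an ML pipeline performs at scale."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical per-host counts of outbound connections in one hour;
# the last host's count is wildly out of line with its peers.
counts = [12, 9, 11, 10, 13, 8, 11, 250]
```

Run against these counts with a threshold of two standard deviations, only the final host is flagged, which is the kind of subtle-but-automatable signal a human analyst would struggle to correlate across thousands of hosts.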

In short, threat detection should involve ML capabilities actively but stealthily hunting the adversary. But stealth itself is a process rather than a feature. The attacker doesn't just attack; he also continuously reconnoiters and probes the defense, looking for weaknesses. To beat the attacker, the defender needs to find its own weaknesses, and close them, before the attacker can find and exploit them. The problem is how to achieve this.

At the BSides Las Vegas gathering this week, Endgame's principal data scientist Hyrum Anderson proposed a new approach in a presentation titled 'Deep Adversarial Architectures for Detecting (and Generating!) Maliciousness' — or, as it could be subtitled, 'Physician, heal thyself!'.

The concept is simple, although the mathematics is not. If ML can train your computer to detect subtle advanced threats, then you can equally use ML to test and probe your own defenses. Anderson describes the development and use of an ML Red Team designed to attack the ML Blue Team that is your in-house threat detection system.

In his presentation, Anderson explained how you can "use deep learning against itself in a red-team vs. blue-team adversarial game, by which both models improve. The Red Team learns to generate maliciousness, and the Blue Team learns to differentiate malicious from benign events."

It involves teaching the Red Team to understand how adversaries operate. With that understanding, the Red Team attacks the Blue Team. If any attack succeeds, then lessons learned are built into the Blue Team’s defensive algorithms, and a loophole or blind-spot is closed. Where attacks fail, lessons learned are built into the Red Team’s algorithms; and the process starts again.

Anderson describes it as a ‘non-cooperative game theoretic framework.’

“Through a series of rounds, the Red Team model generates malicious samples intended to find blind spots in the Blue Team detector model. In turn, the Blue Team detector model learns to patch the blind spots discovered by the Red Team model. As these two models compete, the generator’s ability to produce adversarial samples that bypass defenses improves. Meanwhile, the detector becomes more robust to adversarial attacks that are simulated by the generator.” In short, the best way to improve machine learning threat detection is through machine learning simulated attacks.
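The round-based dynamic described above can be sketched with a deliberately tiny model. Everything here (the one-dimensional "obfuscation score" feature, the threshold "detector", and the fixed-step "generator") is an invented simplification of the deep models Anderson describes; the point is only the alternating improve/patch loop:

```python
import random

random.seed(0)

# Synthetic one-feature events: benign events score low, malicious high.
benign = [random.uniform(0.0, 0.4) for _ in range(200)]
malicious = [random.uniform(0.6, 1.0) for _ in range(200)]

def train_blue(benign, malicious):
    """Blue Team 'detector': a threshold midway between the highest benign
    score and the lowest malicious score seen so far."""
    return (max(benign) + min(malicious)) / 2

def red_attack(malicious, threshold, effort=0.05):
    """Red Team 'generator': push malicious samples toward the benign
    region, stopping just below the Blue Team's current threshold."""
    evasive = []
    for m in malicious:
        candidate = max(threshold - effort, 0.0)
        if candidate < m:  # only keep mutations that actually lower the score
            evasive.append(candidate)
    return evasive

threshold = train_blue(benign, malicious)
for round_ in range(5):
    evasions = red_attack(malicious, threshold)
    if not evasions:
        break
    # Blue Team patches the blind spot: the evasive samples are added to
    # the malicious training set, which pulls the threshold down.
    malicious = malicious + evasions
    threshold = train_blue(benign, malicious)
```

Each round the Red Team slips just under the detector's boundary and the Blue Team retrains to close that gap, so the threshold ratchets downward toward the edge of the benign population, mirroring (in miniature) how the generator's samples get harder to distinguish while the detector grows more robust.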

Separately, it was announced yesterday that Accenture and Endgame are partnering to offer 'threat detection as a service'. This service will pair Accenture's experienced threat hunters with Endgame's technology. "Rather than building a taller defensive wall," explained Accenture's Vikram Desai, "we're giving our clients the ability to strike first – to stop adversaries before they attack."

“We need to compress adversary dwell time,” added Nate Fick, CEO of Endgame, “by vigorously hunting across the enterprise architecture and terminating malicious behavior before it can get too far. Endgame and Accenture’s joint solution [will] deliver an always on, end-to-end hunt solution that simply outsmarts traditional indicators of compromise and signature-based tools.”

Endgame, which historically has been known for selling tools and zero-day exploits to government customers for offensive purposes, began shifting its focus to sell its military-grade security intelligence and analytics platform to enterprise customers over the past few years.

*Correction: Presentation was at BSidesLV, not Black Hat.
