As attackers have become better at evading traditional signatures and malware sandboxes, security teams are increasingly turning to behavior-based detection models to find the signs of an active cyber attack. This behavioral approach to finding threats has clear advantages. Behavioral detection models can focus on what the attacker actually does, instead of relying on a set of signatures or known indicators of compromise that often lag behind attackers.
For example, while the perimeter IPS may have missed a drive-by download, behavioral analytics could recognize that the victim end-user has started to behave very strangely – perhaps trying to access abnormal resources or downloading an abnormal volume of files. This is exactly the sort of thing that the original intrusion detection systems were designed to do back in the 1980s.
However, we’d also be remiss if we didn’t remember why behavioral approaches to IDS fell out of favor in the first place. More often than not, analytics based on user behavior will identify anomalies as opposed to threats. Joe in accounting is downloading more data than he normally does, but is that a sign of an attack, or does Joe simply need to access a lot of data for a report he is working on?
This sort of user behavior modeling can tell us when something doesn’t seem normal, but it is often inconclusive and requires an analyst to investigate. The shortage of time and talent on real-world security teams typically means that these anomaly-based detections become noise that ends up being ignored.
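To make the Joe-in-accounting example concrete, a minimal sketch of this kind of user behavior modeling is a simple statistical baseline: flag any day whose download volume sits several standard deviations above the user's history. The data and threshold below are hypothetical, and note what the result does and doesn't tell you – it flags the spike, but says nothing about intent.

```python
import statistics

# Hypothetical baseline: MB downloaded per day by one user over recent days.
baseline = [120, 95, 130, 110, 105, 140, 98, 125, 115, 102]

def is_anomalous(todays_mb, history, threshold=3.0):
    """Flag today's volume if it sits more than `threshold` standard
    deviations above the historical mean (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return todays_mb > mean
    return (todays_mb - mean) / stdev > threshold

# Joe downloads 900 MB today -- far above his usual ~115 MB/day.
print(is_anomalous(900, baseline))  # True: anomalous, but is it a threat?
```

The detection fires, yet an analyst still has to determine whether 900 MB means exfiltration or a quarterly report – which is precisely the noise problem described above.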
Refocusing on Attacker Behaviors
While detections based on end-user behaviors are extremely important, we need to complement them with better detections for attacker behaviors as well. By attacker behavior, I mean a return to detecting the tools and techniques of an attack. Ultimately, if we can’t distinguish good from bad, then anomalies will remain ambiguous noise that creates more work for overloaded analysts, and relying on manual human analysis simply doesn’t scale in most environments.
If you know what to look for, malicious tools and techniques have distinguishing behaviors that can be identified. For example, attackers often rely on custom tunneling tools to control their attack. These tools are customized to bypass signatures and intelligence feeds, but they also share a characteristic set of fundamental behaviors. The initial connection comes from an infected end-user device inside the network, so the traffic blends in with normal Internet traffic. With the connection established, the remote attacker can take real-time control of the internal host to drive the attack. Behaviorally, this action stands out: the connection no longer behaves like an internal human talking to an external server. In fact, the reverse is true – an external human is controlling one of your network devices as a drone. This behavior isn’t anomalous relative to past behavior; it is a significant risk based on how the connection is actually behaving.
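The reversed direction of control described above can be sketched as a flow-level heuristic. In a normal client/server session, the internal host sends small requests and receives large responses with machine-regular timing; in a remote-control tunnel, the external side sends small, irregularly timed commands (a human typing) and the internal host answers with larger output. The `Flow` record, field names, and thresholds below are all hypothetical simplifications for illustration.

```python
from dataclasses import dataclass, field
from statistics import pstdev

@dataclass
class Flow:
    """Hypothetical summary of one outbound session (internal -> external)."""
    bytes_out: int                       # sent by the internal host
    bytes_in: int                        # sent by the external server
    inbound_gaps: list = field(default_factory=list)  # seconds between inbound packets

def looks_remote_controlled(flow, jitter_threshold=1.0):
    """Heuristic sketch: flag sessions where the external side drives the
    conversation (replies dwarf the inbound commands) with human-like,
    irregular command timing."""
    external_drives = flow.bytes_out > flow.bytes_in
    human_timing = pstdev(flow.inbound_gaps) > jitter_threshold
    return external_drives and human_timing

# A web download: large inbound payload, machine-regular packet timing.
download = Flow(bytes_out=2_000, bytes_in=5_000_000, inbound_gaps=[0.01] * 20)
# A suspected tunnel: small bursty inbound commands, large outbound replies.
tunnel = Flow(bytes_out=400_000, bytes_in=9_000,
              inbound_gaps=[0.5, 4.2, 0.3, 7.9, 1.1])
print(looks_remote_controlled(download), looks_remote_controlled(tunnel))
```

The point of the sketch is that the tunnel is flagged not because it is unusual for this host, but because the session itself behaves like external control.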
This is just one example, but the concept applies to all types of threats and can be invaluable in streamlining the daily management of security events. A host may begin behaving anomalously by visiting unusual domains. However, being able to recognize the behavior of Bitcoin mining on the host lets an analyst know specifically what the issue is. This insight alone could help the analyst prioritize the event and avoid time-consuming manual analysis.
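As a concrete illustration of recognizing mining behavior, pool-based miners commonly speak Stratum, a JSON-RPC protocol whose method names carry a distinctive `mining.` prefix – a behavior that identifies the traffic even when the destination domain has never been seen before. This is a sketch only: real traffic may be batched, fragmented, or encrypted, and the sample payload is fabricated.

```python
import json

# Methods used in the Stratum mining-pool protocol handshake and work flow.
STRATUM_METHODS = {"mining.subscribe", "mining.authorize", "mining.submit"}

def is_mining_traffic(payload: bytes) -> bool:
    """Return True if a captured payload line parses as a Stratum request."""
    try:
        msg = json.loads(payload)
    except (ValueError, UnicodeDecodeError):
        return False
    return isinstance(msg, dict) and msg.get("method") in STRATUM_METHODS

sample = b'{"id": 1, "method": "mining.subscribe", "params": ["cpuminer/2.5"]}'
print(is_mining_traffic(sample))  # True
```

Matching the protocol's behavior rather than a domain blocklist is what turns "this host is acting oddly" into "this host is mining cryptocurrency."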
The point here isn’t to say one approach is better than another, but rather to show that there is an important middle step between traditional signatures and anomaly detection. Behavior-based detection models can see the things that simple signatures miss, and can provide more clarity than looking at anomalies alone. These are complementary approaches that ideally work with one another in context. That gives us multiple perspectives on threats, and ultimately that is what will keep us safe even as threats continue to evolve.