For many years, network security has taken a primarily reactive, top-down approach to threats: when a new malicious widget emerges, the security industry spins up anti-widget products to stop it. These technologies obviously have their place, and few of us would consider securing a network without IPS and anti-virus capabilities. However, most organizations lack a comparable bottom-up strategy to proactively identify all traffic and determine whether it is appropriate.
Typically, if traffic uses an approved port and isn't a specific known threat, it is assumed to be benign. Hackers make a living (quite literally) by exploiting the assumptions IT makes, and this one is no exception. By using protocols that are hard to control or hard to analyze, malware can easily evade traditional negative controls and blend in with all the other "assumed good" traffic. This is a problem that demands a new strategy.
It isn't enough to simply buy more mousetraps if we don't also clean the house. In network security terms, that means a return to identifying and positively controlling all traffic on the network. With such an approach, traffic isn't assumed to be good; it is confirmed to be good, and there are fewer places for malware to hide.
Controlling peer-to-peer (P2P) traffic may seem like old hat, given that many security teams have been battling P2P for years now. Peer-to-peer is a conduit for pirated content and a notorious source of malware. Making matters worse, P2P has become a protocol of choice for botnet command-and-control traffic, making botnets highly resilient and incredibly difficult to take down.
In response, the security industry offered up a variety of “anti-P2P” solutions and signatures to specifically find and stop P2P. Solutions were deployed, logs were generated and, in general, the problem was thought to be under control.
The only problem is that the strategy doesn't appear to be working. According to a recent report based on petabytes of real network traffic from thousands of organizations, P2P traffic is actually growing in the enterprise, and at a far faster rate than web-based file transfers. In fact, P2P consumed more than 60 times as much enterprise bandwidth as web-based applications. Clearly, something isn't working.
One of the key problems is that P2P technologies are highly dynamic and resilient: resilient in that they can easily survive the loss of one or many endpoints, and dynamic in that they are not dependent on any particular port and will spread many sessions across many ports to transfer information. As a result, an IPS log saying that you have detected and blocked P2P doesn't mean P2P is actually under control; you may simply have lopped off a single head of the hydra.
To realistically control P2P, you must classify all of your traffic at the application level, across all ports. Once you have established that visibility, you can limit P2P to a specific approved application, allowed only for the select few users who have an approved need. This should drastically reduce the footprint of P2P on the network, and any new attempts can be tracked as policy violations or potential malware infections.
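The positive-control model described above can be sketched in a few lines. This is an illustrative whitelist evaluator, not any vendor's policy engine; the application names, user groups, and actions are all hypothetical placeholders:

```python
# Sketch of positive (whitelist) control: flows are identified by
# application rather than port, and only explicitly approved
# (application, user-group) pairs are allowed. Everything else is a
# policy violation, not "assumed good". All names here are examples.

APPROVED = {
    # application -> user groups with an approved business need
    "bittorrent-sync": {"it-backup-team"},
    "http": {"all-users"},
}

def evaluate(app: str, user_group: str) -> str:
    """Return the policy action for one classified flow."""
    allowed_groups = APPROVED.get(app)
    if allowed_groups is None:
        # Application not on the whitelist at all.
        return "deny-and-log"
    if user_group in allowed_groups or "all-users" in allowed_groups:
        return "allow"
    # Approved application, but not for this user.
    return "deny-and-log"

print(evaluate("bittorrent-sync", "it-backup-team"))  # allow
print(evaluate("edonkey", "marketing"))               # deny-and-log
```

The key design point is the default: a flow that matches nothing is denied and logged, so unapproved P2P surfaces as a policy event rather than blending into permitted traffic.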
Classifying all traffic across all ports is an involved process that typically combines signatures, decoders and heuristics to identify known applications and protocols. When traffic is put through this sort of rigorous analysis and still isn't recognized, that should be cause for investigation.
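The layered analysis described above can be sketched as a simple pipeline. This is a toy illustration, not a real classification engine: the signature patterns are stand-ins (the BitTorrent handshake prefix is real; the rest is simplified), and the decoder and heuristic stages are placeholders:

```python
# Illustrative layered classifier: try exact signatures first, then
# protocol decoders, then heuristics. Traffic that survives every
# stage unrecognized is labeled "unknown" for investigation.

def match_signature(payload: bytes):
    # Assumption: a tiny byte-pattern table stands in for a real
    # signature set. b"\x13BitTorrent protocol" is the actual
    # BitTorrent handshake header.
    SIGNATURES = {
        b"\x13BitTorrent protocol": "bittorrent",
        b"GET ": "http",
    }
    for pattern, app in SIGNATURES.items():
        if payload.startswith(pattern):
            return app
    return None

def try_decoders(payload: bytes):
    # A real decoder would parse the protocol's structure; placeholder here.
    return None

def apply_heuristics(payload: bytes):
    # e.g., high-entropy payloads spread across many ports often suggest
    # encrypted P2P; placeholder here.
    return None

def classify(payload: bytes) -> str:
    for stage in (match_signature, try_decoders, apply_heuristics):
        app = stage(payload)
        if app is not None:
            return app
    return "unknown"  # survived rigorous analysis -> investigate

print(classify(b"\x13BitTorrent protocol..."))  # bittorrent
print(classify(b"\x00\xfa\x9c"))                # unknown
```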
When first deploying a next-generation firewall, most organizations will find some level of unknown traffic on their network, which typically falls into one of three categories: a newly released application that simply hasn't been classified yet, a custom application developed internally by the organization, or malware traffic.
From experience, the last two are by far the most common, and both require action from the security team. Malware will often employ custom protocols, both to perform the unique actions the malware requires and to avoid the patterns and signatures used by security solutions. And the use of non-standard protocols is by no means a corner case: in a recent analysis of network traffic from more than 10,000 newly detected malware samples, just under 25% were observed to generate "unknown" traffic. Needless to say, this is significantly higher than the average for any network, and it shows that this area is ripe for control.
In the case of internally developed applications, a next-generation firewall should let IT create custom identifiers for the internal application. This not only provides better protection for any proprietary applications in the organization, but also reduces the amount of unknown traffic on the network. Over time, traffic that presents as unknown can be denied by default, in effect removing a critical hiding spot for malware.
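The endgame described above, custom identifiers plus a default-deny rule for whatever remains unknown, might look like this. The application name and protocol prefix are invented for illustration; real custom identifiers would be defined in the firewall's own policy language:

```python
# Sketch of custom application identifiers combined with default-deny
# of unknown traffic. "acme-inventory-sync" and its protocol prefix
# are hypothetical examples of an internally developed application.

CUSTOM_APPS = [
    # (application name, byte prefix the internal protocol begins with)
    ("acme-inventory-sync", b"ACME1\x00"),
]

def classify_custom(payload: bytes) -> str:
    """Match traffic against the custom identifiers; else 'unknown'."""
    for name, prefix in CUSTOM_APPS:
        if payload.startswith(prefix):
            return name
    return "unknown"

def action(app: str) -> str:
    # Once internal apps are identified, whatever is still unknown
    # can safely be denied by default and flagged for investigation.
    return "allow" if app != "unknown" else "deny-and-investigate"

print(action(classify_custom(b"ACME1\x00payload")))  # allow
print(action(classify_custom(b"\x90\x90\x90")))      # deny-and-investigate
```

Each custom identifier shrinks the pool of unknown traffic, which is what makes the default-deny rule practical rather than disruptive.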
Establish Appropriate Application Behavior
This provides a long-term approach to managing threats that goes beyond simply buying anti-threat products. If we can establish appropriate behavior for our applications, along with strong baselines, we will have the context to spot the things on our network that don't belong, and that will be valuable regardless of what the next malicious widget turns out to be.