Visibility and Control over as Much Network Traffic as Possible Will Leave Malware with Fewer Places to Hide.
Now that we are looking in the right places, we need to know what to look for. As covered in a previous column, modern malware depends upon its ability to communicate with a remote attacker while hiding or blending in with our normal allowed traffic. However, we can often detect this ongoing command-and-control traffic and other telltale signs of malware infections if we know what we are looking for.
First, the more innovative IPS and threat prevention solutions have added signatures specifically for detecting command-and-control traffic. This provides a fairly straightforward way to identify an infected host.
We can also extend our search for malicious traffic by knowing some of the tricks malware uses to communicate. A well-known example is to look for unexpected IRC traffic, or IRC traffic on non-standard ports (malware is notorious for using IRC for command-and-control). Additionally, we should look at download histories on a per-user or per-machine basis. Is the user repeatedly visiting the same site and trying to download a file? Such repetitive actions can be the sign of a bot at work as opposed to a flesh-and-blood end user.
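As a rough sketch of that per-user download heuristic, the snippet below counts how often each (user, URL) pair appears in a window of proxy-log entries and flags anything above a threshold. The log format, the threshold, and the names here are illustrative assumptions, not any product's API; real logs would need parsing and a time window.

```python
from collections import Counter

# Assumed threshold: downloads of the same file per hour that a human is
# unlikely to exceed. Tune against your own baseline.
REPEAT_THRESHOLD = 20

def flag_repetitive_downloads(log_entries, threshold=REPEAT_THRESHOLD):
    """Return (user, url, count) tuples requested suspiciously often.

    log_entries is an iterable of (user, url) pairs, e.g. parsed from a
    proxy log for a single time window.
    """
    counts = Counter((user, url) for user, url in log_entries)
    return [(user, url, n) for (user, url), n in counts.items() if n >= threshold]

# A user hammering the same payload URL stands out; casual browsing does not.
entries = [("alice", "http://example.com/report.pdf")] * 3
entries += [("bob", "http://203.0.113.7/update.exe")] * 25
print(flag_repetitive_downloads(entries))
```

In practice you would bucket entries by time window and feed the flagged pairs into whatever alerting pipeline you already run.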
In a similar vein, we may even want to watch for web-browsing requests that target a specific IP address instead of a domain name. The average end user rarely types an IP address directly into the browser, while bots are often programmed with hard-coded IP addresses. Again, the key is to look for the ways that bots behave as opposed to how people behave.
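Checking whether a requested host is an IP literal rather than a domain name is straightforward with the Python standard library; a minimal sketch:

```python
import ipaddress
from urllib.parse import urlparse

def host_is_ip_literal(url: str) -> bool:
    """True if the URL's host is a raw IP address rather than a domain name."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)  # raises ValueError for domain names
        return True
    except ValueError:
        return False

print(host_is_ip_literal("http://203.0.113.7/gate.php"))  # True
print(host_is_ip_literal("https://www.example.com/"))     # False
```

Run against HTTP Host headers or proxy logs, hits from this check are worth correlating with the other bot-behavior signals above before alerting, since some legitimate tools also fetch by IP.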
Also, look for any unapproved method that could be used to obscure traffic. This includes custom encryption and unapproved proxies. TDL-4, the latest iteration of the Alureon botnet, uses unique encryption and even installs a proxy on infected hosts in order to hide and anonymize traffic. If you start to see this kind of traffic pop up on your network, it will typically lead you to a misbehaving user or a user who is infected with malware. Also be on the lookout for unexpected use of dynamic DNS. Malware will often use dynamic DNS to muddy the water and make its actions harder to trace.
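One rough heuristic for spotting custom encryption is byte entropy: payloads on ports where we expect plaintext protocols should not look statistically random. The sketch below computes Shannon entropy per byte; the 7.0 bits/byte threshold is an assumption to tune against your own traffic, and high entropy also matches compressed or packed data, so treat hits as leads, not verdicts.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: near 0 for repetitive text, near 8 for ciphertext."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Assumed threshold; compressed images and archives will also exceed it.
HIGH_ENTROPY = 7.0

def looks_encrypted(payload: bytes) -> bool:
    """Flag payloads that look statistically random on a plaintext port."""
    return shannon_entropy(payload) > HIGH_ENTROPY

print(looks_encrypted(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # False
```

Applied to, say, traffic on port 80 or an IRC port, a steady stream of high-entropy payloads is exactly the kind of "unapproved obfuscation" worth chasing down.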
Into the Unknown Unknowns
While on the subject of things that don’t belong in the network, we should also take a look at the unknown or unclassified traffic in our networks.
For years, we typically identified traffic based only on the port it used and the source and destination IP addresses. As a classification technology, this is a crude approach given that modern applications (malware included) can hop from port to port or tunnel within allowed traffic, which made it easy for malware to blend in with the rest of the network traffic. The root of the problem was that we needed a more reliable and definitive approach to classifying network traffic.
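To make the crudeness concrete, port-based classification amounts to little more than a lookup table, which is why anything tunneled over an allowed port sails through. A toy illustration (the port map is just a few well-known assignments):

```python
# The legacy approach: map destination port straight to an application label.
PORT_MAP = {80: "http", 443: "https", 6667: "irc"}

def classify_by_port(dst_port: int) -> str:
    """Label traffic purely by destination port, ignoring the payload."""
    return PORT_MAP.get(dst_port, "unknown")

# A bot tunneling command-and-control over port 443 is waved through
# as ordinary "https" -- the classifier never inspects what is inside.
print(classify_by_port(443))   # https
print(classify_by_port(9999))  # unknown
```

Payload-aware classification, as described next, exists precisely because this mapping says nothing about what the traffic actually is.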
Next-generation firewalls (NGFWs) were initially invented to solve this classification problem. To do so, they needed to analyze all traffic regardless of port or evasion technique, and to progressively decode the traffic to distinguish one application from another. With this accomplished, security teams went from seeing undifferentiated blobs of traffic flowing through ports to seeing specific applications going to specific users. This was, and continues to be, a somewhat revelatory experience for many IT teams. However, this visibility also uncovered some things that were unexpected.
Now that we could actually see what was on our network, the outlying traffic that was “unknown” or that couldn’t be classified stood out. In many cases, some of this traffic could be identified as internally developed applications and excluded. But once that traffic was accounted for, some unknown traffic would still remain, and it was regularly found to be unknown malware and botnets operating in the network.
This has become a true game-changer for network security, and it doesn’t stop with tracking unknown traffic types. The better and more granular our classifications become, the more significant the unknowns are. For instance, an unknown or unclassified URL is significant because anything that hasn’t been classified by a reputable URL filtering solution could indicate a very newly registered domain. This matters because attackers and bot-herders regularly rotate their command-and-control infrastructure to avoid being traced, constantly standing up new sites to communicate with. Similarly, unknown or proprietary encryption can be indicative of malware. Seeing unknown traffic going to an uncategorized URL should put teams on high alert.
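The "unknowns stack up" idea can be sketched as a simple scoring rule: each independent unknown signal on a session raises its priority. The session fields and labels below are illustrative assumptions, not any vendor's schema.

```python
def alert_level(session: dict) -> str:
    """Rank a session by how many independent 'unknown' signals it carries.

    Field names are hypothetical; substitute whatever your classification
    pipeline actually emits.
    """
    signals = [
        session.get("app") == "unknown",           # unclassifiable application traffic
        session.get("url_category") == "unknown",  # URL unseen by the filtering service
        session.get("encryption") == "unknown",    # non-standard/proprietary encryption
    ]
    score = sum(signals)
    return "high" if score >= 2 else "medium" if score == 1 else "low"

# Unknown traffic to an uncategorized URL: exactly the high-alert combination.
print(alert_level({"app": "unknown", "url_category": "unknown", "encryption": "tls"}))
```

The point is not the specific weights but the compounding: one unknown is worth a look, two or more together rarely have an innocent explanation.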
These steps should provide a good starting point for getting control over malware, both in terms of preventing infections and rooting out malware that may already be in your network. The overarching focus is to regain visibility and control over as much of your traffic as possible, and if there are unknowns, to find out what they are, whether that means sandboxing unknown files or performing packet captures on unknown traffic.
The more you can see, and the finer your controls, the fewer places there will be for malware to hide.