For a long time, the focus of network protection was on the outside. The idea was that gathering enough data about threat types and patterns and feeding it to security devices at the perimeter would mean the bad guys got stopped at the network’s door. Now we’re finding that the majority of our networks were breached long ago and likely house the malware and communications paths that will eventually lead to successful compromise and theft of data. It’s time to take a long, hard look inside our networks and understand where these bad actors have a footprint.
That’s a pretty bold statement to make, isn’t it? That a breach occurred in your network long ago, and that the focus should be on finding it, or them.
First, let’s understand whether this applies to you. Do you limit use of virtual machines to noncritical workloads and have vMotion turned off? Do you restrict users from connecting to the network with self-sourced devices, and if yes, do you restrict their access to guest network segments regardless of security profile? Is your network microsegmented with firewalls and Intrusion Prevention Systems (IPS) at all organizational and geographical boundaries? Do you monitor your network 24/7 and analyze traffic daily for anomalous pattern detection? Do you limit use of social media on your networks including restricting the ability to send attachments and files via the medium? Do you decrypt all SSL traffic and inspect it for the presence of malware?
If you answered NO to any one of these questions, then chances are pretty high that some form of malware has made its way onto your systems or those of your employees and contractors. Whether that malware has been leveraged for data theft or data exfiltration is another matter entirely. Some of these attacks are the type that persist, taking weeks to months to execute. Turning your eyes inward to detect where compromises occurred sooner rather than later has the potential to limit your risks substantially.
How did we get here? While complex threats are an easy place to lay blame, the real culprit is a lack of visibility. Take virtual machines, for instance. They are easily provisioned with a mouse click, often by non-security personnel. They can move (via vMotion) across physical hosts and may migrate to areas of low security where Internet connectivity is possible. Rarely are VMs provisioned with restrictive policies, so one VM infection can potentially spread to others, and all the while this movement or proliferation of malware goes unnoticed because the security applications that would uncover the infection simply can’t see the VM-to-VM traffic flows.
As with VM-to-VM communications, SSL traffic in networks can be opaque to security inspection. Since decryption is computationally intensive, many security tools don’t support it, or security administrators mark the traffic as “trusted” because there’s an authentication front end. Still, if malware is embedded in that traffic, it will go undetected, and it often does. It might seem unlikely that security administrators and stakeholders would forgo security checks that are easy to implement, but they often must, either for performance reasons or simply because they lack the technical and human resources to do the right thing. Round-the-clock monitoring and analytics, for instance, is undoubtedly the best way to increase the odds of finding unwanted network traffic or malicious activity.
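To give a feel for what daily traffic analysis can catch, here is a minimal Python sketch, with purely hypothetical hosts and byte counts, that baselines per-host traffic volume and flags statistical outliers: the kind of sudden spike that exfiltration over an otherwise-trusted channel can produce.

```python
import statistics

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose current traffic volume deviates from their
    historical baseline by more than `threshold` standard deviations.

    baseline: dict of host -> list of past daily byte counts
    current:  dict of host -> today's byte count
    """
    flagged = []
    for host, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            continue  # no variation on record; can't score this host
        z = (current.get(host, 0) - mean) / stdev
        if abs(z) > threshold:
            flagged.append((host, round(z, 1)))
    return flagged

# Illustrative data: host .9 suddenly moves far more data than usual.
baseline = {
    "10.0.0.5": [100, 110, 95, 105, 98, 102, 99],
    "10.0.0.9": [200, 210, 190, 205, 195, 202, 198],
}
current = {"10.0.0.5": 104, "10.0.0.9": 900}
print(flag_anomalies(baseline, current))  # flags only 10.0.0.9
```

Real deployments would baseline far richer features (ports, destinations, timing), but the principle is the same: you can only score traffic you can actually see.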
Without this sort of pervasive monitoring, what chance do network administrators have when the launching pads for attacks and advanced persistent threats multiply with each of these next-gen advancements and with every new mobile device? Limiting this risk through fine-grained controls and restricted access is still an evolving technology space whose effectiveness is poised to improve. But today your defenses are concentrated largely at the perimeter or at ad hoc segments of a vastly growing network, while east/west, encrypted and mobile traffic carries the biggest risk and is subject to the least scrutiny. This has to change.
Where to start is obviously the key question, and there are no easy answers, but here is some general guidance.
Figure 1: Security to kill chain mapping courtesy of Gigamon
Study the cyber attacker kill chain to understand how attackers leverage the network for lateral movement and impersonation in order to find critical data and exfiltrate it. Take an inventory of your existing security technologies and operational tactics. Do you have something deployed to disrupt every step of the kill chain? How much risk would be mitigated if you closed some of the areas where you have deployment holes? Is your network architected to maximize use of these technologies by feeding them all of the data required for proper detection, response and analysis depending on the tool type?
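The inventory exercise above can start as simply as a table mapping kill chain stages to the controls you actually have deployed, then listing the stages with no coverage. A minimal sketch, with stage names from the common kill chain model and purely illustrative tool names:

```python
# Stages of the cyber attacker kill chain, in order.
KILL_CHAIN = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command_and_control",
    "actions_on_objectives",
]

# Hypothetical inventory: which controls disrupt which stage.
# (These entries are illustrative, not a recommendation.)
deployed = {
    "delivery": ["email gateway", "web proxy"],
    "exploitation": ["IPS"],
    "command_and_control": ["DNS monitoring"],
}

# Stages with no deployed control are your deployment holes.
gaps = [stage for stage in KILL_CHAIN if not deployed.get(stage)]
print("Uncovered stages:", gaps)
```

Even this trivial exercise forces the right question: for each uncovered stage, how much risk would closing that hole actually mitigate?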
For most organizations, the answer to this last question is no. To raise security tool efficacy for insider threat detection, consider redeploying your security devices, appliances and applications on a visibility fabric. A six-step process can remake your security architecture into an insider threat detection platform with fewer false positives and higher catch rates.
1. TAP all critical links. Don’t rely on SPAN ports, which suffer from sampling and missed packets.
2. Connect all TAPs to a visibility fabric. This will aggregate traffic and metadata.
3. Connect inline tools to inline fabric ports. Adding fault tolerance for IPSes and firewalls prevents the fail-closed problem.
4. Connect all out-of-band security tools. Now all analytics and detection tools will see every network packet and its metadata without contending with their peers.
5. Use traffic manipulation and grooming. Steering the right traffic to security tools alleviates the computational burden of inspecting unwanted traffic.
6. Add non-security tools to the visibility fabric. Performance management tools can also benefit from complete network traffic views for faster troubleshooting.
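Step 5 is where much of the efficiency gain comes from. Here is a minimal sketch of flow-based steering logic; the rule predicates and tool names are assumptions for illustration, not any particular fabric’s configuration language.

```python
# Ordered steering rules: (match predicate, destination tool).
# First match wins; None means the traffic isn't worth tool cycles.
# Tool names here are hypothetical placeholders.
RULES = [
    (lambda f: f.get("dst_port") == 443, "ssl-decrypt"),   # inspect encrypted flows
    (lambda f: f.get("dst_port") == 53, "dns-analytics"),  # C2 often hides in DNS
    (lambda f: f.get("proto") == "icmp", None),            # groomed out entirely
]

def steer(flow, default="ids"):
    """Return the tool that should receive this flow (or None to drop)."""
    for match, tool in RULES:
        if match(flow):
            return tool
    return default  # everything else goes to the general-purpose IDS

print(steer({"dst_port": 443, "proto": "tcp"}))   # ssl-decrypt
print(steer({"dst_port": 8080, "proto": "tcp"}))  # ids
```

The point of grooming is exactly this division of labor: each tool sees only the traffic it is built to analyze, instead of burning cycles discarding everything else.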
Prevention-focused security architectures had their place and evolved with the trust models and network demarcation points of their times. They now need a counterpart on the inside of networks, where an assumption of compromise has to guide the design. Which parts of the kill chain to focus on, and which technologies and processes make the most sense, will vary by organization. What is not in question, however, is that pervasive network visibility is now ground zero for network security.