“A bruise is a lesson” – Arya Stark, A Game of Thrones
If we can learn one thing from the notable string of breaches that have been reported publicly in the past few years, it is that the widespread movement of bad actors across the data center – and the “dwell time” hackers have gained in these environments – has been a principal contributor to the degree of damage organizations have suffered. More dwell time (weeks, months, years) and more access translate to more damage.
The ability of bad actors not only to access critical assets but to move nearly freely across the data center undetected remains somewhat mind-boggling, given that cybersecurity expenditures are forecast to approach $100 billion in the next few years. We will soon be spending a virtual Ecuador on cybersecurity without being able to stop bad actors from gaining free access to the interior of networks and applications.
Organizations of any size can do one important thing to help address this challenge: better segment their interior networks and data center operations. This is the critical strategy most IT and Security teams have not undertaken to date. Even today, the vast bulk of network security spend and attention is still focused on the perimeter. Indeed, earlier this year, Rob Joyce, the Chief of Tailored Access Operations of the NSA, gave a stunning talk at the USENIX Enigma conference with one simple message: segment everything that is critical.
So why haven’t organizations moved to address this? Why not think like a hacker and take actions that counter their moves?
It turns out that microsegmentation based on traditional network technology – switches, routers, firewalls, and so on – while highly valuable, adds complexity and risk to data center operations. It’s hard to do. Several large organizations I have worked with over the past few years have – literally – millions of firewall rules based on IP addresses, making their security policy second only to the federal tax code in complexity and fragility. How can that approach translate into the interior of the data center?
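To see why IP-based rule sets balloon into the millions, a back-of-envelope sketch helps (the numbers and function names here are hypothetical, purely for illustration): pairwise IP rules grow roughly with the square of the workload count, while policies written against application roles grow only with the number of tiers.

```python
# Hypothetical back-of-envelope illustration: pairwise IP-based firewall
# rules grow roughly quadratically with workload count, while label-based
# policies grow with the number of application tiers.

def ip_rule_count(workloads: int) -> int:
    # Worst case: one allow rule per ordered pair of workloads
    # that must communicate: n * (n - 1)
    return workloads * (workloads - 1)

def label_rule_count(tiers: int) -> int:
    # One rule per adjacent tier pair (e.g. web -> app -> db)
    return tiers - 1

print(ip_rule_count(2000))   # 3998000 -- millions of rules, as above
print(label_rule_count(3))   # 2 -- web->app and app->db
```

The point is not the exact arithmetic but the scaling: address-based policy grows with infrastructure, while role-based policy grows with application architecture.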
So, can’t we take a lesson from the hackers? The bad guys do not go after your data center all at once. They tend to look for a single vulnerability and then move out from there. This means broad-based segmentation will be limited in its effectiveness. Alternative, complementary approaches that drive segmentation closer to the application – even down to an individual (physical or virtual) server – can play a critical role in reducing the explosion of insider threats and the spread of lateral attacks. If one workload or application is compromised and your other applications are not well segmented, the bad actor can spread rapidly (the Target, OPM, and Sony breaches all started this way). Only segmentation down to the workload or application can prevent or reduce this risk.
This is particularly true now that applications have become more distributed, dynamic, heterogeneous, and hybrid. Said simply, many modern n-tier applications are no longer built to run on a single server in a single rack. I have seen a single application span dozens of data centers. A single application. How can you protect that with a firewall?
Internal segmentation approaches must adapt as environments change (workloads spin up, spin down, and move) and keep pace with the dynamic nature of new compute formats such as Linux containers, which can literally run for only a few seconds to complete a process. And they must work in hybrid environments, where IT takes advantage of physical infrastructure it does not truly control.
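A small sketch makes the "adapt to change" requirement concrete (the workload record and rule shapes here are hypothetical, not any vendor's format): when a container restarts with a new IP, a rule pinned to the old address silently stops matching, while a policy keyed to the workload's role still applies.

```python
# Hypothetical illustration: an IP-pinned rule breaks when a workload
# moves, but a label-based policy keeps matching because the label
# travels with the workload, not with its address.

workload = {"name": "app-01", "role": "app", "ip": "10.0.1.7"}

ip_rule = {"src_ip": "10.0.1.7", "dst": "db", "port": 5432}       # pinned to an address
label_rule = {"src_role": "app", "dst": "db", "port": 5432}       # pinned to a role

# The container respawns elsewhere: same role, new address.
workload["ip"] = "10.0.9.42"

print(ip_rule["src_ip"] == workload["ip"])         # False: rule no longer matches
print(label_rule["src_role"] == workload["role"])  # True: policy still applies
```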
The benefit of microsegmentation in the data center is a reduced attack surface across which bad actors can communicate and move. By locking down all unauthorized communications, much of the probing that precedes an attack can be similarly restricted, making it more difficult for a bad actor to spread laterally throughout the data center.
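The "locking down all unauthorized communications" model can be sketched as a default-deny check: a flow is permitted only if an explicit policy whitelists it, and everything else – including a compromised web server probing the database directly – is dropped. This is a minimal illustration with hypothetical roles and ports, not a real enforcement engine.

```python
# Minimal default-deny sketch (hypothetical roles, ports, and policy):
# a flow is allowed only if the (source role, destination role, port)
# tuple is explicitly whitelisted; all other traffic is denied.

ALLOWED = {
    ("web", "app", 8080),  # web tier may reach the app tier
    ("app", "db", 5432),   # app tier may reach the database
}

def is_allowed(src_role: str, dst_role: str, port: int) -> bool:
    # Default deny: anything not on the whitelist is blocked.
    return (src_role, dst_role, port) in ALLOWED

print(is_allowed("web", "app", 8080))  # True: legitimate tier-to-tier traffic
print(is_allowed("web", "db", 5432))   # False: lateral probe is blocked
```

Under this model, a bad actor who lands on a web server cannot even probe the database tier, because no policy authorizes that path.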