
An Occam’s Razor for Security, Part 2

“Defense is attack, attack is defense, each being the cause and result of the other.” – Bruce Lee

In my previous column, An Occam’s Razor for Security, I argued that simply adding more security technology into the heart of the data center (and public cloud) does not logically equate to a safer environment. I would actually posit the opposite: complexity, which additional infrastructure frequently introduces, is one of the enemies of security.

The network security industry exemplifies this situation.  An entire generation of firewall technology has spawned an awkward sub-industry of rule management.  In the largest enterprise deployments, firewall infrastructures can accumulate millions of rules and require the security equivalent of federal tax preparers to slowly and painfully untangle rule and policy sprawl.  If your security operations force you to live with an enormous lack of clarity and require “experts” to do simple tasks, maybe it’s time to think again.

In security, extracting simplicity is more valuable than mastering complexity.  With the evolution to cloud-centric architectures and distributed applications, however, security architectures built on top of network hierarchies (all networks need hierarchies or they will collapse) run counter to the increasingly dynamic and distributed nature of modern computing.  Physical or virtual chokepoints built for North-South traffic, along with additional “fabrics,” create hairpin traffic-steering nightmares when pressed into solving the intra- and inter-application security requirements of today’s world, all while trying to keep up with the growing number of short-lived software components such as Linux containers.  This situation demands a rethinking of security and network architecture for distributed computing.

And there is one more thing: the increasing cyber threat inside data center and cloud environments means that security controls must be placed closer to the data, not at the perimeter.  We need to make the cyber attack kill chain longer and more difficult for bad actors to traverse.  A weakly protected development workload on the same network segment as a high-value database is a nightmare waiting to happen.

In the spirit of Occam’s Razor, it is important to understand the short list of actions that can reduce cyber incursions and the lateral spread of attacks.  Chief among them is adaptive segmentation at the compute layer.  Drawing tighter and tighter boundaries around applications or tiers of applications makes it more difficult for bad actors to operate and spread across data center environments, without the operational burden, traffic steering, and cost of chokepoint technologies.
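To make the idea concrete, adaptive segmentation at the compute layer can be thought of as a default-deny allowlist keyed on workload labels rather than network addresses. The sketch below is purely illustrative (the labels, rule set, and function names are hypothetical, not any vendor’s API), but it shows how a label model prevents a dev workload from ever reaching a production database:

```python
# Minimal sketch of label-based segmentation (hypothetical model, not a product API).
# Workloads carry labels; traffic is permitted only when an explicit rule matches.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    role: str   # e.g. "web", "db", "dev"
    env: str    # e.g. "prod", "dev"

# Explicit allowlist of (source role, destination role, destination port).
# Anything not listed is denied by default.
ALLOWED_FLOWS = {
    ("web", "db", 5432),   # the web tier may reach the database on its service port
}

def is_allowed(src: Workload, dst: Workload, port: int) -> bool:
    # Tighter boundary: cross-environment traffic is never allowed,
    # so a weakly protected dev workload cannot touch a prod database.
    if src.env != dst.env:
        return False
    return (src.role, dst.role, port) in ALLOWED_FLOWS

web = Workload("web-01", role="web", env="prod")
db  = Workload("db-01",  role="db",  env="prod")
dev = Workload("build-7", role="dev", env="dev")

print(is_allowed(web, db, 5432))   # True: explicitly allowed flow
print(is_allowed(dev, db, 5432))   # False: dev cannot reach prod
print(is_allowed(web, db, 22))     # False: no rule for SSH, default deny
```

Note the design choice: because policy is expressed in labels, the rules do not change when workloads move, scale, or get new IP addresses, which is what makes the segmentation “adaptive.”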

Taking this observation a step further, the defenses to guard dynamic computing need to be built deep into the heart of the data center itself. These defenses must include the following properties:

  • Dynamically monitoring every server and application;
  • Performing unobtrusively while adding little to no operational overhead;
  • Minimizing propagation of attacks at the most granular layer;
  • Quickly dealing with any violations of security policies; and
  • Allowing the compute layer to participate in its own defense.

This last point is critical.  If security can increasingly be distributed into the compute layer, effectively a form of self-protection, we begin to shift the playing field from attackers to defenders.  Imagine if each element of your IT stack became security aware, with its own firewall and its own alert system: you would be creating a kind of immune system.  Immune systems cannot completely defeat infections and diseases, but they make it far harder for them to cause damage.  Imagine if your security approach operated the same way.
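The immune-system analogy above can be sketched in a few lines: each compute instance enforces a local default-deny check and, just as importantly, reports what it blocked, so every host doubles as a sensor. Everything here (peer list, function name, alert format) is a hypothetical illustration, not a real agent implementation:

```python
# Hypothetical sketch of a per-workload "immune response": local enforcement
# plus an alert on every violation, so each host is both a wall and a sensor.
ALLOWED_PEERS = {"10.0.1.5", "10.0.1.6"}  # peers this workload may talk to

alerts = []  # stand-in for a feed to a central monitoring system

def on_connection(peer_ip: str) -> bool:
    """Allow only known peers; record an alert for anything unexpected."""
    if peer_ip in ALLOWED_PEERS:
        return True                      # permitted flow
    alerts.append(f"blocked unexpected peer {peer_ip}")
    return False                         # default deny, with visibility

print(on_connection("10.0.1.5"))     # True: known peer
print(on_connection("203.0.113.9"))  # False: blocked and alerted
print(alerts)                        # the blocked attempt is now visible
```

With this pattern replicated across 10,000 instances, an attacker’s first lateral move both fails and announces itself, which is exactly the kill-chain lengthening described above.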

This can help turn the tables.  If you have 10,000 compute instances (servers, VMs, containers) in your data center and cloud, you now have 10,000 points of visibility and enforcement to counter the lateral spread of attacks.  It’s like the old phrase about pets and cattle: when you had 10 servers, you treated them like pets; at any given time, you knew what they were, what they did, and whether something had tampered with them.  Simple, right?  When you have 10,000 servers, you treat them like cattle, constantly shuttling the traffic among them through central gates (chokepoints).  If one cow starts to call out, you do not notice it in the herd.

Activating the security capabilities you already have can be more valuable than simply adding something new—Occam’s Razor. 

Alan S. Cohen is chief commercial officer and a board member at Illumio. He leads Illumio’s go-to-market strategy and customer engagement life cycle organizations, including marketing, support, talent and IT. He is a 25-year technology veteran known for company building and new-market-creation experience. Alan’s prior two companies, Airespace (acquired by Cisco) and Nicira (acquired by VMware), were the market leaders in centralized WLANs and network virtualization, respectively. He also is an advisor to several security companies, including Netskope and Vera.