Network Security

The Truth About Micro-Segmentation: It’s Not About the Network (Part 1)

Never confuse a marketecture with an architecture. A marketecture is a simplified representation of a company’s technology, designed to show what a product can do.

By contrast, an architecture represents a product’s actual components, their interrelationships and how they operate. It is how things really work. And the devil is in the details, especially when it comes to micro-segmentation.

Since VMware introduced the concept of micro-segmentation for data center security about three years ago, the security and networking industries have been racing to introduce competing technologies to reduce the lateral spread of bad actors in the data center and cloud. One of the key insights of VMware’s NSX team – which my team fully subscribes to – is that traditional networking technology presents a boatload of limitations to implementing micro-segmentation at scale. As they noted in August 2014:

The idea of using network micro-segmentation to limit lateral traffic isn’t new, but until recently, it was never feasible. Even if you blanketed your data center with a legion of hardware firewalls, there’d be no way to operationalize them, and the costs would be astronomical. Until now. 

A purely network-centric approach to micro-segmentation cannot operate at scale or deliver complete data center and cloud security. Traffic steering, service-chaining and the requirement for proprietary network operations fall apart in today’s dynamic, distributed and heterogeneous computing environments. The proprietary network chokepoint/enforcement-point model re-introduces the complexity of client-server technology into the cloud world. When a vendor talks about throughput in the context of security in today’s hybrid cloud world, they are reaching for the past, not the future.

The big “aha” for security and networking teams is not that segmentation will support better data center hygiene – they already knew that. What is different is the realization that network segmentation is related to security segmentation, but is not the same thing. While network segmentation and security segmentation both introduce forms of isolation, they were built for different purposes:

● Network segmentation was initially designed to create smaller networks (subnets) to contain performance problems such as layer 2 broadcast storms, and only later to provide isolation. It is built on top of IP addressing.

● Security segmentation focuses on the policy model of applications: should applications and application components be allowed to communicate? It is built on top of data/workload tagging.

A great example of this is the failure of network technology to allow a server to live in multiple dimensions. Can a database serve two different applications that live on different network segments?
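To make the contrast concrete, here is a minimal sketch in Python. The subnets, labels and policy structure are purely illustrative assumptions, not any vendor’s API, but they show why an address-based rule and a label-based rule answer the shared-database question differently.

```python
# Hypothetical sketch contrasting the two segmentation models (illustrative names only).
from ipaddress import ip_address, ip_network

# --- Network segmentation: policy is keyed on IP addressing ---
ALLOWED_SUBNET = ip_network("10.1.0.0/24")  # the "app tier" subnet

def network_rule_allows(src_ip: str) -> bool:
    # The rule only understands addresses; it has no notion of which application is talking.
    return ip_address(src_ip) in ALLOWED_SUBNET

# --- Security segmentation: policy is keyed on workload labels ---
WORKLOAD_LABELS = {
    "10.1.0.5":  {"app": "ordering", "tier": "web"},
    "10.2.0.9":  {"app": "billing",  "tier": "web"},   # lives on a different subnet
    "10.3.0.12": {"app": "shared",   "tier": "db"},    # one database serving both applications
}

POLICY = [
    {"from": {"app": "ordering", "tier": "web"}, "to": {"tier": "db"}},
    {"from": {"app": "billing",  "tier": "web"}, "to": {"tier": "db"}},
]

def label_rule_allows(src_ip: str, dst_ip: str) -> bool:
    # Allow the connection if any policy rule matches the labels of both endpoints.
    src, dst = WORKLOAD_LABELS.get(src_ip, {}), WORKLOAD_LABELS.get(dst_ip, {})
    return any(
        all(src.get(k) == v for k, v in rule["from"].items())
        and all(dst.get(k) == v for k, v in rule["to"].items())
        for rule in POLICY
    )

print(network_rule_allows("10.2.0.9"))             # False: wrong subnet, even though billing is legitimate
print(label_rule_allows("10.2.0.9", "10.3.0.12"))  # True: the policy follows the workload, not the address
```

In the label-based model the database happily serves both applications, wherever their segments happen to sit; in the address-based model the policy has to bend the network around the rule.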

Don’t Sell Me Micro When You Mean Macro

The original segmentation model for the data center was the network security perimeter firewall. Because it was a single chokepoint that could process a blacklist model at line speed, it has been manifested in hardware devices with increasing levels of capacity and throughput. Network devices do a good job of coarse-grained macro-segmentation, not only for the perimeter, but for well-defined zones that provide environmental separation within relatively static boundaries.

Where networking fails – and this includes the network stack in the hypervisor or containers – is where you need the more granular security segmentation of true micro-segmentation. As you move to ringfence applications, tiers of applications or individual workloads, the network and hypervisor models both lack the context and the flexibility to do the job. What happens if an application spans several data centers? Would you hairpin traffic back to an enforcement point? Even worse, what happens when an organization has dozens of data centers that support a single application?

While the network- and hypervisor-centric versions of micro-segmentation do a fine job of macro-segmentation (i.e., environmental segmentation), they become complex and operationally stultified when you move to true micro-segmentation.

This becomes self-evident as we move to the need to segment processes or individual ports at the workload or container level. How would you use a network to segment across 10,000 dynamic ports in Active Directory? How does it work at the container level?
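As a rough illustration – the service names and Connection structure below are hypothetical assumptions, not a real product interface – consider what a host-level check looks like when policy is keyed on the service that owns a socket rather than on port numbers:

```python
# Hypothetical sketch: a workload-level check keyed on service identity, not port ranges.
from dataclasses import dataclass

@dataclass
class Connection:
    src_workload: str   # label of the connecting workload
    dst_service: str    # identity of the process that owns the listening socket
    dst_port: int       # e.g. an ephemeral RPC port negotiated at runtime

# Policy speaks in terms of (source workload, destination service),
# so it survives ports being picked dynamically.
ALLOWED_SERVICE_PAIRS = {
    ("member-server", "rpc-endpoint-mapper"),
    ("member-server", "directory-service"),
}

def allowed(conn: Connection) -> bool:
    # A port-range rule would have to open thousands of ephemeral ports to everyone;
    # keying on the owning service keeps the decision granular per workload.
    return (conn.src_workload, conn.dst_service) in ALLOWED_SERVICE_PAIRS

print(allowed(Connection("member-server", "directory-service", 61422)))  # True
print(allowed(Connection("web-frontend", "directory-service", 61422)))   # False
```

The point of the sketch is not the specific names but the shape of the decision: the enforcement point has to know which process and workload are talking, which is context a port-and-subnet rule simply does not carry.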

Said simply: networks are great for macro-segmentation, but software-centric approaches are required for micro-segmentation. 

The dynamic and distributed data center/cloud world is leaving the client-server network-centric security model behind. It’s easy to change a marketecture but nearly impossible to change an architecture.

In part 2 of Micro-segmentation Misdirection, I cover the difference between Visibility and Application Dependency Mapping for micro-segmentation.
