Part One of Three in a Series on Protecting Virtualized Data Centers
VMworld 2012 wrapped up a couple of weeks ago, and while at the conference I met a number of data center virtualization and application professionals whom I don’t normally run into at our security conferences. In fact, it’s interesting to note that there are now three key groups with very different areas of influence within the virtualized data center – the networking team, the data center server/app team and, of course, the security professionals. The challenge within the data center has shifted from the traditional tradeoff between networking and security, primarily performance versus threat protection, to a new model where the application is king. In this application-centric model, when an application needs to be deployed, the complete framework of networking, storage, and security must be operationalized so that all the pieces required to deliver the application effectively, securely, and at scale work together as a system.
For example, when application “X” is instantiated on a virtual machine (VM), there are several key deployment considerations. From a networking perspective, there may be specific policies that dictate which VLANs the application needs to be placed in. If an application delivery controller (ADC) exists in the network, perhaps there are tweaks to its configuration that will optimize the application. The storage requirements for the application, including backup, recovery and disaster recovery, need to be considered. Finally, security requirements will include setting the appropriate security policies for the application. These can range from safe application enablement policies by application, user and content, to foundational security features such as the types of interfaces, security zones and routing functions that will need to be provisioned.
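To make the idea concrete, here is a minimal sketch of those deployment considerations expressed as one application-centric spec, with a check that every infrastructure section has been addressed before deployment proceeds. All field names and values are illustrative assumptions, not tied to any real orchestration product:

```python
# Hypothetical application-centric deployment spec for "application X".
# Every field name here is an illustrative assumption.
APP_X_SPEC = {
    "app": "application-x",
    "networking": {"vlan": 120, "adc_profile": "http-optimized"},
    "storage": {"backup": "nightly", "dr_site": "secondary-dc"},
    "security": {
        "zone": "dmz",
        "allowed_users": ["app-x-users"],
        "enablement": {"application": "application-x", "content_scan": True},
    },
}

def missing_sections(spec):
    """Return the infrastructure sections a deployment spec still lacks."""
    required = {"networking", "storage", "security"}
    return sorted(required - spec.keys())

print(missing_sections(APP_X_SPEC))  # [] -- all three considerations covered
```

The point of the check is the one made above: the application triggers every networking, storage and security decision, so a spec that omits any of the three is not deployable.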
Every networking, storage and security consideration is triggered by the application being deployed, and at the same time the elements of this ecosystem depend on one another. A good analogy is multiple cog wheels or gears working in tandem. Engineering all the gears of the ecosystem to mesh efficiently is what cloud computing as an operational model is all about.
Yet the biggest challenge is security. As security professionals, we know how painful it can be to implement any sort of policy change on traditional firewalls. The support or trouble ticket you create for the IT department first has to be evaluated and approved. Then you’re hunting around for the right firewall, and the right ports and protocols that the application supports. You’re probably adding a rule on top of all the obscure firewall rules created over the years by IT administrators – rules no one has removed for fear of disabling access to some now non-existent server. So you trudge through countless mind-numbing policy changes, and application X can finally be allowed through your traditional firewall days or weeks after the virtual machine was first instantiated. Not ideal!
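The rule-sprawl problem described above is easy to illustrate. A toy audit like the following, with made-up rules and a made-up host inventory, flags port-based rules whose destination no longer matches any live server, exactly the entries nobody dares delete:

```python
# Illustrative sketch of stale-rule auditing. The rule base and host
# inventory are invented for the example, not drawn from any real product.
RULES = [
    {"id": 17, "dst": "10.0.4.12", "port": 8443, "comment": "legacy app?"},
    {"id": 18, "dst": "10.0.4.20", "port": 443, "comment": "application X"},
]
LIVE_HOSTS = {"10.0.4.20"}  # assume an asset-inventory feed supplies this

# A rule is stale if its destination no longer resolves to a live host.
stale = [r["id"] for r in RULES if r["dst"] not in LIVE_HOSTS]
print(stale)  # [17]
```

In practice nothing this simple exists in a port- and protocol-based rule base, which is why the obscure rules accumulate.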
To fully embrace the benefits of cloud computing as an operational model, the economics dictate that security must keep pace with automated and orchestrated workloads. Cloud computing is all about on-demand access to a common resource pool, with rapid elasticity, self-service and measured usage. Security cannot become a barrier to this objective; therefore, automation and orchestration capabilities are fundamental requirements of any security solution for the virtualized data center or cloud.
There are key differences between automation and orchestration for security. Automation is a set of steps that performs repetitive actions; orchestration ties those automated tasks to a larger business process goal. If you’ve watched the documentary series “How It’s Made” on the Discovery Channel during late-night insomnia bouts, you have probably (perhaps unknowingly) seen the difference played out. Automation reduces the time spent on repetitive tasks: in one episode, the show demonstrates making Hostess Twinkies with a giant automated vat that dispenses precisely measured ingredients, versus manually stirring the ingredients together. Orchestration, on the other hand, is the process of adjusting the number of Twinkies made based on business demand, which in turn adjusts the amount of filling and optimizes the other automated Twinkie-making tasks.
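The Twinkie analogy can be sketched in a few lines: one function is the automated, repeatable task; another is the orchestrator that decides how many times the task runs based on demand. All names and numbers are illustrative:

```python
def mix_batch(units):
    """Automation: one repeatable task with precisely measured ingredients.
    The ounces-per-unit figures are invented for the example."""
    return {"filling_oz": 2 * units, "batter_oz": 5 * units}

def orchestrate(demand, batch_size=100):
    """Orchestration: scale the automated task to meet business demand."""
    batches = -(-demand // batch_size)  # ceiling division
    return [mix_batch(batch_size) for _ in range(batches)]

runs = orchestrate(demand=250)
print(len(runs))  # 3 -- three automated batches to cover demand of 250
```

Note that the orchestrator never touches the ingredient measurements itself; it only decides how often the automated task fires, which is exactly the division of labor described above.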
Selecting a security solution whose features can be easily enabled via tight integration with orchestration software is critical to unlocking the potential of the cloud. There are multiple options for your security offering to integrate with: vendors like BMC and CA, and startups like RightScale and enStratus, deliver management and orchestration across multiple clouds, and virtualization platform vendors like VMware and Microsoft also offer management and orchestration suites. The key for all of these platforms is allowing each role within the data center to configure its own policies while the whole is orchestrated as one. For example, application X’s security policies should continue to be defined by the security admin, while the networking policies should be defined by the networking or infrastructure admin. This means… yes, all three groups will still need to work together to ensure they haven’t automated and orchestrated a set of policies that will lead to an unwanted event. With automation it is virtually impossible to test every possible scenario, and there will be times when cascading failures occur, as Amazon and Google have outlined.
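That role separation can be sketched as policy fragments authored independently and composed by the orchestration layer, which refuses to proceed if any role has not weighed in. This is a hypothetical illustration, not any vendor’s actual API:

```python
def compose_policies(security, networking, storage):
    """Compose per-role policy fragments into one orchestrated deployment.
    Each fragment is authored by its own admin; orchestration fails fast
    if any role's fragment is missing. Purely illustrative."""
    for name, fragment in (("security", security),
                           ("networking", networking),
                           ("storage", storage)):
        if not fragment:
            raise ValueError(f"{name} policy not defined; cannot orchestrate")
    return {"security": security, "networking": networking, "storage": storage}
```

The fail-fast check is the code-level version of the caveat above: orchestration composes what each team defined, but it cannot guarantee the combination is safe, so the three groups still have to review the result together.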
The goal to strive for is simplicity. The example we cited earlier about tweaking port- and protocol-based firewall policies is archaic, complicated, and will inevitably lead to failures. In fact, we’ve already established that port- and protocol-based network security appliances lack visibility into the traffic in the data center, cannot safely enable applications, do not effectively protect against threats, and degrade in performance as more and more features are enabled. Yet the security offerings within today’s virtualization platform architectures are simply virtualized versions of those same port- and protocol-based appliances. Offering a plethora of virtualized security appliances on a virtualization platform is analogous to bolting multiple firewall helpers onto the firewall at the aggregation layer of the physical data center. Security policies become convoluted, policy gaps appear and grow, threats are missed, and inadequate performance forces compromises.
In part 2 of this series, I’ll discuss the security requirements beyond automation and orchestration: what stays the same and what changes as you move toward virtualization and cloud. Specifically, there are new critical characteristics your firewall must exhibit, while other requirements stay the same. In part 3, I’ll address the choice between physical and virtual firewalls. There is a place for each – in fact, each addresses a different set of requirements – yet more often than not, the questions I get about protecting virtualized data centers focus only on virtual firewalls.