Security Experts:

Making Sure Your Customers Aren't Causing Storms in Your Cloud

In Public Cloud Environments, CIOs Can’t Count on Basic Firewalls and Other Mechanisms That Protect the Outer Layer of the Whole Cloud.

A cloud infrastructure, no matter how expansive, must be designed so that customers can't cause issues for other customers. This should go without saying, yet CIOs continue to struggle to keep each entity segregated, and ultimately safe and sound, in its own little (or large) portion of the environment. In a shared infrastructure, just one breach could open the door for hackers to wreak havoc on every entity inside.

At last October's Hacker Halted conference in Miami, Stach & Liu, a security consultancy firm, advised companies to simply not store sensitive data in the cloud. The firm reported that access codes and security keys could be found using a simple Google search. Avoiding cloud infrastructure entirely is a heavy recommendation. The findings in their report are valid and eye-opening, but there are steps that can be taken to protect data once it is inside the cloud. The answer need not be as drastic as avoiding the cloud altogether.

Some cloud vendors have taken up the practice of vetting customers before letting them in, like a bouncer at the door. This is an important first step, but there's still work to do once the well-vetted customers are inside. CIOs simply can't count on the basic firewalls and other mechanisms that protect the outer layer of the whole cloud. To do so would be overly trusting and dangerous, both on the part of the cloud vendor and the organizations that purchase the resources.

Several approaches exist that eliminate sole dependency on these traditional firewalls. One is to segregate the entities/customers that share the cloud into their own isolated environments. Each individual environment should then be protected by its own private VLAN, customer-specific CIDR blocks (IP ranges), or stateful inspection firewalls. Without segregation mechanisms in place, every organization within the environment is wide open.
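The CIDR-based segregation above can be sketched in a few lines. This is an illustrative check, not any provider's actual tooling; the tenant names and address ranges are hypothetical. It verifies that no two tenants were allocated overlapping blocks, and that any source address maps back to exactly one tenant.

```python
import ipaddress

# Hypothetical tenant-to-CIDR allocation (names and ranges are illustrative).
tenant_cidrs = {
    "tenant-a": ipaddress.ip_network("10.10.0.0/24"),
    "tenant-b": ipaddress.ip_network("10.20.0.0/24"),
    "tenant-c": ipaddress.ip_network("10.30.0.0/24"),
}

def cidrs_overlap(cidrs):
    """Return every pair of tenants whose address blocks overlap."""
    names = list(cidrs)
    clashes = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cidrs[a].overlaps(cidrs[b]):
                clashes.append((a, b))
    return clashes

def owner_of(ip, cidrs):
    """Map a source address back to the tenant block(s) it belongs to."""
    addr = ipaddress.ip_address(ip)
    return [name for name, net in cidrs.items() if addr in net]

print(cidrs_overlap(tenant_cidrs))           # [] — no overlapping allocations
print(owner_of("10.20.0.17", tenant_cidrs))  # ['tenant-b']
```

If `cidrs_overlap` ever returns a non-empty list, two tenants share address space and a misrouted packet can no longer be attributed to a single customer — exactly the failure segregation is meant to prevent.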

Additionally, providers need to be able to ensure that modifications to one environment or shared security device don't directly or indirectly affect the security posture of another customer. In a public cloud environment, for instance, firewalls and other security devices are typically shared. With the right equipment and the right functionality, policies can be created and modified so that changes apply only to the single, intended entity/customer. It is also possible to make changes to those devices that affect all customers in the public cloud. Both options can be beneficial, depending on the situation. For instance, by creating a single policy on a shared web application firewall to protect web applications from a zero-day threat, all of the cloud's tenants are instantly protected, regardless of whether or not they know about the particular vulnerability.
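The scoping distinction above — a change that touches one tenant versus a change pushed to everyone behind a shared device — can be modeled simply. This is a sketch of the concept, not any vendor's API; the policy names and tenant list are invented for illustration. The point is that every policy carries an explicit scope, so its blast radius can be checked before it is applied.

```python
SHARED = "*"  # sentinel scope: policy applies to every tenant on the device

class Policy:
    """A firewall policy tagged with the scope it may affect."""
    def __init__(self, name, scope, rule):
        self.name, self.scope, self.rule = name, scope, rule

def affected_tenants(policy, tenants):
    """Compute the blast radius of a policy change before applying it."""
    return list(tenants) if policy.scope == SHARED else [policy.scope]

tenants = ["tenant-a", "tenant-b", "tenant-c"]

# A tenant-scoped change touches exactly one customer...
p1 = Policy("open-8443", "tenant-b", "allow tcp/8443 inbound")
# ...while a zero-day WAF signature is pushed to everyone at once.
p2 = Policy("block-zero-day", SHARED, "drop requests matching signature")

print(affected_tenants(p1, tenants))  # ['tenant-b']
print(affected_tenants(p2, tenants))  # ['tenant-a', 'tenant-b', 'tenant-c']
```

A provider's change-control process can then require that any policy whose computed blast radius is wider than the requesting customer's scope go through extra review.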

Finally, customer-to-customer communication can be a major issue. If you're a cloud vendor, bet on the likelihood that some of your customers are patronizing each other; that's just a natural calculation based on odds. If you're a company that parks any piece of your data set on a public cloud, you should anticipate this as well. Why? Because malicious attackers anticipate it. The answer is to make sure customers can't directly access each other from within the environment. Their traffic literally needs to be routed outside of the cloud infrastructure, then back in. This is the only way you can be sure that everyone accessing infrastructure resources is subject to the same security measures, regardless of origin.

The toughest part of implementing this can be similar to building a house. It's not impossible to add pipes under your home's foundation after the concrete has cured, but it's far more expensive and time-consuming than laying the pipes before you build on top of them, and the foundation will never have the same integrity it did before. Likewise, it's extremely difficult to shoehorn these types of security measures into an established production network without affecting the network's integrity, briefly or even indefinitely. Designing these security measures in and implementing them from the ground up provides a solid foundation to build on.
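The routing requirement — no direct tenant-to-tenant path, all cross-tenant traffic hairpinned through the cloud's edge security stack — can be sketched as a compliance check. This is a deliberately simplified model, not real routing code; the hop names are hypothetical.

```python
EDGE = "edge"  # the cloud's external boundary, where the security stack sits

def route(src_tenant, dst_tenant):
    """Return the hops a compliant flow takes between two tenants."""
    if src_tenant == dst_tenant:
        return [src_tenant]                # intra-tenant traffic stays local
    return [src_tenant, EDGE, dst_tenant]  # hairpin out through the edge, then back in

def is_compliant(hops):
    """Any flow that crosses a tenant boundary must pass through the edge."""
    return len(set(hops)) == 1 or EDGE in hops

print(route("tenant-a", "tenant-b"))                 # ['tenant-a', 'edge', 'tenant-b']
print(is_compliant(["tenant-a", "tenant-b"]))        # False — direct path, no inspection
print(is_compliant(route("tenant-a", "tenant-b")))   # True
```

The `False` case is the one the article warns about: a direct east-west path between customers that never traverses the shared security measures at the boundary.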

Warnings like the one from the Stach & Liu report (PDF) are important in helping the cloud industry understand what threats exist and further standardize security measures. They also help companies know what to ask when talking to cloud vendors. Vendors that create the right policies, enforce segregation, and, as always, continuously monitor traffic can confidently keep every customer safe from each other.

Chris Hinkley is a Senior Security Engineer at FireHost where he maintains and configures network security devices, and develops policies and procedures to secure customer servers and websites. Hinkley has been with FireHost since the company’s inception. In his various roles within the organization, he’s serviced hundreds of customer servers, including Windows and Linux, and overseen the security of hosting environments to meet PCI, HIPAA and other compliance guidelines.