The idea of viewing security from the endpoint out is part of defense-in-depth, or layered security. This has a couple of straightforward advantages. First, external attackers must compromise multiple layers to reach protected systems and data. Second, it helps defend against attacks originating from within the perimeter (think of a classic worm arriving in an email, infecting a single system, and then spreading across the network, safely behind the perimeter). This is in contrast to earlier security models, which generally favored perimeter security, with little segmentation behind the perimeter and little security on the endpoint.
When we consider security in a public cloud, we may be forced to work our way from the endpoint out; it's back to basics. Organizations with the means can use a Virtual Private Cloud, in which the public cloud acts as an extension of their private network. In that way, perimeter security can be recreated in the public cloud, or external traffic can be routed through the existing on-premises perimeter. An organization taking advantage of a Virtual Private Cloud will typically have a large secure connection between the public cloud and the private parts of its network. That connection can be expensive enough to put the option of routing traffic through an existing perimeter, out to workloads running in a public cloud, and back, out of reach.
If, then, external traffic goes directly to public cloud instances, security must be applied to that traffic, and that security is likely to reside in the same public cloud as the instances. Most public cloud offerings include the basics: you can configure a firewall policy that determines which external traffic can access your public cloud instances. Most of the time the rest is up to you (here I’m sticking with a ‘typical’ public cloud offering – there are certainly public cloud offerings geared toward security!).
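To make the default-deny model concrete, here is a minimal sketch in plain Python of how such a firewall policy behaves. The rule set, CIDR ranges, and `is_allowed` helper are all hypothetical illustrations of the concept, not any provider's actual API; real cloud firewalls (e.g., security groups) are configured through the provider's console or tooling.

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list in the style of a cloud firewall policy:
# default-deny, with explicit inbound allow rules.
ALLOW_RULES = [
    {"cidr": "0.0.0.0/0",      "port": 443},  # HTTPS open to the world
    {"cidr": "203.0.113.0/24", "port": 22},   # SSH only from an office range
]

def is_allowed(source_ip: str, port: int) -> bool:
    """Return True if any rule matches; otherwise the traffic is dropped."""
    return any(
        ip_address(source_ip) in ip_network(rule["cidr"]) and port == rule["port"]
        for rule in ALLOW_RULES
    )

print(is_allowed("198.51.100.7", 443))   # True  – HTTPS from anywhere
print(is_allowed("198.51.100.7", 22))    # False – SSH blocked from outside
print(is_allowed("203.0.113.10", 22))    # True  – SSH from the office range
```

The point of the sketch is the posture: anything not explicitly allowed is denied, which is exactly what makes the policy you configure – and everything beyond it – your responsibility.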
When it comes to offerings, Amazon has done a great job with AWS Marketplace. There are plenty of options, especially for perimeter security via UTMs and other appliances. This is because what was an on-premises appliance can be ported to an appliance instance without changing too much within the appliance itself (assuming there aren’t hardware-specific requirements, as with the higher-end dedicated traffic inspection devices). It’s also relatively easy to get the billing to work – if the appliance is running, you pay the hourly rate. If you shut off your instances for a period of time, you also shut off the appliance, and you’re not paying for it.
However, the story doesn’t end there, and for the same reasons it doesn’t end at the perimeter of your on-premises environment. You may have very few instances, or little experience with (or desire for) setting up a virtual appliance. Some folks simply don’t trust the perimeter because they don’t control it – after all, it has been outsourced as part of the cloud service. Others don’t completely trust even their own perimeter and therefore practice defense-in-depth. Whether it’s for practical reasons or best practices, it’s important that each endpoint instance be able to defend itself. In a sense, with public cloud we are back to thinking from the endpoint out.
The wrinkle with endpoint security is, as I pointed out in an earlier column, that the licensing and billing aren’t as clean as with dedicated appliances. As mentioned, with the appliance, it’s either running (and you’re paying hourly) or it’s not. Traditional endpoint security, especially anti-malware, relies on counting endpoints and licensing based on that count. With on-premises systems, it makes sense to pay by the year, perhaps getting as granular as a month. That model doesn’t make sense with public cloud. You need the same “pay-as-you-go” granularity as with the instances themselves.
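A bit of arithmetic shows why the granularity matters. The prices below are made-up illustrations (an assumed flat annual per-endpoint license versus an assumed per-instance-hour security fee), not any vendor's actual rates:

```python
# Hypothetical numbers to illustrate why per-endpoint annual licensing
# breaks down for intermittent cloud workloads.
ANNUAL_LICENSE = 120.00   # assumed flat per-endpoint price, USD/year
METERED_RATE   = 0.02     # assumed per-instance-hour security fee, USD

def metered_cost(hours_per_day: float, days: int = 365) -> float:
    """Yearly cost under pay-as-you-go metering."""
    return round(METERED_RATE * hours_per_day * days, 2)

# A batch instance running 2 hours/day pays far less when metered...
print(metered_cost(2))    # 14.6  – vs. the 120.00 flat license
# ...while an always-on instance can end up paying more than the license.
print(metered_cost(24))   # 175.2
```

The crossover is the point: annual per-endpoint pricing punishes exactly the short-lived, elastic instances that make public cloud attractive in the first place.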
At this point, a quick note is important. There is sometimes a perception that public cloud vendors take responsibility, to one degree or another, for what is running within the endpoint. While it is in their best interest to closely monitor the network (if one of your systems does get infected with a worm, you can bet it will be identified and isolated very quickly), they aren’t responsible for the integrity of the operating system and applications themselves. Whether it’s patching, rights administration, anti-malware, or any other basic endpoint security element, it’s up to the organization using the endpoint, not the cloud provider. If your system is vulnerable, it’s your problem (and providers really don’t appreciate vulnerability scanning over their network – for obvious reasons). A careful read of license and service-level agreements is important so you understand where the cloud provider’s responsibilities end and yours begin.
A good what-if exercise is to ask yourself what you would do to configure and protect a brand-new Windows server on any network. Of course, you’ll carefully control network access. You will be sure to keep on top of patching (operating system and applications), have stringent user policies (don’t run an application as administrator – you wouldn’t do that on your own network, I hope!), and likely install some sort of anti-malware. If you have a multi-tiered application, you’ll isolate the database from the application server, and so on. Really, it’s back to the basics.
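The tier-isolation step of that exercise can be sketched the same way as a firewall policy: give each tier its own inbound rules, so the database accepts traffic only from the application tier's subnet. The tier names, subnets, and `allowed` helper are hypothetical illustrations, not a specific provider's configuration syntax:

```python
from ipaddress import ip_address, ip_network

# Hypothetical per-tier inbound rules for a multi-tiered application:
# the database tier accepts SQL traffic only from the app tier's subnet.
TIER_RULES = {
    "web": [{"cidr": "0.0.0.0/0",   "port": 443}],    # public HTTPS
    "app": [{"cidr": "10.0.1.0/24", "port": 8080}],   # web-tier subnet only
    "db":  [{"cidr": "10.0.2.0/24", "port": 1433}],   # app-tier subnet only
}

def allowed(tier: str, source_ip: str, port: int) -> bool:
    """Default-deny: traffic reaches a tier only if one of its rules matches."""
    return any(
        ip_address(source_ip) in ip_network(r["cidr"]) and port == r["port"]
        for r in TIER_RULES[tier]
    )

print(allowed("db", "10.0.2.5", 1433))      # True  – app tier reaches the DB
print(allowed("db", "198.51.100.7", 1433))  # False – internet never reaches the DB
```

Even if an attacker compromises the web tier, the database still only answers its immediate neighbor – the same layered thinking you'd apply on your own network.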
In fact, the multi-layered, zero-trust (in Forrester’s parlance), endpoint-out model is what you should be using within your own datacenter. Public cloud is forcing organizations to follow this model because the perceived risk is higher when the infrastructure and perimeter are in someone else’s hands. If the lessons learned, practices, and processes from public cloud are retrofitted to on-premises assets, then all the better.