Why Traditional Endpoint Security Will Burst Your Cloud Bubble

What is it about public cloud that breaks traditional endpoint security? First, let’s consider how traditional endpoint anti-malware ended up where it is today. In the early days, anti-malware was applied computer-by-computer. We still do that, for the most part, with our home computers today. As systems became more connected and the concept of datacenters grew, the end-user operating system of the age made the leap to servers, much to the chagrin of Novell. As x86 computing became prevalent in datacenters, and networking (and internetworking) became commonplace, worms, Trojans, and viruses started screaming across and between computing estates.

With the idea of perimeters (inside versus the rest of the world), and organizations taking computing projects from isolated islands to company-wide efforts, the concept of centralized management emerged. Anti-virus vendors created neat little management consoles so organizations could monitor the status of their estates and apply updates and upgrades across the environment. With centralized management, conventional wisdom shifted to charging for the total number of endpoints that could be protected from a management console, with the cost of the management piece built in.

Since most datacenters were fairly static, an organization could do a reasonable job of predicting the number of endpoints that needed protection a year or three in advance. Licensing met this demand, with the tweak of ‘true-up’ to accommodate growth. Because adding endpoints meant adding new hardware, with one operating system on each machine, wild fluctuations in the number of protected endpoints were rare.

This worked for quite some time. Then along came x86 virtualization, and soon after, public cloud computing. Whether private or public, endpoint security management consoles have broken under the weight of cloud.

The root of the problem is pretty simple: legacy management is built for legacy environments, which tend to be fairly static. Adding new systems and removing old ones took some effort. Virtualization in the datacenter changed that drastically. In a relatively short amount of time, servers were being added and removed daily. This was a boon to development and testing teams, who could easily instantiate and destroy tens, hundreds, or thousands of VMs in a day.

Leveraging existing image-centric legacy technology, vendors figured out how to accept new VMs spawned from a template. At the other end of the VM lifecycle, however, dealing with long-gone VMs has been an ongoing problem. Here, legacy heuristics, roughly ‘if I haven’t heard from the machine in a week or so, I’ll delete it,’ became problematic. This could be especially tricky if the same management console was used for frequently at-large laptops. Some vendors have sorted out this problem by tapping into vCenter or XenServer to find out whether a VM is really gone, paused, or stopped.
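The difference between the two approaches can be sketched in a few lines (a minimal Python sketch; the `VmState` values and function names are hypothetical stand-ins for what a vCenter or XenServer integration would actually return):

```python
from datetime import datetime, timedelta
from enum import Enum

class VmState(Enum):
    # Hypothetical states a hypervisor integration might report
    RUNNING = "running"
    PAUSED = "paused"
    STOPPED = "stopped"
    DELETED = "deleted"   # gone from the hypervisor's inventory entirely

def legacy_should_remove(last_seen: datetime, now: datetime) -> bool:
    """The old heuristic: silent for a week or so means gone."""
    return now - last_seen > timedelta(days=7)

def hypervisor_aware_should_remove(state: VmState) -> bool:
    """Ask the hypervisor instead of guessing from agent silence."""
    return state == VmState.DELETED

now = datetime(2013, 6, 1)
last_seen = now - timedelta(days=10)

# A paused VM that has been silent for ten days: the legacy heuristic
# deletes it, while the hypervisor-aware check correctly keeps it.
print(legacy_should_remove(last_seen, now))            # True
print(hypervisor_aware_should_remove(VmState.PAUSED))  # False
```

The same state-aware check also stops a roaming laptop problem in reverse: a machine that is merely out of contact is no longer indistinguishable from one that was destroyed.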

Licensing also became problematic with datacenter virtualization. One day an organization may have 1,000 VMs, 1,200 the next, and 900 the day after. The odd vendor solved this by charging based on computing power, roughly per-CPU or per-hypervisor. This has worked across servers and end-user VMs (VDI), since per-server security has traditionally cost more than per-desktop/laptop.

Putting performance issues aside, and concentrating on management consoles and licensing, vendors have declared ‘problem solved’ via workarounds applied to legacy management consoles. Okay, performance issues can’t be ignored when it comes to legacy endpoint security, but I have only so many column inches!

Somewhere along the line, as organizations were virtualizing their datacenters, some likely, and some seemingly unlikely (an interesting video and post is here), IT people started not just virtualizing their own datacenters, but also renting per-VM bits of them to others. Roughly speaking, that’s public cloud (aka Infrastructure as a Service). This is where legacy management consoles, and their built-in licensing schemes, have downright fallen over.

Public cloud takes the highly dynamic nature of a virtualized datacenter and adds a twist: you pay for your VMs by the hour. This has been yet another boon for development, testing, support, and many other IT teams. Moving to a virtualized datacenter gave them the ability to instantiate and destroy VMs all but instantly, but they still needed hardware to run the VMs on. They had greater flexibility, and could squeeze every CPU cycle out of the hardware they had, but they didn’t actually have any more horsepower (especially important if you’re load testing your latest management console, for example).

With public cloud, teams don’t have to pay high capital costs up front for hardware they really need only once per release cycle. Instead, they pay for the peaks and put away their credit cards during the valleys. Legacy anti-malware consoles don’t ‘get it,’ though. If you need to run endpoint security on those cloud instances, you’re in a bind. With a legacy management console, you can certainly pay for the maximum number of instances you anticipate needing, but you’ll have to pay for a year. With public cloud, per-CPU licensing is meaningless, so the private-virtualization workaround doesn’t fix things.
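The mismatch is easy to put in numbers. A back-of-the-envelope sketch (all prices are made up purely for illustration):

```python
# Hypothetical figures: an annual per-endpoint license versus
# per-instance-hour billing for the same bursty workload.
annual_license_per_endpoint = 30.00  # assumed yearly price per protected endpoint
hourly_rate_per_instance = 0.01      # assumed per-instance-hour security price

peak_instances = 1000  # the burst you need once per release cycle
burst_hours = 72       # how long each burst actually lasts
bursts_per_year = 4

# Legacy model: license the anticipated peak for the whole year, used or not.
legacy_cost = peak_instances * annual_license_per_endpoint

# Service model: pay only for the instance-hours actually consumed.
service_cost = peak_instances * burst_hours * bursts_per_year * hourly_rate_per_instance

print(legacy_cost)   # 30000.0
print(service_cost)  # 2880.0
```

The exact figures don’t matter; the point is that annual peak-based licensing charges for the valleys, which is precisely what the cloud customer moved to the cloud to avoid paying for.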

Many organizations today are leveraging public cloud. Most are either building new projects on, or transitioning appropriate workloads to, public cloud; few are at or near one-hundred percent public cloud usage (start-ups, especially IT start-ups and Software-as-a-Service providers, are particularly well-suited to public cloud). These organizations also have internal virtualized estates, so imagine the problem of applying endpoint security to both public and private VMs.

A good start is to look for a vendor who provides both virtualization and public-cloud solutions. For internal virtualization, look for something that is integrated with whichever virtualization management you use (probably vCenter or XenServer today) and has flexible licensing (per-CPU, or per-VM for servers and end-users). Look for a vendor who also offers public-cloud endpoint security as a service (licensed per-instance, per-hour) and integrated with whatever APIs your public cloud vendor makes available (think of it as a vCenter in-the-cloud; it’s a must!). Ideally, internal and external computing (cloud, traditional, mobile, internal virtualized) will be wrapped into a single console, but I’m not aware of a vendor who has yet bridged that particularly challenging divide.
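Why does the cloud API integration matter so much? For the same reason vCenter integration does on-premise: it lets the console tell ‘stopped’ from ‘gone’. A minimal sketch, using the instance-state names EC2 actually reports (the function itself is hypothetical):

```python
# EC2 reports instance states such as "pending", "running", "stopping",
# "stopped", "shutting-down", and "terminated". Only the last two mean the
# endpoint is actually going away; a "stopped" instance may come back later.
GONE_STATES = {"shutting-down", "terminated"}

def should_deprovision(cloud_state: str) -> bool:
    """Decide whether to drop a protected endpoint from the console,
    based on the state the cloud API reports rather than agent silence."""
    return cloud_state in GONE_STATES

print(should_deprovision("stopped"))     # False: keep the endpoint record
print(should_deprovision("terminated"))  # True: safe to remove and stop billing
```

A console that makes this call against the provider’s API, instead of a last-contact timer, can also stop metering an instance the moment it is terminated, which is what makes per-hour licensing honest.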

Good hunting!

Shaun Donaldson is Director of Alliances at Bitdefender Enterprise. Shaun is responsible for supporting relationships with technology alliance partners and large enterprise customers. Before joining Bitdefender, Mr. Donaldson was involved in various technology alliances, enterprise sales and marketing positions within the IT security industry, including Trend Micro, Entrust, Bell Security Solutions and Third Brigade.