Organizations Need to Prepare for the Inescapable Future of IT
The landslide shift to the cloud has continued at a rapid pace over the last year. According to the SANS Institute, about 70 percent of companies now use cloud-based architectures and/or applications, and Cisco predicts that over half (56 percent) of all cloud workloads will run in the public cloud by 2019. IT has evidently accepted that the cloud is here to stay despite long-standing fears about security, limited visibility, and loss of control, a sign that elastic flexibility and cost savings win in the end.
However, the move to the public cloud can be cause for concern. Blind trust should never be acceptable, yet it is common with public clouds because of their inherent lack of transparency. Businesses need to be able to monitor their public cloud environments with the same rigor they apply to their on-premises and private cloud environments.
Unfortunately, this is not always the case. In a survey of businesses about their cloud environments, only 37 percent monitored their virtualized environments to the same standard as their physical environments. That is remarkable, given that 67 percent of respondents deployed business-critical applications in their public cloud. Communication, collaboration, storage, payroll and human resources applications, and even backup and recovery services are being moved to the cloud, and they cannot go unprotected as they process and transmit sensitive information.
Here are several ways current cloud strategies go wrong, along with tips that can help curb the problem before it is too late.
Cloud Strategies Can Inherently Foster Risk
1. Do You Know How Far Your Blindness Goes? – Collaboration services, productivity software, email, large file transfers and customer relationship management (CRM) systems are the top workloads moved to the cloud. Each of these applications processes or stores sensitive data within the cloud environment, increasing risk because of the lack of visibility. Has IT assessed how many cloud-based applications are in use, how data flows off-premises, and how data is transferred?
2. How Exposed Are Your Data and Compliance? – The workloads mentioned above contain business intelligence, financial data, employee records and customers’ personal information, all of which are often processed or stored in the cloud. The lack of visibility into these workloads makes compliance a major concern under financial, healthcare and other regulations.
3. What Is Being Monitored and Tested? – Oftentimes, parts of an organization’s core infrastructure run in the cloud, such as VoIP phone systems or CRM/HRM/operations automation. This makes it harder to monitor them effectively, ensure availability, and provide SLAs on performance. The ability to test how these systems function, how they behave within one specific organization’s architecture, or how to monitor them in operation is very limited. An end-to-end assessment of what IT can deliver on a system-level SLA is essential.
4. Do You Understand the Limits of Multi-Tenant Environments? – In a private cloud, it is easy to monitor each tenant, as organizations can see what is theirs. In a public cloud, the SaaS provider does not want any tenant accessing other tenants’ information, and vice versa. Depending on how the cloud service provider addresses the confidentiality, integrity and availability of tenant workloads, this can increase an organization’s attack surface and put sensitive customer data, compliance, and customers’ SLAs at risk.
Getting Ahead of the Problem
Until cloud vendors figure out how to safely allow external monitoring and security, there are steps within an enterprise’s control that will improve security:
1. Extract and copy traffic of interest from the cloud for more detailed inspection. This can be done easily with a cloud TAP or packet capture agent. The key is to ensure the cloud TAP or agent does not degrade cloud application performance.
2. Gaining access to the traffic is the first step; the next is knowing where to send it. Tunneling the cloud data back to the enterprise allows cloud traffic to be treated just like on-premises traffic. Once there, it is highly recommended to use a network packet broker (NPB) to aggregate and intelligently distribute the traffic. Alternatively, the packet processing capability can be virtualized and implemented in the cloud.
3. Being able to identify traffic by application source lets organizations make better choices about monitoring and security. Without this capability, all traffic must go to all tools, which makes scaling more expensive. An intelligent cloud packet processing agent can perform this classification as part of its function.
4. Each cloud application is an independent entity with its own SLA, and a ‘system’ is a product of its parts: when every component must be available, end-to-end availability is the product of each provider’s availability. When calculating the total customer SLA, include the SLA of each provider, and when opting for the cloud, know and actively monitor each cloud SLA.
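To illustrate step 3, here is a minimal sketch of application-aware traffic classification, assuming flow records have already been captured (for example, by a cloud TAP or capture agent). The record format, port-to-application mapping, and function names are illustrative only, not any particular vendor's API.

```python
# Hypothetical port-to-application map; real agents use deeper inspection.
WELL_KNOWN_PORTS = {
    443: "https",
    25: "smtp",
    3306: "mysql",
    5060: "voip-sip",
}

def classify(record):
    """Label a captured flow record by likely application, else 'unknown'."""
    return WELL_KNOWN_PORTS.get(record["dst_port"], "unknown")

def route_to_tools(records):
    """Group flows by application so each monitoring tool receives only
    the traffic it needs, rather than every tool seeing all traffic."""
    buckets = {}
    for rec in records:
        buckets.setdefault(classify(rec), []).append(rec)
    return buckets

flows = [
    {"src": "10.0.1.5", "dst_port": 443},
    {"src": "10.0.1.9", "dst_port": 5060},
    {"src": "10.0.2.3", "dst_port": 8080},
]
print(route_to_tools(flows))
```

Filtering at the classification stage is what keeps tool costs flat as traffic grows: each tool scales with its own application's volume, not with total cloud traffic.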
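The SLA arithmetic in step 4 can be sketched as follows, assuming the services form a serial dependency chain (every component must be up for the system to work). The provider names and availability figures are illustrative.

```python
def composite_availability(slas):
    """End-to-end availability is the product of each component's
    availability when all components must be up simultaneously."""
    total = 1.0
    for availability in slas.values():
        total *= availability
    return total

# Illustrative per-provider SLAs, expressed as availability fractions.
slas = {
    "cloud-voip": 0.9995,
    "crm-saas": 0.999,
    "network-tunnel": 0.9999,
}

print(f"Composite SLA: {composite_availability(slas):.4%}")
```

Note that the composite figure is always lower than the weakest individual SLA, which is why a customer-facing SLA must be derived from every provider in the chain rather than quoted from any single one.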
The move to the public cloud is inevitable. However, to fully realize its benefits, it is vital to apply the same due diligence to cloud-based infrastructure as to a physical network. Organizations need to prepare for the inescapable future of IT.