The shift to the cloud is well underway. The RightScale 2019 State of the Cloud Report finds that 84 percent of respondents have a multicloud strategy, with public cloud adoption at 91 percent.
It isn’t uncommon for an enterprise to have multiple data centers, several remote locations with servers or compute resources, and one or more public cloud providers. Businesses gain flexibility, agility, and cost savings, but they also face new performance and security challenges as applications and workloads move to the cloud.
Modern applications are dynamic and modular, built on containerization, microservices, and workload mobility technologies, with communication patterns between application components constantly changing. With agile methodologies, application development occurs continuously, generating frequent software updates that can increase the risk of exposure to software vulnerabilities.
Traffic patterns are also evolving. Most of today’s data center traffic is east-west, where there is typically little to no segmentation. These trends open up new attack vectors and obscure the visibility needed to protect workloads.
To deal with the challenge, organizations use a range of point products for different use cases. For example, they may use separate tools for application discovery, policy enforcement and micro-segmentation, compliance and audit, security forensics, simulation, software vulnerability detection, and process behavior monitoring.
IDC’s 2018 Platform Solution Survey finds that, on average, organizations use three or four point products to address workload integrity and protection, with some geographies reporting five or more. However, consistent with Cisco’s CISO Benchmark Study 2019, which found that organizations are consolidating security vendors, IDC also finds growing enterprise interest in a more comprehensive and systematic approach. In fact, 72 percent of respondents indicated they were very interested in a platform-based approach. The most anticipated benefits were operational efficiencies, cost savings, and a “network effect,” where the value of the platform multiplies as it is used for more use cases and, sometimes, by more users.
So how do you get started with a holistic approach to workload protection? If you’ve already implemented a segmentation program or have started a Zero Trust journey, you probably have a head start. As I’ve discussed previously, protecting workloads begins with visibility. You need a map of all of your applications, the services they are running, and their interdependencies to successfully secure them. You also need to understand the users and devices that require access to these services and to what extent, so you can create granular policies for each application to enable access and block everything else. With this information, you can baseline the normal behavior of your workloads which allows you to quickly identify anomalies and suspicious behaviors.
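The approach above, mapping each application’s dependencies, allowing only what was observed, and denying everything else, can be sketched in a few lines of Python. All workload names, ports, and function names here are hypothetical, for illustration only:

```python
# A minimal sketch of deriving a default-deny, per-application policy
# from an observed application dependency map (hypothetical data).

# Flows seen during application discovery: (source, destination, port)
observed_flows = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def build_policy(flows):
    """Allow exactly the observed dependencies; everything else is denied."""
    return {"allow": set(flows)}

def is_allowed(policy, src, dst, port):
    # Default deny: a flow passes only if it was explicitly allow-listed.
    return (src, dst, port) in policy["allow"]

policy = build_policy(observed_flows)
print(is_allowed(policy, "web-tier", "app-tier", 8443))  # True
print(is_allowed(policy, "web-tier", "db-tier", 5432))   # False: never observed
```

A real platform derives these rules automatically from telemetry, but the principle is the same: the discovered dependency map becomes the allow list, and anything outside it is blocked.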
Let’s look at a common scenario. Service disruptions or a dip in application performance can happen for a variety of reasons. Troubleshooting these issues can take hours or days and involve multiple IT and security teams. The culprit can be an incorrect DNS setting, a failing front-end web server, a user-generated query caught in a loop that slows down the entire environment, malicious activity, or something else entirely. Sometimes the root cause is never identified, which leaves the distinct possibility that the problem could happen again.
With visibility into real-time and historical application connectivity, dependencies, and data flows across a hybrid data center environment, you can quickly see exactly what is happening and why, so you can take corrective action. You can identify whether performance issues are attributable to the application or the network, so you can remediate faster and improve availability and performance. The ability to continuously and automatically scour traffic and detect anomalies allows you to quickly identify malicious behavior and malware for proactive mitigation, such as quarantining services when vulnerabilities are detected and blocking communication in the case of policy violations. Instead of having to manually piece together information from disparate point solutions, a full lifecycle approach gives you visibility across your data centers, so you can protect any workload, anywhere.
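The core of that anomaly detection is comparing live traffic against a learned baseline. A deliberately simplified sketch, with hypothetical workload names and flows, might look like this:

```python
# Hypothetical sketch: flag flows that deviate from a learned baseline.
from collections import defaultdict

def learn_baseline(flows):
    """Map each source workload to the (destination, port) pairs seen
    during the baselining period."""
    baseline = defaultdict(set)
    for src, dst, port in flows:
        baseline[src].add((dst, port))
    return baseline

def detect_anomalies(baseline, new_flows):
    """Return flows whose (destination, port) was never seen for that source."""
    return [f for f in new_flows if (f[1], f[2]) not in baseline[f[0]]]

history = [("app-tier", "db-tier", 5432), ("web-tier", "app-tier", 8443)]
baseline = learn_baseline(history)

# A workload suddenly talking to an unknown host on port 4444 is suspicious.
live = [("app-tier", "db-tier", 5432), ("app-tier", "203.0.113.9", 4444)]
print(detect_anomalies(baseline, live))
```

Production systems use far richer behavioral models than a set lookup, of course, but the workflow is the same: baseline first, then continuously compare and alert on deviations.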
When evaluating holistic, workload protection solutions, there are three aspects to consider.
1. Input and telemetry. The solution must collect telemetry and other context information from sensors deployed across your on-premises and cloud environments, as well as from third-party sources. These might include load balancers and location information systems, as well as IP address management (IPAM), configuration management database (CMDB), and Security Information and Event Management (SIEM) systems.
2. Big data analysis. The solution must be able to analyze the data using unsupervised machine learning and behavior analysis and store massive volumes of data and results for future reference and ongoing analysis.
3. Data access. You and your teams need to be able to access, search, query, and manipulate the data with third-party tools you already use or with applications you have created.
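To make the three aspects concrete, here is a toy end-to-end sketch: ingest telemetry records, analyze them for outliers, and expose the stored data to queries. The class, its methods, and the flagging rule (a simple z-score on flow size) are all illustrative assumptions, not any vendor’s actual API:

```python
# Hypothetical sketch of the three aspects: ingestion, analysis, access.
import statistics

class WorkloadTelemetryStore:
    def __init__(self):
        self.records = []  # raw telemetry retained for ongoing analysis

    def ingest(self, record):
        """1. Input: accept a telemetry record from any sensor or third party."""
        self.records.append(record)

    def outliers(self, threshold=1.5):
        """2. Analysis: flag flows whose byte count deviates sharply
        from the mean (z-score above the threshold)."""
        sizes = [r["bytes"] for r in self.records]
        mean, stdev = statistics.mean(sizes), statistics.pstdev(sizes)
        return [r for r in self.records
                if stdev and abs(r["bytes"] - mean) / stdev > threshold]

    def query(self, **filters):
        """3. Access: let teams and third-party tools search the stored data."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in filters.items())]

store = WorkloadTelemetryStore()
for rec in [
    {"src": "web-1", "dst": "app-1", "bytes": 1200},
    {"src": "web-2", "dst": "app-1", "bytes": 1100},
    {"src": "web-1", "dst": "app-1", "bytes": 1300},
    {"src": "db-1",  "dst": "198.51.100.7", "bytes": 900000},  # exfil-sized flow
]:
    store.ingest(rec)

print(store.outliers())          # only the 900000-byte flow stands out
print(store.query(src="web-1"))  # the two records from web-1
```

Real platforms replace the z-score with unsupervised machine learning and behavior analysis over massive data volumes, but the shape of the pipeline, collect, analyze, expose, is what to evaluate.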
Applications are the lifeblood of digital business, but they are becoming increasingly difficult to protect in a hybrid cloud environment. With a holistic approach you can enable efficient segmentation across your infrastructure, identify anomalies faster by detecting process behavior deviations, and quickly reduce your attack surface by correlating known vulnerabilities with installed software and taking action.