In today’s complex environments that comprise on-premise and cloud systems, enterprises need the right security data at the right time to ensure that systems remain secure and within regulatory compliance.
Enterprise IT infrastructures are in greater flux today than ever before. For more than 20 years, networks remained (relatively) the same. They consisted of servers that fed data to employee desktops and notebooks. Today, that status quo, as we all know, is being rattled by emerging technologies such as virtualization, cloud computing, and consumerized devices that give users access to data around the clock.
It goes without saying that security needs to keep up. Unfortunately, that’s easier said than done. What enterprises need is a way to manage risk across their physical, virtualized, and cloud environments. That requires information about user identities that transcends on-premise devices and software, virtualized images, and new cloud environments. It also demands that activity data – what users are doing across all of these systems – be available and understood.
This requires two classes of information from all of these environments: security event data, and identity and access data. The identity data is necessary to understand who is accessing what resources, and from where. But identity information alone isn’t enough to fully understand an enterprise’s risk posture. Security managers also need visibility into security-related information from firewalls, log management tools, vulnerability scanners, and other systems.
When put to use, these data can be turned into information that details the risk faced by the entire IT environment. For instance, they can reveal patterns of application access across on-premise and cloud-based systems. And, over time, enterprises can normalize these data to establish a baseline for how applications, network segments, and various clouds should look. With that baseline in hand, it becomes possible to flag outlier events and potentially suspicious behavior. For instance, if for the past two years Alice only logged into her applications from the corporate LAN between 8 am and 5 pm, we’d know something was potentially amiss if she suddenly started accessing her applications at 11 pm from across the country. It would be doubly suspicious if she unexpectedly started trying to log in to applications to which she doesn’t have access. In fact, the chances that this isn’t Alice – but in reality an attacker – increase with every unusual activity spotted.
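To make the idea concrete, here is a toy sketch in Python of scoring a single login event against a learned per-user baseline. The profile structure, field names, and scoring weights are all hypothetical – illustrative of the technique, not drawn from any particular SIEM product:

```python
# Hypothetical per-user baseline, learned from historical log data.
# Field names here are illustrative only.
BASELINE = {
    "alice": {
        "hours": range(8, 17),          # normally logs in 8 am - 5 pm
        "networks": {"corporate-lan"},  # normally from the corporate LAN
        "apps": {"crm", "email"},       # applications she is authorized for
    }
}

def anomaly_score(user, event):
    """Count how many ways an event deviates from the user's baseline.

    Each deviation raises the score; the more unusual activity we
    spot, the greater the chance this isn't really the user.
    """
    profile = BASELINE.get(user)
    if profile is None:
        return 3  # no baseline at all: treat as highly suspicious
    score = 0
    if event["hour"] not in profile["hours"]:
        score += 1  # off-hours access
    if event["network"] not in profile["networks"]:
        score += 1  # unfamiliar network or location
    if event["app"] not in profile["apps"]:
        score += 1  # app the user doesn't have access to
    return score

# Alice's usual 9 am login from the LAN scores 0; an 11 pm login from
# an unfamiliar network to an unauthorized app racks up deviations.
normal = {"hour": 9, "network": "corporate-lan", "app": "crm"}
odd = {"hour": 23, "network": "hotel-wifi", "app": "payroll"}
```

A real deployment would learn these profiles automatically from months of event data rather than hard-coding them, but the scoring logic – accumulate deviations from the baseline – is the same.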
When networks were simpler than they are today, it was much more feasible to gather these data and gain a base understanding of the environment. Critical applications lived on a handful of servers. Users accessed enterprise resources through a local area network or tightly controlled virtual private networks. Not anymore. With virtualization, many servers run on a single host. With cloud, data and applications are spread across public clouds and SaaS-delivered applications. And with mobile devices, users are accessing these data from anywhere – home, remote offices, airport and coffee shop hotspots – anywhere they can find a connection.
This makes the ability to identify potentially malicious behavior more difficult, as well as more important, than ever. It’s more difficult because the data are spread throughout so many different systems. And it’s more important because all of this complexity makes it easier for attackers to find openings to slither on through.
Frankly, because of the amount of data and the speed at which it moves, the only way to solve that challenge is through a Security Information and Event Management (SIEM) program. Once all of the data are collected by the SIEM, environmental baselines can be built across physical, virtual, and cloud environments for rapid detection of anomalous activities. And, once baselines are built, they can be customized for the business role the systems play. For instance, systems that support credit card payments can be tuned for the Payment Card Industry Data Security Standard (PCI DSS). If systems handle financial reporting data, the baseline and anomaly detection can be tuned for Sarbanes-Oxley.
For instance, with PCI DSS, the system would learn how often reports are pulled, and which systems and people request them. The same would be true for financial data governed by Sarbanes-Oxley. Then, when something out of the ordinary happens, administrators and security managers can be notified – often in real time – to investigate. In certain circumstances, automated security controls could even be put into place.
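As a rough illustration of that kind of frequency baseline, the sketch below flags a day whose report-pull count sits far above the historical average. The sample counts and the three-standard-deviation threshold are hypothetical; a real SIEM would apply a tuned threshold per system and per role:

```python
import statistics

# Illustrative daily counts of compliance-report pulls (made-up data).
daily_pulls = [4, 5, 3, 6, 4, 5, 4, 30]  # the last value is a spike

def is_anomalous(history, today, k=3.0):
    """Flag today's count if it sits more than k standard deviations
    above the historical mean - a simple trending baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return today > mean + k * stdev

# Treat all but the most recent day as history, then test today.
history, today = daily_pulls[:-1], daily_pulls[-1]
```

Here the history averages roughly four to five pulls a day, so thirty pulls trips the threshold and would trigger a notification, while a fifth pull on a normal day would not.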
In this way, by trending all of this security event data, organizations move beyond simple event correlation (where you have to know what you are looking for to find it) to rapidly identifying events that occur out of the norm. These could be the signs of something bad going down, or actions that are out of security or regulatory compliance policy. And (as we will cover in our next article on this subject), this environmental baseline and ability to trend data makes it possible to model the environment and even predict when things might go wrong.
Yes, enterprise systems are in a state of flux. With the rapid adoption of virtualization and the increased move toward cloud, that’s not likely to change any time soon. However, organizations can’t let their security and regulatory compliance demands slip. Only by understanding how a system should act and be protected – and then carefully monitoring event data – can you be assured that those systems are as secure as they’re meant to be, and within regulatory compliance mandates.