
Enterprise Security’s Operations Problem – Panic in the Data Center

Every time we go through one of these panic-patch cycles – think Heartbleed and Shellshock – the security industry laments the difficulty of the whole process. It’s a virtual certainty that there will be complaints about how “bugs like this” will be around forever because they’re difficult to patch. I’ve yet to see anyone truly address the root cause of this, and I believe I know why.

The issue is that when we go through these scenarios, the operational problems we encounter are rarely exclusively security related. Patching has evolved from a security responsibility into more of an IT operations issue, and while security teams certainly play a part, that role is rapidly changing into a governance function. Before we apply patches we must stage test systems, then implement and validate those test systems (which potentially includes regression testing) before we deploy to production. The responsibility for those tasks is split among IT operations, security, and application teams. It’s a complex symphony, and when the parts don’t work together perfectly, the whole thing doesn’t work.

The situation is actually a little more complicated than that. If I’m honest, the biggest enterprise operational gap exposed by panic-patch cycles like this is asset management. Think about it. What was the first thing 80 percent or more of organizations did when they heard about the Heartbleed bug? Answer: they scanned their environment to see if (and where) they had a problem. Ladies and Gentlemen of the jury, this is not a good strategy.

Right about now you’re thinking to yourself that this is yet another one of my posts about why Information Technology Infrastructure Library (ITIL) fundamentals are so important if anyone in enterprise security is going to be successful at all. And you’re absolutely correct. But fundamentals are hard for many organizations, apparently, and very few are actually any good at them. The easy out is to buy a blinking set of boxes or some new software that will magically take care of these problems for you. Except that it doesn’t work that way because of, well, humans.

Consider this. When Heartbleed was announced, and subsequently the Bash bug, a number of security community contributors coded their own scanning tools and posted them to GitHub for everyone to use. That was a noble effort and worthy of applause; however, it put a spotlight on a big knowledge gap. Then solution vendors released plug-ins and additions for identifying the vulnerabilities through their vulnerability scanners. What troubles me is not that these were released, or even that they were necessary. That’s all great. What troubles me is that these tools were the primary mechanism many enterprise security teams used to identify where the vulnerable systems were on their networks.
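For a flavor of what those one-off efforts looked like, here is a minimal, host-local sketch in Python. It is illustrative only: the version list and the bash probe below are simplified assumptions, not the community tools themselves and certainly not a production scanner (real checks also had to account for vendor backports that fixed the flaw without changing the version string).

```python
#!/usr/bin/env python3
"""Illustrative only: a simplified, host-local take on the one-off
Heartbleed/Shellshock checks that circulated. Not a production scanner."""

import os
import subprocess

# OpenSSL 1.0.1 through 1.0.1f shipped the vulnerable heartbeat code
# (simplified; vendor backports can make the version string misleading).
HEARTBLEED_VERSIONS = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
                       "1.0.1d", "1.0.1e", "1.0.1f"}


def openssl_possibly_vulnerable() -> bool:
    """Compare the locally installed OpenSSL version string to the known-bad list."""
    out = subprocess.run(["openssl", "version"],
                         capture_output=True, text=True, check=True).stdout
    # Typical output: "OpenSSL 1.0.1e 11 Feb 2013"
    return out.split()[1] in HEARTBLEED_VERSIONS


def bash_possibly_vulnerable() -> bool:
    """Run the widely shared Shellshock probe: a function definition in an
    environment variable with a trailing command that vulnerable bash executes."""
    env = dict(os.environ, x="() { :;}; echo SHELLSHOCKED")
    out = subprocess.run(["bash", "-c", "echo probe"],
                         env=env, capture_output=True, text=True).stdout
    return "SHELLSHOCKED" in out


if __name__ == "__main__":
    print("OpenSSL possibly vulnerable to Heartbleed:", openssl_possibly_vulnerable())
    print("bash possibly vulnerable to Shellshock:", bash_possibly_vulnerable())
```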

That’s absolutely crazy!

Enterprises that don’t operationalize configuration and asset management are doomed to repeat the cycle of lost productivity, frustration and panic. The panic that ensues when an organization identifies a major vulnerability and then has no choice but to find (or build) a tool to scan its environment for the vulnerable assets should not become routine. Wouldn’t it be amazing if the primary mode of identifying vulnerable assets was an asset database complete and accurate enough that a team could simply dive in and find, say, 75 percent of the known systems that have OpenSSL on them? After patching those systems, they could then break out the scanners, find the unknown vulnerable systems, and add them into the asset and configuration management system. How crazy is that?
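To make that asset-database-first workflow concrete, here is a minimal sketch that assumes a hypothetical SQLite inventory, with hosts and installed_packages tables standing in for whatever CMDB an organization actually runs. The point is the order of operations: query what you already know, push those systems through normal change management, and only then reach for the scanners to catch what the database missed.

```python
"""Sketch of the 'patch what you know first' workflow. The SQLite file and
the hosts/installed_packages schema are hypothetical stand-ins for a real
asset and configuration management database."""

import sqlite3


def known_openssl_hosts(db_path):
    """Return (hostname, openssl_version) for every asset the inventory
    already knows is running OpenSSL."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            """
            SELECT h.hostname, p.version
            FROM hosts h
            JOIN installed_packages p ON p.host_id = h.id
            WHERE p.name = 'openssl'
            ORDER BY h.hostname
            """
        ).fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    # Step 1: work the known inventory through normal change management.
    for hostname, version in known_openssl_hosts("asset_inventory.db"):
        print(f"{hostname}: openssl {version} -> schedule for patching")
    # Step 2 (not shown): scan for hosts the database does not know about,
    # patch them, and add them back into the asset/configuration system.
```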

The reason this is important is that we are talking about incredibly high costs to productivity when security has to drop everything to go hunting and one-off patching. There are other costs, too, such as the potential downtime induced when a one-off patch causes a system to fail. The complexity escalates quickly, but I think you get the point.

Frankly, I’m shell-shocked (pun intended) that enterprise security teams aren’t focusing more on ITIL fundamentals and on asset and configuration management within their organizations. Managing change, or more accurately the risk that results from change, is so core to the security mindset that it surprises me the industry and community at large are not tackling this problem more aggressively.

With that in mind, ask yourself how your organization fares every time another Heartbleed or Bash bug hits the wires. Do you panic, scan and one-off patch? Or do you dig into your asset and configuration databases, patch what you know, then look for the unknowns in a methodical manner? If you do the former, I suggest you look at changing that, soon. If you do the latter, I congratulate you, because you’ve succeeded in understanding the underpinnings of sound security: sound ITIL and IT operations strategies.

Do you have a differing opinion? Or just want to share your anecdote or story? I encourage you to leave a comment here or send me a message on Twitter. I believe I have a wide view into the enterprise space, but, admittedly, I don’t have a completely representative sample of the entire security market. I invite you to join the discussion and share your experiences. I look forward to hearing about them.
