Enterprise Security’s Operations Problem – Panic in the Data Center

Every time we go through one of these panic-patch cycles – think Heartbleed and Shellshock – the security industry laments the difficulty of the whole process. It’s a virtual certainty that there will be complaints about how “bugs like this” will be around forever because they’re difficult to patch. I’ve yet to see anyone truly address the root cause of this, and I believe I know why.

The issue is that when we go through these scenarios, the operational problems we encounter are rarely exclusively security related. Patching has evolved from a security responsibility into more of an IT operations issue, and while security teams certainly play a part, that role is rapidly shifting toward a governance function. Before we apply patches we must stage test systems, then implement and validate them – potentially including regression testing – before we deploy to production. The responsibility for those tasks is part IT operations, part security, and part applications teams – it’s a complex symphony, and when things don’t work perfectly, they don’t work at all.

The situation is even a little more complicated than that. If I’m honest, the biggest operational enterprise gap exposed by panic-patch cycles like this is asset management. Think about it. What is the first thing 80 percent or more of the organizations that heard about the Heartbleed bug did? Answer: they scanned their environment to see if (and where) they had a problem. Ladies and gentlemen of the jury, this is not a good strategy.

Right about now you’re thinking to yourself that this is yet another one of my posts about why Information Technology Infrastructure Library (ITIL) fundamentals are so important if anyone in enterprise security is going to be successful at all. And you’re absolutely correct. But fundamentals are apparently hard for many organizations, and very few are actually any good at them. The easy out is to buy a blinking set of boxes or some new software that will magically take care of these problems for you. Except that it doesn’t work this way because of, well, humans.

Consider this. When Heartbleed was announced, and then subsequently the Bash bug, a number of security community contributors coded their own scanning tools and posted them to GitHub for everyone to use. That is a noble effort and worthy of applause; however, it put a spotlight on a big knowledge gap. Then solution vendors released their plug-ins and additions for identifying the vulnerability through their vulnerability scanning tools. What troubles me is not that these were released or even that they were necessary. That’s all great. What troubles me is that these tools were the primary mechanism many enterprise security teams used to identify where the vulnerable systems were on their networks.

That’s absolutely crazy!

Enterprises that don’t operationalize configuration and asset management are doomed to repeat the cycle of lost productivity, frustration and panic. The panic that ensues when an organization identifies a major vulnerability and then has no choice but to find (or build) a tool to scan its environment for the vulnerable assets should not become routine. Wouldn’t it be amazing if the primary mode for identifying these vulnerable assets was an asset database – relatively complete, with accurate data – so teams could simply dive in and find, say, 75 percent of the known systems running OpenSSL? After patching those systems, they could then break out the scanners, find the unknown vulnerable systems, and add them into the asset and configuration management system. How crazy is that?
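For a sense of what that first step could look like, here is a minimal sketch in Python. It assumes a hypothetical asset inventory – a SQLite file named asset_inventory.db with illustrative assets and installed_packages tables, not any particular CMDB product – and pulls every host the inventory already knows is running a Heartbleed-affected OpenSSL build, so patching can start from the inventory rather than from a network-wide scan:

```python
import sqlite3

# Hypothetical asset/configuration database (schema and file name are
# assumptions for illustration, not any specific CMDB product).
conn = sqlite3.connect("asset_inventory.db")

# Heartbleed affected OpenSSL 1.0.1 through 1.0.1f; pull every host the
# inventory already records as running an affected build.
VULNERABLE_VERSIONS = ("1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
                       "1.0.1d", "1.0.1e", "1.0.1f")

query = """
    SELECT a.hostname, a.ip_address, p.version
    FROM assets AS a
    JOIN installed_packages AS p ON p.asset_id = a.id
    WHERE p.name = 'openssl'
      AND p.version IN ({placeholders})
""".format(placeholders=",".join("?" * len(VULNERABLE_VERSIONS)))

for hostname, ip, version in conn.execute(query, VULNERABLE_VERSIONS):
    print(f"{hostname} ({ip}) runs OpenSSL {version} -- schedule patch")
```

A list like that comes back in seconds and covers the known majority of systems before a single scanner packet hits the wire; the scanners then have the much smaller job of finding whatever the inventory missed.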

The reason this is important is that we are talking about incredibly high costs to productivity when security has to drop everything and go hunting and one-off patching. There are other costs, too, such as potential downtime induced by a one-off patch on a system that then fails because of the patch. The complexity escalates quickly, but I think you get the point.

Frankly, I’m shell-shocked (pun intended) that enterprise security teams aren’t focusing more on ITIL fundamentals and the asset and configuration management within their organizations. Managing change – or more accurately, the risk that results from change – is so core to the security mindset that it surprises me the industry and community at large are not tackling this problem more aggressively.

With that in mind, ask yourself how your organization fares every time another Heartbleed or Bash bug hits the wires. Do you panic, scan, and one-off patch? Or do you dig into your asset and configuration databases, patch what you know, then look for the unknowns in a methodical manner? If you do the former, I suggest you look at changing that soon. If you do the latter, I congratulate you, because you’ve succeeded in understanding the underpinnings of sound security – sound ITIL and IT operations strategies.
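To close the loop on the latter approach, here is a small sketch of the reconciliation step, assuming the same hypothetical SQLite inventory as above and that scanned_vulnerable_ips comes from whatever scanner or vendor plug-in you ran. Anything the scanner finds that the inventory does not already track gets registered, so the next panic cycle starts from a more complete asset database:

```python
import sqlite3


def known_ips(conn: sqlite3.Connection) -> set[str]:
    """IP addresses the (hypothetical) asset database already tracks."""
    return {row[0] for row in conn.execute("SELECT ip_address FROM assets")}


def reconcile(conn: sqlite3.Connection, scanned_vulnerable_ips: set[str]) -> set[str]:
    """Register scanner hits that are missing from the inventory.

    Returns the previously unknown IPs so they can be flagged for
    follow-up (ownership, patching, or decommissioning).
    """
    unknowns = scanned_vulnerable_ips - known_ips(conn)
    for ip in sorted(unknowns):
        # Placeholder hostname until an owner is identified; the point is
        # that the system is now in the inventory at all.
        conn.execute(
            "INSERT INTO assets (hostname, ip_address) VALUES (?, ?)",
            (f"unverified-{ip}", ip),
        )
    conn.commit()
    return unknowns
```

The names and schema here are illustrative assumptions, but the pattern is the point: scan output feeds the inventory instead of replacing it.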

Do you have a differing opinion? Or just want to share your anecdote or story? I encourage you to leave a comment here or send me a message on Twitter. I believe I have a wide view into the enterprise space, but, admittedly, I don’t have a completely representative sample of the entire security market. I invite you to join the discussion and share your experiences. I look forward to hearing about them.
