Many Organizations Have the Data Required to Implement a More Streamlined Vulnerability Management Process
Relying solely on knowledge of existing vulnerabilities, as provided by vulnerability scanners, is only the first step in a streamlined vulnerability management process. Without putting vulnerabilities into the context of the risk they pose, organizations often misallocate their remediation resources. This not only wastes money but, more importantly, creates a longer window of opportunity for hackers to exploit critical vulnerabilities. At the end of the day, the ultimate goal is to shorten the window attackers have to exploit a software flaw.
Ever since computer software established itself as the backbone of modern commerce, communications, and entertainment, it has been a target for “hacktivists”, organized cyber criminals, rogue nation states, and terrorist organizations. Their primary attack vector is exploiting design flaws and weaknesses in applications in order to steal data, commit fraud, and disclose sensitive information.
In the early days of computing, application security was manageable because scale played a limited role: organizations had a manageable number of vulnerabilities to deal with. Since then, things have changed dramatically. We now live in a world where the average organization uses dozens of software applications, and many companies, even those not in the software business, create their own web and mobile applications. Even though the number of zero-day attacks has recently risen, cyber criminals still predominantly take advantage of known vulnerabilities to carry out their attacks. This has created a constant struggle to both mitigate known vulnerabilities and discover and patch the unknown "zero-day" vulnerabilities.
According to Kaspersky Lab, critical vulnerabilities can remain unpatched in businesses for months after they have been discovered and publicly announced. Based on their research, the average company takes 60–70 days to fix a vulnerability, which represents plenty of time for attackers to gain access to a corporate network. In many cases, vulnerabilities were still present a full year after being discovered, exposing the organization to even unsophisticated attacks.
Considering that even mid-sized organizations must remediate thousands of vulnerabilities per month, it is not surprising that it takes so long for application security teams to validate and patch flaws. To ensure application security, the majority of organizations rely on multiple tools to produce the necessary vulnerability assessment data. This only adds to the volume, velocity, and complexity of data feeds that must be analyzed, normalized, and prioritized. Relying on human labor to comb through mountains of data logs is one of the main reasons that critical vulnerabilities are not being addressed in a timely fashion.
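To make the normalization and de-duplication problem concrete, consider findings for the same flaw reported by several tools. The sketch below is a minimal illustration in Python; the `Finding` structure and its field names are assumptions for this example, not any real product's schema:

```python
from dataclasses import dataclass

# A minimal, hypothetical representation of one scanner finding.
@dataclass(frozen=True)
class Finding:
    cve_id: str   # e.g. "CVE-2014-0160"
    host: str     # the asset the flaw was found on
    source: str   # which tool reported it

def deduplicate(findings):
    """Merge findings that describe the same flaw on the same asset,
    keeping track of every tool that reported it."""
    merged = {}
    for f in findings:
        merged.setdefault((f.cve_id, f.host), set()).add(f.source)
    return merged

# Two scanners report the same flaw on the same host: three raw
# records collapse into two distinct (vulnerability, asset) pairs.
raw = [
    Finding("CVE-2014-0160", "web01", "scanner_a"),
    Finding("CVE-2014-0160", "web01", "scanner_b"),
    Finding("CVE-2014-3566", "web02", "scanner_a"),
]
unique = deduplicate(raw)
```

Even this toy version shows why the work is tedious at scale: every pair of tools names, scores, and timestamps the same flaw differently, and the merge key itself becomes a design decision.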
According to Verizon's "2013 Data Breach Investigations Report", 70% of breaches were discovered by a third party rather than through internal resources. This raises the question: how can organizations bring vulnerability management under control?
The first step is to transition from a vulnerability assessment to vulnerability management approach. Vulnerability management goes beyond scanning for vulnerabilities and encompasses big data analysis and remediation workflows.
To avoid operational inefficiencies and streamline the remediation process, vulnerability management needs to be supplemented by a holistic, risk-based approach to security, one that considers factors such as threats, reachability, an organization's compliance posture, and business impact.
The first of these factors is the threat itself: without an active threat, a vulnerability cannot be exploited.
Another limitation is reachability – if the threat cannot reach the vulnerability, the associated risk is either reduced or eliminated.
In this context, an organization’s compliance posture plays an essential role, as compensating controls can be leveraged to impede the reachability of a threat. According to the Verizon Data Breach Investigations Report, a majority of reported incidents were avoidable through simple or intermediate controls. This illustrates the importance of compensating controls in the context of cyber security.
Another factor in determining the actual risk posed by a vulnerability is business impact. Vulnerabilities that threaten critical business assets represent a far higher risk than those that are associated with less critical assets.
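The interplay of these four factors can be illustrated with a toy scoring function. Everything here, from the factor weights to the parameter names, is an illustrative assumption, not an industry formula:

```python
def risk_score(base_severity, threat_active, reachable,
               compensating_control, asset_criticality):
    """Adjust a scanner-reported severity (e.g. a 0-10 CVSS-style
    score) by the contextual factors described above.

    The multipliers below are illustrative assumptions only.
    """
    if not threat_active:      # no threat -> the flaw cannot be exploited
        return 0.0
    # Business impact: weight by asset criticality (0.0-1.0).
    score = base_severity * asset_criticality
    if not reachable:          # threat cannot reach the vulnerability
        score *= 0.1
    if compensating_control:   # e.g. a control impeding reachability
        score *= 0.5
    return score

# A critical flaw on a low-value, shielded, unreachable asset ranks
# far below a moderate flaw on a critical, exposed system.
low = risk_score(9.8, True, False, True, 0.2)
high = risk_score(6.5, True, True, False, 1.0)
```

The point is not the specific weights but the ordering they produce: raw scanner severity alone would rank these two findings the other way around.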
On paper, a risk-based approach to security sounds straightforward. Unfortunately, the data required for each of the decision factors is scattered and disconnected, because it is produced by a variety of silo-based tools such as vulnerability scanners, penetration testing tools, IT-GRC systems, and configuration management databases.
Many organizations therefore already have the data required to implement a more streamlined vulnerability management process. However, sifting through all the data sets, normalizing and de-duplicating the information, filtering out false positives, aggregating the results, and finally deriving business impact-driven remediation actions is a slow and labor-intensive process.
The emergence of big data risk management systems is taking vulnerability management to the next level. These systems combine risk intelligence, using big data gathered and correlated from security operations tools, with automated remediation that establishes bi-directional workflows with IT operations. They drive operational efficiencies by automating continuous diagnostics and ticketing to remediate only business-critical risks. Using this automated approach, organizations can free up IT and security personnel to focus on critical tasks and turn their security technicians into risk strategists.
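The ticketing side of such a workflow could be sketched as follows. The `create_ticket` callback stands in for a real IT-operations integration and is purely hypothetical, as is the idea of a fixed score threshold:

```python
def open_remediation_tickets(scored_findings, threshold, create_ticket):
    """File tickets only for findings whose contextual risk score
    meets the threshold, highest risk first."""
    critical = [f for f in scored_findings if f["score"] >= threshold]
    critical.sort(key=lambda f: f["score"], reverse=True)
    return [create_ticket(f) for f in critical]

# Stand-in for a real ticketing-system integration (hypothetical).
def create_ticket(finding):
    return f"TICKET: fix {finding['cve']} on {finding['host']}"

findings = [
    {"cve": "CVE-2014-0160", "host": "web01", "score": 9.8},
    {"cve": "CVE-2013-2094", "host": "dev05", "score": 2.1},
    {"cve": "CVE-2014-6271", "host": "api02", "score": 9.9},
]
tickets = open_remediation_tickets(findings, threshold=7.0,
                                   create_ticket=create_ticket)
```

Filtering on a contextual score rather than raw severity is what keeps the low-risk finding out of the remediation queue, which is exactly the triage work otherwise done by hand.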
In today’s fast-moving threat environment, vulnerability management deployed as a stand-alone discipline, without risk-based metrics for ranking and prioritizing remediation efforts, may be making organizations less, not more, secure.