
Lessons Learned From High Impact Vulnerabilities of 2014

[Image: Details of the Heartbleed vulnerability in OpenSSL]

It appears that 2014 will be remembered in the IT industry for several severe and wide-reaching server-side vulnerabilities. In April, a serious flaw (CVE-2014-0160) in the widely used OpenSSL encryption library that protects website traffic, dubbed Heartbleed, shook the industry, leaving hundreds of thousands of systems open to attacks from cybercriminals. More than six months later, thousands of websites and devices still remain vulnerable.
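As a rough illustration of how an operations team might triage exposure, the minimal Python sketch below parses the local `openssl version` output against the publicly documented affected range (OpenSSL 1.0.1 through 1.0.1f). It is a simplified assumption-laden check, not a definitive test; distributions often backport fixes without changing the version string.

```python
import re
import subprocess

# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f;
# 1.0.1g and later, and the 0.9.8/1.0.0 branches, are not affected.
VULNERABLE = re.compile(r"^1\.0\.1([a-f])?$")

def local_openssl_version() -> str:
    # Example output: "OpenSSL 1.0.1f 6 Jan 2014"
    out = subprocess.run(["openssl", "version"],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()[1]

def looks_vulnerable(version: str) -> bool:
    # Caveat: vendors frequently backport the fix without bumping the
    # version string, so this can report false positives.
    return bool(VULNERABLE.match(version))

if __name__ == "__main__":
    v = local_openssl_version()
    status = "possibly vulnerable" if looks_vulnerable(v) else "not in the affected range"
    print(f"OpenSSL {v}: {status}")
```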

In September, multiple critical vulnerabilities (CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277 and CVE-2014-6278) were reported in the GNU Bourne-Again Shell (Bash), the common command-line shell used in many Linux/UNIX operating systems and Apple’s Mac OS X. The flaws could allow an attacker to remotely execute shell commands by embedding malicious code in environment variables used by the operating system. Similar to Heartbleed, these flaws affect a broad range of systems, including but not limited to Apache servers, web servers running CGI scripts, and embedded systems, which span everything from control systems to medical devices. Security experts have warned that the impact of the Bash bug is even bigger than Heartbleed, since the footprint of the GNU Bourne-Again Shell surpasses that of OpenSSL.
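The original proof-of-concept test for the first of these flaws (CVE-2014-6271) is a one-line shell command: a function definition placed in an environment variable with trailing commands appended. The sketch below simply wraps that well-known test in Python so it can be run against a local bash binary; it checks only the first CVE in the series and assumes bash is on the PATH.

```python
import os
import subprocess

def bash_is_shellshock_vulnerable(bash_path: str = "bash") -> bool:
    """Run the classic CVE-2014-6271 check: a patched bash ignores the
    commands appended to the function definition; an unpatched bash
    executes them when the environment is imported."""
    env = dict(os.environ, x="() { :;}; echo VULNERABLE")
    result = subprocess.run(
        [bash_path, "-c", "echo test"],
        env=env,
        capture_output=True,
        text=True,
    )
    return "VULNERABLE" in result.stdout

if __name__ == "__main__":
    print("bash vulnerable to CVE-2014-6271:", bash_is_shellshock_vulnerable())
```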

Then in mid-October, three Google researchers disclosed a flaw (CVE-2014-3566), which they named POODLE, that allows cybercriminals to exploit the design of SSL 3.0 to decrypt sensitive information, including secret session cookies, potentially resulting in the takeover of users’ accounts. Many security experts consider the impact of POODLE less severe, because many organizations have already abandoned SSL 3.0 as insecure.
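The standard mitigation is to refuse SSL 3.0 outright. As a minimal client-side sketch in Python (modern releases of the `ssl` module already disable SSLv3 by default, so the explicit flag is shown here only to document the intent):

```python
import socket
import ssl

# Build a TLS context that refuses the SSL 3.0 protocol entirely.
# ssl.create_default_context() disables SSLv2/SSLv3 on current Python
# versions; OR-ing in OP_NO_SSLv3 makes the policy explicit.
context = ssl.create_default_context()
context.options |= ssl.OP_NO_SSLv3

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
```

Server-side, the equivalent step is removing SSLv3 from the list of protocols the web server or load balancer will negotiate.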

While less impactful than the other two vulnerabilities, POODLE is yet another example of widely deployed open source and third-party libraries that can place software applications and systems at risk. For years, the software vendor community and internal application developers have relied on open source and third-party libraries to achieve faster time-to-market. Security testing of these third-party libraries was often simply assumed to have been conducted by their providers, and only spot checks were performed as part of the product life cycle process.

So what lessons can we learn from these vulnerabilities?

1. Increase Granularity of Vendor Risk Management Assessments

Performing a standardized vendor risk management process as part of normal business operations is an important step in securing the supply chain and minimizing risk exposure from software vulnerabilities introduced via open source or third-party libraries. As a result, organizations should increase the granularity of their existing risk assessment programs. This goes beyond simply focusing on a vendor’s security controls and demands more granular assessment of a supplier’s methods of vulnerability monitoring. It can go as far as requiring a list of all open source and third-party components used in a vendor’s software applications and devices.

Unfortunately, by extending the scope of vendor risk assessments, organizations quickly run into limitations in operational efficiency and scalability. To avoid having to hire legions of contractors or full-time staff, organizations can turn to software that automates the data gathering process and the calculation of risk scores. Specifically, Vendor Risk Management tools are being used by more and more organizations to address the information-sharing component of overall supply chain risk.
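As a toy illustration of the kind of automation such tools provide, the sketch below computes a weighted vendor risk score from a handful of assessment answers. The categories, weights, and scoring scale are entirely hypothetical; a real tool would derive them from questionnaires, external ratings, and evidence reviews.

```python
from dataclasses import dataclass

# Hypothetical assessment categories and weights.
WEIGHTS = {
    "security_controls": 0.30,
    "vulnerability_monitoring": 0.30,
    "third_party_component_inventory": 0.25,
    "incident_response": 0.15,
}

@dataclass
class VendorAssessment:
    name: str
    scores: dict  # category -> score from 0 (poor) to 5 (strong)

def risk_score(assessment: VendorAssessment) -> float:
    """Return a 0-100 risk score where higher means riskier."""
    weighted = sum(WEIGHTS[c] * assessment.scores.get(c, 0) for c in WEIGHTS)
    return round((1 - weighted / 5) * 100, 1)

vendor = VendorAssessment(
    name="Acme Components",
    scores={
        "security_controls": 4,
        "vulnerability_monitoring": 2,          # no documented advisory monitoring
        "third_party_component_inventory": 1,   # no component list provided
        "incident_response": 3,
    },
)
print(vendor.name, "risk score:", risk_score(vendor))
```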


2. Increase Frequency of Vulnerability and Pen Testing

Since the code base of applications and device firmware is constantly changing, organizations have to increase the frequency of their vulnerability scans and penetration tests to identify any gaps that can lead to a security exposure. While the majority of organizations have the necessary tools in place, increasing the frequency of these tests often leads to operational inefficiencies and an unnecessary cost burden.

In fact, many organizations have the data required to implement a more streamlined vulnerability management process. However, sifting through all the data sets, normalizing and de-duplicating the information, filtering out false positives, aggregating it, and finally deriving business impact-driven analysis is a slow and labor-intensive process. This explains why, according to the 2014 Mandiant Threat Report, 67 percent of breaches in 2013 were discovered by third parties rather than by internal resources.
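A minimal sketch of the normalization and de-duplication step, assuming two hypothetical scanner exports that both identify findings by host and CVE but use different field names:

```python
from collections import defaultdict

# Hypothetical exports from two different scanners; the field names vary,
# which is exactly why normalization is needed before de-duplication.
scanner_a = [
    {"ip": "10.0.0.5", "cve": "CVE-2014-0160", "severity": 9.8},
    {"ip": "10.0.0.7", "cve": "CVE-2014-6271", "severity": 10.0},
]
scanner_b = [
    {"host": "10.0.0.5", "vuln_id": "CVE-2014-0160", "cvss": 9.8},
    {"host": "10.0.0.9", "vuln_id": "CVE-2014-3566", "cvss": 4.3},
]

def normalize(record: dict) -> tuple:
    host = record.get("ip") or record.get("host")
    cve = record.get("cve") or record.get("vuln_id")
    score = record.get("severity") or record.get("cvss")
    return host, cve, float(score)

# Key on (host, CVE); keep the highest reported score for duplicates.
findings = defaultdict(float)
for record in scanner_a + scanner_b:
    host, cve, score = normalize(record)
    findings[(host, cve)] = max(findings[(host, cve)], score)

for (host, cve), score in sorted(findings.items(), key=lambda kv: -kv[1]):
    print(f"{host}  {cve}  CVSS {score}")
```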

The emergence of Big Data Risk Management systems is taking vulnerability management to the next level. By gathering and correlating volumes of data from security operations and IT tools, these systems can derive a near real-time view of an organization’s threat and vulnerability posture without requiring additional staff.

3. Enforce Vulnerability Testing as Part of Vendor Onboarding Process

Given the increased risk posed by vulnerabilities in third-party technology, organizations are also starting to turn the tables on their suppliers. Instead of using their own security operations teams to assess potential vulnerabilities, some companies are mandating that suppliers use independent verification services (e.g., Veracode’s VAST program) to test software applications prior to procurement and deployment.

This approach has been adopted by a variety of Fortune 1000 companies and has led to a shift in the software product life cycle management process. More and more vendors are now adjusting their engineering methodologies to incorporate vulnerability testing during the coding phase rather than leaving it to the quality assurance process.
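One way that shift can show up in practice is a build-time gate that fails when a declared dependency matches a known-vulnerable version. The sketch below is purely illustrative: the hard-coded advisory list and dependency manifest stand in for a real vulnerability feed and a real build artifact.

```python
import sys

# Illustrative advisory data only; a real pipeline would pull this from
# an advisory feed or a dependency-scanning service.
KNOWN_BAD = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
    ("bash", "4.3"): "CVE-2014-6271 (Shellshock)",
}

# Hypothetical dependency manifest produced earlier in the build.
dependencies = [("openssl", "1.0.1f"), ("zlib", "1.2.8")]

def gate(deps):
    """Return a non-zero exit code if any dependency is known-vulnerable."""
    failures = [(name, ver, KNOWN_BAD[(name, ver)])
                for name, ver in deps if (name, ver) in KNOWN_BAD]
    for name, ver, advisory in failures:
        print(f"BLOCKED: {name} {ver} is affected by {advisory}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(gate(dependencies))
```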

4. Contextualize Threat Findings and Automate Mitigation Actions

According to Kaspersky Lab, critical vulnerabilities can remain unpatched for months after they have been discovered and publicly disclosed. Based on its research, the average company takes 60 to 70 days to fix a vulnerability, which gives attackers plenty of time to gain access to a corporate network. In many cases, vulnerabilities were still present a full year after being discovered, exposing organizations to even unsophisticated attacks.

Considering that even mid-sized organizations must remediate thousands of vulnerabilities per month, it is not surprising that application security teams take so long to validate and patch flaws. Many organizations rely on multiple tools to produce the necessary vulnerability assessment data, which only adds to the volume, velocity, and complexity of the data feeds that must be analyzed, normalized, and prioritized. Relying on human labor to comb through mountains of data logs is one of the main reasons that critical vulnerabilities are not being addressed in a timely fashion.

Here again, automated systems can be used for continuous diagnostics and ticketing so that remediation focuses on business-critical risks.
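A minimal sketch of that filtering step, assuming findings have already been normalized upstream and tagged with an asset-criticality label from an inventory; the `create_ticket` function and the threshold are hypothetical stand-ins for whatever ticketing system and policy an organization actually uses.

```python
# 'criticality' is a business tag applied from an asset inventory,
# not something a scanner produces on its own.
findings = [
    {"host": "pay-api-01", "cve": "CVE-2014-6271", "cvss": 10.0, "criticality": "critical"},
    {"host": "dev-box-17", "cve": "CVE-2014-3566", "cvss": 4.3, "criticality": "low"},
    {"host": "web-lb-02", "cve": "CVE-2014-0160", "cvss": 9.8, "criticality": "critical"},
]

CVSS_THRESHOLD = 7.0  # illustrative cut-off

def create_ticket(finding: dict) -> None:
    # Stand-in for a call to a real ticketing/ITSM API.
    print(f"Ticket opened: {finding['cve']} on {finding['host']} (CVSS {finding['cvss']})")

for finding in findings:
    # Remediate only business-critical risks: critical assets with high-severity flaws.
    if finding["criticality"] == "critical" and finding["cvss"] >= CVSS_THRESHOLD:
        create_ticket(finding)
```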

To mitigate the risks associated with third-party server-side vulnerabilities, organizations should consider making the transition from alert-based to analytics-enabled security operations processes.
