Lessons Learned From High Impact Vulnerabilities of 2014

[Image: Heartbleed Vulnerability]

It appears that 2014 will be remembered in the IT industry for several severe and wide-reaching server-side vulnerabilities. In April, a serious flaw (CVE-2014-0160), nicknamed Heartbleed, in the widely used OpenSSL encryption library that protects website traffic shook the industry, leaving hundreds of thousands of systems open to attack from cybercriminals. More than six months later, thousands of websites and devices still remain vulnerable.
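Heartbleed affected the OpenSSL 1.0.1 release line up to and including 1.0.1f, with 1.0.1g shipping the fix. A minimal sketch of a version-string triage check, assuming plain upstream version strings (it cannot account for distribution backports of the patch):

```python
import re

# OpenSSL 1.0.1 through 1.0.1f shipped the vulnerable heartbeat code;
# 1.0.1g (April 2014) contained the fix. This flags a version string as
# potentially vulnerable based on that range alone -- distributions that
# backported the fix to an older version string are not detectable this way.
def heartbleed_vulnerable(version_string: str) -> bool:
    match = re.match(r"1\.0\.1([a-z]?)$", version_string.strip())
    if not match:
        return False  # the 0.9.8 and 1.0.0 branches never had the heartbeat code
    letter = match.group(1)
    return letter < "g"  # "" (plain 1.0.1) through "f" are affected

print(heartbleed_vulnerable("1.0.1f"))  # True
print(heartbleed_vulnerable("1.0.1g"))  # False
```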

In September, multiple critical vulnerabilities (CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277 and CVE-2014-6278), collectively known as Shellshock, were reported in the GNU Bourne-Again Shell (Bash), the common command-line shell used in many Linux/UNIX operating systems and Apple’s Mac OS X. The flaws could allow an attacker to remotely execute shell commands by embedding malicious code in environment variables used by the operating system. Similar to Heartbleed, these flaws affect a broad range of systems, including but not limited to Apache servers, web servers running CGI scripts, and embedded systems spanning everything from control systems to medical devices. Security experts have warned that the impact of the Bash bug could be even bigger than Heartbleed, since the footprint of the GNU Bourne-Again Shell surpasses that of OpenSSL.
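The environment-variable mechanism can be probed locally: a Bash vulnerable to CVE-2014-6271 parses the function definition exported in the variable and executes the trailing command on startup, while a patched Bash ignores it. A sketch of that classic probe, driven from Python:

```python
import subprocess

# Shellshock (CVE-2014-6271) probe: a vulnerable bash interprets the
# function-style environment variable and runs the trailing payload
# ("echo vulnerable") when it starts. A patched bash prints only "safe".
def bash_is_vulnerable() -> bool:
    env = {"x": "() { :;}; echo vulnerable", "PATH": "/usr/bin:/bin"}
    result = subprocess.run(
        ["bash", "-c", "echo safe"],
        env=env, capture_output=True, text=True,
    )
    return "vulnerable" in result.stdout

print(bash_is_vulnerable())  # False on any bash patched since September 2014
```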

Then in mid-October, three Google researchers disclosed a flaw (CVE-2014-3566) they dubbed POODLE, which lets cybercriminals exploit the design of SSL 3.0 to decrypt sensitive information, including secret session cookies, potentially resulting in the takeover of users’ accounts. Many security experts consider POODLE’s impact less severe, because many organizations had already abandoned SSL 3.0 as insecure.
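The standard POODLE mitigation is simply to refuse to negotiate SSL 3.0 at all. A minimal sketch using Python's `ssl` module (modern `create_default_context()` already disables SSLv3; setting the flag explicitly makes the policy visible):

```python
import ssl

# POODLE mitigation: never allow an SSL 3.0 handshake. Recent versions of
# ssl.create_default_context() disable SSLv3 (and SSLv2) by default; the
# explicit OP_NO_SSLv3 flag below documents the policy and guards against
# older, more permissive defaults.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_SSLv3

print(bool(ctx.options & ssl.OP_NO_SSLv3))  # True
```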

While less impactful than the other two vulnerabilities, POODLE is another example of how widely deployed open source and third-party libraries can place software applications and systems at risk. For years, the software vendor community and internal application developers have relied on open source and third-party libraries to achieve faster time-to-market. Security testing for these third-party libraries was often simply assumed to have been conducted by their providers, with only spot checks performed as part of the product life cycle process.

So what lessons can we learn from these vulnerabilities?

1. Increase Granularity of Vendor Risk Management Assessments

A standardized vendor risk management process, run as part of normal business operations, is an important step in securing the supply chain and minimizing exposure to software vulnerabilities introduced via open source or third-party libraries. Organizations should therefore increase the granularity of their existing risk assessment programs. This goes beyond simply reviewing a vendor’s security controls and demands a closer look at a supplier’s vulnerability monitoring practices. It can go as far as requiring a list of all open source / third-party components used in a vendor’s software applications and devices.
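Once a vendor supplies such a component list, it can be cross-checked mechanically against known-vulnerable releases. A sketch under illustrative assumptions (the component names and the CVE table below are made up for the example, not a real vulnerability feed):

```python
# Cross-check a vendor-supplied component inventory against a table of
# known-vulnerable releases. Both the inventory and KNOWN_VULNERABLE are
# illustrative; in practice the table would come from a CVE/NVD feed.
KNOWN_VULNERABLE = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],             # Heartbleed
    ("bash", "4.2"): ["CVE-2014-6271", "CVE-2014-7169"],  # Shellshock
}

def flag_components(inventory):
    """Return {(name, version): [CVE, ...]} for every risky component."""
    return {
        (name, version): KNOWN_VULNERABLE[(name, version)]
        for name, version in inventory
        if (name, version) in KNOWN_VULNERABLE
    }

vendor_inventory = [("openssl", "1.0.1f"), ("zlib", "1.2.8")]
print(flag_components(vendor_inventory))
# {('openssl', '1.0.1f'): ['CVE-2014-0160']}
```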


Unfortunately, extending the scope of vendor risk assessments quickly runs into limits of operational efficiency and scalability. To avoid hiring legions of contractors or full-time staff, organizations can turn to software to automate data gathering and the calculation of risk scores. Vendor Risk Management tools in particular are being adopted by more and more organizations to address the information-sharing component of overall supply chain risk.
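The scoring side of such automation can be as simple as weighting assessment answers and rolling them up. A minimal sketch, assuming illustrative question names and weights (these are not an industry standard):

```python
# Minimal automated vendor risk scoring: each assessment question carries a
# weight, and missing controls add to a 0-100 risk score. The questions and
# weights below are illustrative assumptions for the example only.
WEIGHTS = {
    "patches_within_30_days": 30,
    "publishes_component_list": 25,
    "monitors_vulnerability_feeds": 25,
    "independent_security_testing": 20,
}

def vendor_risk_score(answers: dict) -> int:
    """Higher score = higher risk. 'answers' maps question -> bool (control in place)."""
    return sum(weight for question, weight in WEIGHTS.items()
               if not answers.get(question, False))

answers = {"patches_within_30_days": True, "publishes_component_list": False,
           "monitors_vulnerability_feeds": True, "independent_security_testing": False}
print(vendor_risk_score(answers))  # 45 -> missing component list (25) + testing (20)
```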

2. Increase Frequency of Vulnerability and Pen Testing

Since the code base of applications and device firmware is constantly changing, organizations have to increase the frequency of their vulnerability scans and penetration tests to identify any gaps that can lead to a security exposure. While most organizations have the necessary tools in place, increasing the frequency of these tests often creates operational inefficiencies and an added cost burden.

In fact, many organizations have the data required to implement a more streamlined vulnerability management process. However, sifting through all the data sets, normalizing and de-duplicating the information, filtering out false positives, aggregating it, and finally deriving business impact-driven analysis is a slow and labor-intensive process. This explains why, according to the 2014 Mandiant Threat Report, 67 percent of breaches in 2013 were discovered by third parties rather than by internal resources.
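The normalize-and-de-duplicate step described above can be sketched in a few lines: findings from several scanners frequently report the same (host, CVE) pair, and grouping on that key collapses the duplicates while preserving which tools saw the issue. The field names are illustrative, not a real scanner schema:

```python
from collections import defaultdict

# De-duplicate multi-scanner findings by (host, CVE), keeping track of
# which scanners reported each issue. Field names are illustrative.
def deduplicate(findings):
    merged = defaultdict(set)
    for finding in findings:
        merged[(finding["host"], finding["cve"])].add(finding["scanner"])
    return {key: sorted(scanners) for key, scanners in merged.items()}

raw = [
    {"host": "web01", "cve": "CVE-2014-0160", "scanner": "nessus"},
    {"host": "web01", "cve": "CVE-2014-0160", "scanner": "qualys"},
    {"host": "db01",  "cve": "CVE-2014-6271", "scanner": "nessus"},
]
print(deduplicate(raw))
# {('web01', 'CVE-2014-0160'): ['nessus', 'qualys'], ('db01', 'CVE-2014-6271'): ['nessus']}
```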

The emergence of Big Data Risk Management systems is taking vulnerability management to the next level. By correlating the volumes of data gathered from security operations and IT tools, these systems can derive a near real-time view of an organization’s threat and vulnerability posture without additional staff.

3. Enforce Vulnerability Testing as Part of Vendor Onboarding Process

Given the increased risk posed by vulnerabilities in third-party technology, organizations are also starting to turn the tables on their suppliers. Instead of using their own security operations teams to assess potential vulnerabilities, some companies are mandating that suppliers use independent verification services (e.g., Veracode’s VAST program) to test software applications prior to procurement and deployment.

This approach has been adopted by a variety of Fortune 1000 companies and has led to a shift in software product life cycle management. More and more vendors are adjusting their engineering methodologies to make vulnerability testing part of the coding phase rather than leaving it to quality assurance.

4. Contextualize Threat Findings and Automate Mitigation Actions

According to Kaspersky Lab, critical vulnerabilities can remain unpatched for months after they have been discovered and publicly disclosed. Based on its research, the average company takes 60 to 70 days to fix a vulnerability, plenty of time for attackers to gain access to a corporate network. In many cases, vulnerabilities were still present a full year after discovery, exposing the organization to even unsophisticated attacks.

Considering that even mid-sized organizations must remediate thousands of vulnerabilities per month, it is not surprising that application security teams take so long to validate and patch flaws. Many organizations rely on multiple tools to produce the necessary vulnerability assessment data, which only adds to the volume, velocity, and complexity of the feeds that must be analyzed, normalized, and prioritized. Relying on human labor to comb through mountains of data logs is one of the main reasons critical vulnerabilities are not addressed in a timely fashion.

Here again, automated systems can be used for continuous diagnostics and ticketing to remediate only business-critical risks.
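One way to sketch "remediate only business-critical risks" is to combine a technical severity score with asset criticality and open tickets only above a threshold. The scoring formula, asset names, and threshold below are illustrative assumptions, not a standard:

```python
# Prioritization sketch: business risk = CVSS score x asset criticality,
# and only findings above the threshold enter the remediation queue.
# Asset names, weights, and the 7.0 threshold are illustrative.
ASSET_CRITICALITY = {"payment-gw": 1.0, "intranet-wiki": 0.3}

def tickets_to_open(findings, threshold=7.0):
    queue = []
    for f in findings:
        business_risk = f["cvss"] * ASSET_CRITICALITY.get(f["asset"], 0.5)
        if business_risk >= threshold:
            queue.append((f["asset"], f["cve"], round(business_risk, 1)))
    return queue

findings = [
    {"asset": "payment-gw", "cve": "CVE-2014-0160", "cvss": 7.5},
    {"asset": "intranet-wiki", "cve": "CVE-2014-0160", "cvss": 7.5},
]
print(tickets_to_open(findings))  # [('payment-gw', 'CVE-2014-0160', 7.5)]
```

The same Heartbleed finding is ticketed on the payment gateway but filtered out on the low-criticality wiki, which is the business-impact-driven behavior the preceding paragraphs describe.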

To mitigate the risks associated with third-party server-side vulnerabilities, organizations should consider making the transition from alert-based to analytics-enabled security operations processes.

Written By

Torsten George is a cybersecurity evangelist at Absolute Software, which helps organizations establish resilient security controls on endpoints. He also serves as strategic advisory board member at vulnerability risk management software vendor, NopSec. He is an internationally recognized IT security expert, author, and speaker. Torsten has been part of the global IT security community for more than 27 years and regularly provides commentary and publishes articles on data breaches, insider threats, compliance frameworks, and IT security best practices. He is also the co-author of the Zero Trust Privilege For Dummies book. Torsten has held executive level positions with Centrify, RiskSense, RiskVision (acquired by Resolver, Inc.), ActivIdentity (acquired by HID® Global, an ASSA ABLOY™ Group brand), Digital Link, and Everdream Corporation (acquired by Dell).
