There is simply no denying that the adoption of open source components in software development is pervasive and will continue to expand. Black Duck, an open source security firm, noted in their “State of Open Source Security in Commercial Applications, 2016” report that 67 percent of the applications tested had some form of open source component. I have seen analyst reports claim adoption is very close to 100 percent for mission critical applications for the Global 2000.
This makes perfect sense given that using open source complements and amplifies many of the characteristics that development teams covet. When you tap into benefits like agility, time to market, and reduction of development costs, you are bound to have a pretty impressive adoption curve. Companies also like the freedom from vendor lock-in, and see open source as a way to gain a competitive edge.
That same logic prompts the next question: as software increasingly becomes the point of attack, how does the use of open source components affect the security of the applications that include them?
The answer is: considerably. The National Vulnerability Database (NVD) shows that since 2014, more than 6,000 new security vulnerabilities have been publicly reported in open source. The Black Duck study found an average of 22.5 vulnerabilities in the applications that included open source components, and that 40 percent of those vulnerabilities were rated “severe.”
Think about those statistics for a moment. On average, if you use open source components, you will have nine severe vulnerabilities in your application (40 percent of 22.5). But these are not just any vulnerabilities—they are published and therefore widely known. A simple Google search will turn up a neatly written tutorial or YouTube video on how to exploit any one of them.
If the part of your application composed of code written in-house was completely pristine—you started with secure design and architecture disciplines, executed a perfect threat modeling exercise, and your developers all wrote vulnerability-free code—chances are high that you’ll still end up with vulnerabilities from the open source components.
The Black Duck study indicates that organizational policy and structure have not yet caught up with the growing adoption. Nearly 50 percent of organizations using open source have no policies for selecting and approving open source code. Of the companies that do have policies, 50 percent were found to lack enforcement or to have policies that are readily ignored. Only one in three companies has a person dedicated to open source projects. As is often the case, rigor and discipline are not keeping pace with adoption.
But wait, there is more. How do you manage the licensing and version control of the open source components? How do you keep track of what components are used, with what version, in your application so you can take action if a vulnerability is reported? The Black Duck study notes that for the applications that use open source, there was an average of 105 open source components. Just knowing what your organization is using is a non-trivial exercise.
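Even the first step—knowing what you use—can be sketched in a few lines. The following is a minimal illustration, not a substitute for a real software composition analysis tool: it parses a pinned, Python-style dependency manifest into a component inventory, ignoring transitive dependencies and everything a manifest does not capture. The manifest contents here are invented for the example.

```python
import re

def parse_manifest(text):
    """Return a {component: version} inventory from pinned 'name==version' lines."""
    inventory = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comments
        if not line or line.startswith("#"):
            continue
        match = re.match(r"^([A-Za-z0-9_.\-]+)==([\w.\-]+)$", line)
        if match:
            inventory[match.group(1)] = match.group(2)
    return inventory

manifest = """
# example application manifest (illustrative)
requests==2.9.1
flask==0.10.1
"""
print(parse_manifest(manifest))
```

Multiply this by an average of 105 components per application, across every application in the portfolio, and the case for an automated, policy-aware tool over a spreadsheet makes itself.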
It turns out that all of this progress has generated a decidedly old school set of processes. Those organizations that do track open source use tend to turn to that flawed instrument known as the Excel spreadsheet. There is not yet widespread use of tools that scan and identify open source components, hence the reliance on spreadsheets. Given the paucity of policies and enforcement, relying on developers to log open source use and record accurate version levels is predictably unreliable. When a vulnerability is discovered in an open source component, there is no effective way to know its impact on the organization, the scope of the problem, or the scope of the remediation effort.
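With an accurate inventory per application, answering “what is our exposure?” becomes a lookup rather than a scramble. The sketch below assumes hypothetical application names and component data; the affected-version set would come from the relevant security advisory.

```python
def affected_apps(inventories, component, vulnerable_versions):
    """Return the names of applications that pin a vulnerable version of component."""
    hits = []
    for app, components in inventories.items():
        if components.get(component) in vulnerable_versions:
            hits.append(app)
    return sorted(hits)

# Illustrative inventories keyed by application name (invented data)
inventories = {
    "billing":    {"openssl": "1.0.1f", "zlib": "1.2.8"},
    "web-portal": {"openssl": "1.0.2h"},
    "reporting":  {"zlib": "1.2.8"},
}

# A new advisory reports these OpenSSL versions as vulnerable
print(affected_apps(inventories, "openssl", {"1.0.1f"}))  # ['billing']
```

The point is not the code—it is that this query is impossible to answer reliably when the inventory lives in hand-maintained spreadsheets.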
Remediation. Yet another issue with open source. If a vulnerability is found, who is responsible for remediating the problem? Again, we established that most organizations do not have someone dedicated to managing open source. So who is responsible, IT security or the developers? Does the company enable the developers to work with the community that supports the open source component in question?
So, we have established the benefits of open source along with the challenges. Clearly organizations must recognize that they need to expand their application security testing regimens to include open source components. They must set policies on the use of open source and enforce those policies. Just as application security works when there is a Software Security Group (SSG) responsible for ensuring that applications are tested and remediated, someone in the organization must own open source usage. The organization must take steps to identify and inventory open source usage through an automated tool that supports the organization’s policies and provides licensing data for accounting and legal purposes.
This is hardly the first time a good idea spurred adoption that got ahead of security considerations or management policies. Organizations should understand the risks and returns of open source and either start putting policies in place or get serious about enforcing existing ones. Developers need to be educated on the ramifications of using open source and how it is to be managed. Analysis of open source and its risks should be integrated into your secure development life cycle (SDLC) methodology and your Software Security Initiative (SSI).
There are just too many good reasons to adopt open source, in spite of the associated challenges. In truth, the challenges are nothing a little rigor and discipline can’t fix. You just need to be aware that they exist and take the proper steps to mitigate them. The parade has formed and it is up to us to get in front of it and lead. The stakes are too high to do otherwise.