Imagine, for a moment, that an engineer in a government acquisitions program office is given the responsibility for designing the next generation of cruisers for the US Navy. Specifications for the hull, the machinery, the weapons, the navigation system, the damage control systems and the radars are carefully drawn up. His next focus is the ship’s propulsion system. As requirements are drawn for the engines, the engineer is informed that shipboard propulsion for the entire Navy is the province of an entirely separate organization, and that this organization will provide, at some indeterminate date in the future, a propulsion solution, the use of which is mandatory.
While it’s likely that our engineer would employ some colorful language at this point, the upshot would be something to the effect of “That’s crazy!” The sanity of this separation of duties is open for debate. What is clear, however, is that this is exactly the sort of policy that governments and international organizations implement when it comes to cybersecurity and specific acquisitions programs.
The imposition of externally defined cybersecurity methodologies and solutions on both government and critical infrastructure programs hasn’t proven effective. Fortunately, the political and technical winds are shifting, and there is new emphasis on integrating security requirements and functionality from the beginning of the technology development life cycle.
To be fair, there is substantial guidance available to government departments, agencies and acquisitions programs with respect to information assurance (IA), cybersecurity, information security and the protection of classified information (“substantial” doesn’t quite do it justice, though; “truckloads” or “boatloads” would be better starting points for an accurate quantitative conception). Unfortunately, the volume of guidance isn’t having a proportionally positive impact. Programmatic frictions arise when critical functional elements of an emerging or upgraded capability are defined and/or dictated by an external entity that, for all intents and purposes, is not a stakeholder in the capability’s intended use.
The politics of program management and security engineering aside, the imposition of externally defined security methodologies on research, development and acquisitions programs is a double-edged sword. On one hand, the sheer number of cybersecurity programs intended to provide broad spectrum protection across large sections of the government enterprise (e.g., defense, intelligence, civilian government, etc.) is a positive sign. The allocation of resources means that cyber and information security issues are drawing the attention they merit and that tangible efforts are being made to address problems long seen as intractable.
On the other hand, externally imposed security requirements have traditionally led to either applique security solutions and/or programmatic decisions that realize cost savings by deferring required security capabilities to the next technical echelon. For example, a program manager charged with producing a software application that correlates targets to available strike assets might elect to use the identity, credential and access management (ICAM) mechanisms provided by the software’s host system, the host system’s network or the enterprise that owns the network. This shifts the resourcing and development burdens away from the program manager.
This approach rests on assumptions that simply don’t hold up. Application development is rarely performed in concert with host system or network service development, which results in lengthy and costly integration efforts. It also abrogates the core security principle of defense in depth: if the enterprise, network or host is breached, the application is more vulnerable to attack than it would have been had security engineering been part of the requirements process. Finally, applying security as an add-on or applique to a deployed system flies in the face of established software project management and systems engineering principles. More than 30 years ago, Barry Boehm demonstrated that the cost to fix an error after a system has been operationally deployed can range from 40 to 1,000 times the cost of fixing the same error during the requirements phase.
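The defense-in-depth point can be made concrete with a minimal sketch. In the example below (all names and the token scheme are hypothetical, not drawn from any particular program), the application re-verifies the caller at its own layer instead of assuming that the host system or the enterprise ICAM service has already done so:

```python
import hashlib
import hmac

# Hypothetical application-layer secret, held independently of the
# enterprise ICAM service; in practice this would be a managed key
# or certificate, not a hard-coded value.
APP_SECRET = b"app-layer-secret"

def issue_token(user: str) -> str:
    """Issue an application-layer token for a user (illustrative only)."""
    return hmac.new(APP_SECRET, user.encode(), hashlib.sha256).hexdigest()

def authorize(user: str, token: str) -> bool:
    """Defense in depth: verify the caller at the application layer,
    even if the enterprise network has already authenticated the session."""
    expected = issue_token(user)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, token)
```

If the surrounding network is breached, an attacker who can reach the application still cannot act as a user without a valid application-layer token; relying solely on inherited ICAM would leave no such inner check.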
Fortunately, the shortcomings of an enterprise technology approach that emphasizes discrete operational and cybersecurity development efforts are becoming evident. More importantly, programmatic efforts to ensure that security is both engineered as an integral system capability and emphasized as a key operational function are gaining traction. In late September, Jamie Shea, the NATO Deputy Assistant Secretary General for Emerging Security Challenges and the alliance’s senior cybersecurity official, addressed the issue, stating that NATO needs to embed cyber defense as a permanent feature of exercise and mission planning.
In a related vein, the US Department of Defense (DoD) significantly overhauled its IA practices and posture earlier this year. With the issuance of Department of Defense Instruction 8500.01 on March 14, 2014, the preceding IA process, known as the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP), was replaced by a process incorporating risk management practices developed by the US National Institute of Standards and Technology (NIST). The NIST practices explicitly state that security is to be integrated into the overall enterprise architecture and the system development life cycle.
This is a significant departure from, and improvement over, security paradigms focused on compliance rather than integration. The DoD’s new process supports a system development paradigm in which the identification, architecture, design, implementation and integration of non-functional security requirements are accorded the same priority as functional/operational requirements. What it does not do, though, is provide specific guidance about how individual programs should be organized for security success.
What appears to be a lack of specificity is deliberate. Each system, program and acquisition is different, and leadership must have the ability to, within limits, tailor the organization and its processes. Despite these differences, there are a number of practices common to all system development efforts (both private and public sector), the adoption of which will significantly advance the seamless integration of security capabilities into system and enterprise architectures. Some practices include:
End the organizational segregation of cyber professionals
Current practice generally segregates a program’s technical expertise into two groups: a generalist team responsible for specifying and implementing a system and a specialized security or cyber team charged with ensuring that the delivered system is adequately protected. This type of organization not only inhibits information flow throughout the system development life cycle, but often pits the two groups, which should collaborate to produce an effective solution, against one another with respect to requirements and resources.
Integrated, unified teams dramatically increase the volume of information exchanged at early stages, thus realizing the technical benefits and cost savings (a la Boehm) that flow from addressing issues as early in the project life cycle as possible.
Mandate the DevOps principles of continuous test and integration
Security vulnerabilities often result from improperly coded or configured software. In theory, these errors should be caught during a project’s test phase. In practice, many errors are not discovered during testing and fixes of known problems may be knowingly deferred due to schedule and budget concerns.
Part of the answer lies in adopting continuous test and continuous integration approaches as laid out in DevOps methodology. Using these approaches, software is both automatically and constantly tested and integrated. Upon completing a module, a developer attempts to check it into the project trunk. As part of this process, the module’s compliance with both functional and non-functional requirements (including security) is verified. Any failures result in rejection with notification and detailed reports as to which tests failed. The developer then addresses the issues. If there are no failures upon submission, the module is integrated into the project trunk.
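The check-in gate described above can be sketched as follows. The check names and the stub checks are hypothetical; a real pipeline would invoke actual unit tests, static analysis and security scanners rather than these placeholders:

```python
from typing import Callable, Dict, List, Tuple

def submit(module: str,
           checks: Dict[str, Callable[[str], bool]],
           trunk: List[str]) -> Tuple[bool, List[str]]:
    """Gate a module at check-in: run every functional and security check,
    integrate the module into the trunk only if all pass, and otherwise
    reject it with a report naming the failed checks."""
    failures = [name for name, check in checks.items() if not check(module)]
    if failures:
        return False, failures      # rejected; developer fixes and resubmits
    trunk.append(module)            # all checks passed; module is integrated
    return True, []

# Illustrative check suite: a trivially passing "unit test" and a crude
# "security scan" that rejects any module containing a call to eval().
checks = {
    "unit_tests": lambda m: True,
    "security_scan": lambda m: "eval(" not in m,
}
```

A submission that fails the (stubbed) security scan is rejected along with the names of the failed checks, mirroring the reject-with-report flow described above; a clean submission lands on the trunk.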
While continuous testing and continuous integration aren’t silver bullets, they ensure that a far larger proportion of the vulnerabilities resulting from software errors is found before product deployment.
Bifurcate separate cyber programs, then integrate them into functional programs
Today, there are any number of discrete programs intended to provide an overarching cyber defense capability for government, critical infrastructure and private industry. They range in magnitude from a few tens of thousands of dollars to ceilings calculated in the billions. The problem is that the programs, resources and organizations these efforts are intended to protect are subject to, rather than part of, the security solution. Until that paradigm changes, there is a significant likelihood that the solutions these cyber programs produce will either be inadequate for, or simply not applied by, the operational programs, organizations and resources.
One answer is to bifurcate cyber funding into research and operational tracks. Research efforts would focus on developing new technologies and techniques that could be used by architects, engineers and developers to build enhanced security into operational products and programs. The operational track would use cyber funding to mandate the addition of dedicated security engineers and capabilities to operations, maintenance and development programs, with a concomitant requirement to demonstrate appropriate levels of compliance.
As long as security is perceived and implemented as a discrete discipline, it will be treated as an imposition and an afterthought. Until security is built in from the beginning, and until participants across the life cycle – from engineer to executive – understand that it is as much a part of their responsibility as operational functions, achieving the levels of security and resiliency needed to operate in an increasingly hostile cyber environment will remain a Sisyphean task.