GCHQ Joins the NSA in Publishing its Vulnerabilities Equities Process
On November 15, 2017, the U.S. government made public its vulnerabilities equities process (VEP). This is the process used to decide whether a government agency should disclose a discovered vulnerability or keep it secret for its own purposes. Exactly one year and two weeks later, the UK government followed suit, disclosing its own Equities Process.
The issue at stake is summarized by Dr Ian Levy, technical director at the UK’s National Cyber Security Centre (NCSC), which is part of the Government Communications Headquarters (GCHQ) intelligence agency. “When we find a security problem, we need to decide what to do. Our default is to tell the vendor and have them fix it, but sometimes – after weighing up the implications – we decide to keep the fact of the vulnerability secret and develop intelligence capabilities with it.”
Both governments admit to stockpiling vulnerabilities. This is not open to discussion – they just do. The equities process is the means by which they decide which vulnerabilities should be kept secret from vendors, security companies and the public.
Neither government admits – within this process – that they also develop and stockpile exploits for those vulnerabilities. However, the Shadow Brokers release of the NSA EternalBlue exploit makes it self-evident that at least the NSA develops exploits; while GCHQ’s ‘developing intelligence capabilities’ would require the ability to exploit the stockpiled vulnerabilities.
The Two Processes
The principles underlying the U.S. and UK processes are almost identical. For example, in both cases the decision process involves only government agencies: it is effectively an arbitrary intelligence agency decision on whether to retain or disclose a vulnerability.
The U.S. process involves multiple government agencies. The UK process involves only two specified agencies: GCHQ and NCSC.
The U.S. process produces an annual report to the Cybersecurity Coordinator (at that time Rob Joyce, but a position that has since been eliminated by the Trump Administration). This report ‘may’ be provided to Congress. There is no public requirement for reports from GCHQ.
The similarities are not surprising since GCHQ and the NSA have a long-standing information sharing arrangement. Each process exempts vulnerabilities that have been shared by the other – these won’t even be considered for disclosure.
There is, however, one major difference between the two processes. In the U.S., it is the NSA (the foreign cyber intelligence agency) that holds the key role by providing the Executive Secretariat for the VEP. In the UK it is the NCSC (the domestic cyber intelligence agency) that holds the key role – the CEO of the NCSC (currently Ciaran Martin) is the final arbiter on the decision to disclose or retain.
The UK approach would be roughly equivalent to giving the FBI the key role in the U.S. It may indicate a subtle difference in priorities: in the U.S., value for foreign cyber intelligence appears paramount, while in the UK, avoiding harm to the domestic market takes precedence.
However, there is scope for tension here since the NCSC remains part of GCHQ, and Ciaran Martin is also a director at GCHQ. It is noticeable that a recent UK government committee report commented, “we heard there are unresolved tensions derived from (NCSC’s) status as part of GCHQ…”
This could explain the inclusion within the UK equities process of the statement, “In exceptional cases, the CEO of the NCSC may decide that further escalation via submissions to Director GCHQ and, if required, the Foreign Secretary should be invoked.”
To hold or not to hold
For security vendors, private industry and the general public, the question is not so much how intelligence agencies decide to keep vulnerabilities secret, but whether they should do so at all.
The first point to stress is that there is no law compelling governments to disclose discovered vulnerabilities. These agencies, in general, currently have the legal right to stockpile vulnerabilities – just as business has the legal right to decline to patch vulnerabilities. The debate really focuses on whether stockpiling should be subject to legal restriction, and the moral issue of whether a government is right to allow its citizens to remain unknowingly vulnerable.
The need for some form of legal oversight is a common theme among independent security professionals. Microsoft has long argued for the development of international norms of cyber behavior. In February 2017, Brad Smith (Microsoft's chief legal officer) focused these arguments into a call for an international Digital Geneva Convention. Key, however, is that "it should mandate that governments report vulnerabilities to vendors rather than stockpile, sell or exploit them."
This clearly isn't happening, and it is unlikely to happen. The history of nuclear disarmament suggests that the best that can be hoped for is bilateral agreements and perhaps international non-proliferation agreements. Neither of these has worked well, with North Korea recently developing nuclear weapons, and President Trump indicating he will withdraw from the bilateral Intermediate-Range Nuclear Forces (INF) Treaty with Russia.
With nothing to stop antagonists from stockpiling cyber weapons, western agencies feel they need to do the same (although they describe the exploits as intelligence tools rather than cyber weapons).
This de facto ‘they do, so we must’ attitude is supported by Tom Kellermann, Chief Cybersecurity Officer at Carbon Black. “Governments will withhold zero days as the future of armed conflict will be cyber-enabled,” he told SecurityWeek. “The asymmetric capability of zero-days is fundamental for Force projection. International norms are critically important, but they can only be established when our cold war adversaries stop colonizing western cyberspace and dismantle the protector racket state they have incubated around their cybercriminal communities.”
Chris Morales, head of security analytics at Vectra, takes a more nuanced view. He questions the focus on exploits as a means to prepare for and conduct potential cyber warfare.
“Keep in mind the exploit is simply one of many ways to gain access to a network or system,” Morales said. “While these types of exploits are heavily leveraged for automated attacks like ransomware, the most successful method of exploitation is still social engineering. Zero days are saved for the most critical needs. Most attackers don’t like to waste this type of knowledge when they can simply convince a user to give them access to their system instead.”
This raises an interesting issue. Exploits are commonly used by cybercriminals, and cannot be contained. The escape of Stuxnet from Iran (and its possible conversion into the Shamoon malware used against Saudi Arabia) is one example. The theft of the NSA's EternalBlue exploit and its leaking by the Shadow Brokers (and its subsequent use in the WannaCry and NotPetya outbreaks that cost industry hundreds of millions of dollars) is another. Given the empirical inability of governments to guarantee the safekeeping of their exploits, should they cease to stockpile them and focus on social engineering methods of infiltration?
Against this argument is the future potential for the development of advanced attacks that cannot be detected and cannot escape. IBM recently described such an approach that uses artificial intelligence to deliver malware under only the most specific requirements – malware that will only ever be activated against a particular and specific target. It would be a reasonable assumption that both the NSA and GCHQ already have such tools.
Having a zero-day vulnerability to deliver such a cyber weapon would be simpler than relying on social engineering.
Nathan Wenzler, senior director of cybersecurity at Moss Adams, questions whether government claims that stockpiling is necessary for national security are valid.
“While there is a case to be made for the national security work that many governments are tasked to perform on behalf of their citizens,” Wenzler says, “raising the overall security posture and defense strength of the applications and services that billions of people use on a daily basis by informing companies of these flaws and having them fix these issues would help ensure the safety and security of those same citizens in a much more comprehensive way.”
The view of Ed Williams, director EMEA, at Trustwave’s SpiderLabs, suggests that perhaps there is too much concern over whether governments stockpile vulnerabilities. “From a purely cyber perspective, a zero-day should not compromise an entire organization,” he said. “Sufficient defense in depth should be employed to withstand an attack and sufficient protective monitoring should be in place to detect such attacks and compromises; sadly, we know this isn’t the reality. We look at WannaCry as an example, this attack marauded around the Internet two months after the original patch was released and organizations that were compromised had very little defense in depth.”
The implication is clear. Business already has to defend its systems against zero-day vulnerabilities and zero-day exploits, and this must include government zero-days. Whether governments disclose or retain vulnerabilities doesn't change this reality.
As it stands, neither private industry nor the general public has any direct input into whether the intelligence agencies stockpile vulnerabilities (and cyber weapons), nor into the process by which decisions to retain or disclose them are made. Both the U.S. and UK governments have now published their formal equities processes. This limited transparency is welcome, but neither process has any realistically independent oversight.
Sean Sullivan, security advisor at F-Secure (a company focused on detecting and stopping exploits and malware), doesn’t believe that simply publishing the decision process goes far enough. “I think there should be transparency on the number of disclosures made and the number of secrets kept,” he told SecurityWeek. “The public doesn’t need to know the details – but understanding the bigger picture (such as the percentage of vulnerabilities kept) would help maintain the default to disclose.” But even here, he adds, “Fixing it is ultimately better than using it.”
In his blog on the UK equities process, the NCSC’s Levy states, “We’ve also asked the Investigatory Powers Commissioner’s Office (IPCO), who oversees the use of statutory powers by GCHQ, to provide oversight of the process we run to make sure we’re running the process properly. We think that provides world class assurance around this bit of our work.”
However, the Investigatory Powers Act (which establishes the IPCO) states it must not “jeopardise the success of an intelligence or security operation or a law enforcement operation”, which effectively eliminates any active involvement in the equities process.
As things stand in both the U.S. and UK, the intelligence agencies stockpile and use vulnerabilities at their own discretion. This is unlikely to change. The processes by which the decision to retain or disclose a vulnerability is made have been published. We cannot change this, but maybe we need to talk about it.
“We need to develop treaties and laws that will govern civilized behavior: what’s OK and what’s not OK,” comments Sam Curry, CSO at Cybereason. “And when we’ve done that we can judge if people are doing things that are too dangerous or illegal. In the meantime, we should at least discuss the life cycle of these weapons in public debate, which is our right and perhaps even our duty in a pluralist, open democracy. How long should these be kept for? How should they be disclosed eventually? How and when should they be employed? These are all still in the murky regions that haven’t been disclosed.”
He adds, "Kudos to the NCSC for saying what's happening, because that's good for democracy – so let's now talk about what we should do with these new weapons."