On August 15, 2014, the Washington Post published an article by Barton Gellman revealing that a modified version of a network defense tool created by CloudShield Technologies was likely used by an unknown number of intelligence agencies outside the United States for offensive cyber and domestic surveillance operations.
The article is a companion piece to a report published by Citizen Lab. Citizen Lab describes itself as “an interdisciplinary laboratory based at the Munk School of Global Affairs, University of Toronto, Canada focusing on advanced research and development at the intersection of Information and Communication Technologies (ICTs), human rights, and global security.” Taken together, the two seem to provide conclusive evidence.
The question, however, is “Conclusive evidence of WHAT?”
• That CloudShield created an effective product that, like virtually all technologies, is dual-use?
• That there are governments that engage in covert domestic surveillance to monitor dissident communications and activities?
• That countries without the economic base necessary to support a ten-billion-dollar annual budget for signals intelligence (SIGINT) activities will look to acquire credible and effective SIGINT capabilities on the cheap?
By all accounts, CloudShield has devoted significant research and resources to developing some of the most effective and sophisticated cyber defense technology available. It is in use today, as the Washington Post article states, by the US Department of Defense to protect its networks, which are under continual cyber assault.
There is no evidence to show that the company established a line of business supporting the exploitation of its products for any purpose, much less for the benefit of repressive regimes.
There is no shortage of repressive governments in the world today. Freedom House, a Washington, D.C. think tank, estimates that two billion people are living under oppressive rule, and categorizes forty-seven countries as “Not Free.” Among the characteristics shared by these countries is the pervasive use of surveillance mechanisms to monitor and control their populations. These mechanisms span the technical spectrum – from paid human informants to sophisticated software surreptitiously evaluating every communication sent across a targeted network.
Francis Bacon’s famous sixteenth-century maxim, ipsa scientia potestas est (“for knowledge itself is power”), is no less true now than it was then. In the still-young 21st century, the political and economic fates of empires have risen, shifted and fallen due to the power of knowledge. As a result, nations lavish inordinate amounts of blood and treasure on pursuing an information advantage that will enable them to ride, or even steer, the winds of fate. Rich nations create organizations and develop technologies to gain and maintain this advantage. Less wealthy nations capitalize on those research investments to realize their own effective, if not entirely comparable, information acquisition capabilities.
Instead, what Gellman’s article and Citizen Lab’s report conclusively prove is that the tools of security, regardless of their degree of sophistication, remain inherently “dual-use.” That is, they can be used either for protective, defensive purposes or offensively, to mount an attack. Examples are virtually limitless:
• Locks can be used to secure a building and to safeguard valuables against theft, or they can seal the door on a prison cell.
• The same encryption technology that stimulates economic growth and the generation of wealth by enabling secure electronic commerce transactions can also deny anonymity to political dissidents and whistleblowers.
• A firearm can be used to defend a home against an intruder or wielded criminally to harm an innocent.
• Legislation and norms that protect an individual’s right to privacy also exert a chilling effect on the network hygiene practices of internet service providers, creating conditions conducive to distributed denial of service attacks.
As Gellman and Citizen Lab point out, a sophisticated IP packet analysis tool designed and intended to protect networks against malicious software attacks can form the basis of an appliance that automates network injection attacks. Such attacks imperceptibly mutate unencrypted internet data streams, allowing the attacker to infect a large number of target systems merely by waiting for users to view a video. To put that in proper perspective, the YouTube video “The Ultimate Fails Compilation” has, as of August 28, 2014, been viewed 135,335,954 times. A network injection attack using that particular video as an attack vector would have created an infection base larger than the population of Japan. There are only nine countries on the planet with larger populations.
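The feasibility of such injection rests on a simple property of unencrypted traffic: the recipient has no way to detect in-transit modification. A minimal, standard-library-only Python sketch (the payload, key, and script name are invented for illustration) contrasts a bare byte stream with one protected by an integrity check, using an HMAC tag as a rough stand-in for the record authentication that TLS provides:

```python
import hmac
import hashlib

def tamper(payload: bytes) -> bytes:
    # An on-path attacker appends injected content to the response body.
    return payload + b"<script src='implant.js'></script>"

# 1. Plain, unauthenticated HTTP: the client just receives bytes, so the
#    injected content is indistinguishable from the original page.
original = b"<html><body>cat video</body></html>"
received = tamper(original)
assert received != original  # modified in transit, but undetectable as such

# 2. An integrity-protected stream: the sender attaches an HMAC tag
#    computed under a key the attacker does not hold.
key = b"session-key"
tag = hmac.new(key, original, hashlib.sha256).digest()

def verify(payload: bytes) -> bool:
    # The client recomputes the tag and compares in constant time.
    return hmac.compare_digest(
        hmac.new(key, payload, hashlib.sha256).digest(), tag)

print(verify(tamper(original)))  # False: injection breaks verification
print(verify(original))          # True:  untampered stream verifies
```

The point of the sketch is not the specific primitive but the asymmetry it illustrates: against an unauthenticated stream, injection is silent; against an authenticated one, the same modification is immediately visible to the recipient.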
As technology becomes more advanced and nuanced, the distinction between mechanisms that buttress defensive security operations and those that enable the offense, supporting attacks and exploits, will blur beyond recognition. This raises a fundamental question: How does a society dependent on technology take meaningful steps to ensure that it is used for beneficial rather than harmful purposes?
The answer, like the technology in question, is both simple and nuanced: as a society we must substantially improve the ethics education required of aspiring technologists while holding practicing professionals accountable to the standards that education instills. The concept isn’t new. Enforceable codes of ethics and conduct exist in professional disciplines including law, medicine and accounting. Demonstrable failures to adhere to these standards can have significant consequences for both individuals and organizations. These codes, and the standards they embody, represent practitioners’ understanding of the tremendous trust placed in them by the general public.
When computers were relatively rare standalone entities and information was shared via floppy disks and sneakernet, the level of trust placed in technologists was correspondingly low. Those times are long gone. The public now places enormous faith in technology and in those who create it, and there isn’t much choice. We rely on our computers, tablets and smartphones to communicate, study, work, play, bank and carry out myriad other daily activities. Our appliances are controlled by small computers, and increasingly connected to networks. There’s no way around it: technology is an essential element of our lives. Moreover, most people interact far more frequently with technology than with doctors, attorneys or accountants.
While the magnitude of trust placed in technology is very significant, its dual nature is equally important. Not only is technology explicitly expected to safely and reliably perform its advertised services, but it’s implicitly expected to not operate in a manner that is detrimental to concepts of privacy and security. That is to say, while we trust a smartphone’s designers to provide a device that will allow us to place long-distance calls, send and receive email, text and chat messages, and provide navigation services, we also trust them not to include features that allow the cellular carrier, the manufacturer or third parties to remotely turn on the device’s camera or microphone and spy on us. Most people have no way of objectively verifying whether this is or is not the case.
Technology itself is ethically neutral: neither good nor bad. Hardware and software cannot make moral choices; only wetware can do that. Given the level of technological penetration into society, and the tremendous scale of both the positive and negative outcomes that can result from its use or abuse, the time has come for the technology professions to demonstrate ethical maturity and adopt standards of ethical conduct to which we hold ourselves and our peers accountable.