‘Security’ Must be Added to our Existing Ethical and Philosophic Concerns Over Artificial Intelligence and Algorithms
Algorithms run the world. That’s a bit far-fetched, but we’re getting there. They control what we see on Facebook, whether we get hired for a new job, whether we get a bank loan, whether a new piece of code is good or malicious, and whether we should buy or sell shares of stock or currencies.
Algorithms are used for such purposes because they are good at making probabilistic projections based on past data with no human intervention and at machine speed — they are fast and cheap. But they are not infallible. They raise serious ethical and philosophic questions; and they have become the basis of fictional Armageddons.
One of their problems is that algorithmic projections are rarely questioned. Where a projection automatically triggers further action, that action happens at blinding speed with little or no human oversight. Just after midnight in the UK on Friday morning, October 7, 2016, sterling plunged, losing 8% of its value in just eight minutes in what is known as a flash crash.
The cause is thought to be computer algorithms. The Guardian reported, “Kathleen Brooks, the research director at the financial betting firm City Index, said: ‘Apparently it was a rogue algorithm that triggered the sell off – These days some algos trade on the back of news sites, and even what is trending on social media sites such as Twitter…’” In this case, the trigger is thought to have been a report in the Financial Times quoting the French president, Francois Hollande, as saying that Britain would have to suffer for the Brexit vote in order to ensure EU unity. It was one too many negative inputs for the algorithm, and the automated response was to ‘sell sterling’.
There have been other recent examples of flash crashes: in 2010, when the Dow Jones Index dropped more than 600 points in 5 minutes; and in 2013, when a forged Syrian Electronic Army tweet claiming Obama had been injured caused the Dow to briefly fall 143 points.
But we are only now at the beginning of the algorithm (or artificial intelligence) revolution. We cannot stop this revolution. More and more of our daily lives will be permitted or disallowed based on the probability scores output by algorithms; and the output of one algorithm could be taken as the input for others, triggering a domino effect.
If traders had been able to predict and control the October flash crash, they could have made billions of dollars in profit. Where money can be made, criminals will follow. It is time to add 'security' to our existing ethical and philosophic concerns over artificial intelligence in order to prevent criminal manipulation.
“At their core, algorithms are software programs or combinations of software programs based on very complicated sets of rules and variables,” Donal Byrne, CEO of Corvil, told SecurityWeek. “They interact with data and other software applications to carry out instructions; but as they are machines, they execute on the scale of microseconds.”
That’s the first problem. “Just as most software is still developed with security as an afterthought, or is deprioritized, the same is true for the software that includes AI capabilities,” explains Dr. Andrea Little Limbago, Chief Social Scientist at Endgame.
“A compiled software application is difficult (but not impossible) to change,” adds Byrne. But it would not be necessary to change the algorithm — an attacker simply needs to know how to manipulate the inputs. “Those software applications interact with each other in very complicated ways. If someone understands how the algorithm works, it can be manipulated in predictable ways. This means that even without changing the software itself, introducing specific input data can allow one to manipulate an algorithm towards a different outcome than expected.”
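The input-manipulation risk Byrne describes can be illustrated with a toy example. The rule, threshold, and data below are entirely hypothetical — they stand in for any decision algorithm whose logic is known to an attacker: without touching the code, crafted inputs flip the outcome.

```python
# A hypothetical trading signal: sell when average headline sentiment
# drops below a fixed threshold. The rule and values are illustrative,
# not taken from any real trading system.

SELL_THRESHOLD = -0.3

def trading_signal(sentiment_scores):
    """Return 'SELL' if mean sentiment is below the threshold, else 'HOLD'."""
    avg = sum(sentiment_scores) / len(sentiment_scores)
    return "SELL" if avg < SELL_THRESHOLD else "HOLD"

# Organic news flow: mildly mixed sentiment, no action taken.
organic = [0.2, -0.1, 0.1, -0.2]
print(trading_signal(organic))    # HOLD

# An attacker who knows the rule injects fabricated negative items,
# pushing the average below the threshold without changing any code.
injected = organic + [-0.9, -0.9, -0.9, -0.9]
print(trading_signal(injected))   # SELL
```

The point is not the specific rule but the asymmetry: the algorithm is fixed and predictable, so whoever controls enough of its input controls its output.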
One defense against this is to use a separate system to monitor the algorithms’ output. These have been called ‘circuit breakers’. A circuit breaker is an ‘overseer’ algorithm or software that can pull the plug — stopping all or a specific portion of the action — whenever it sees anomalous conditions beyond a certain acceptable limit.
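A minimal circuit-breaker sketch might look like the following. The class, thresholds, and price feed are hypothetical; the idea is simply an overseer that watches an algorithm's output and halts activity once a move exceeds an acceptable limit.

```python
# A minimal circuit-breaker sketch (all values hypothetical): an
# overseer watches a price feed and halts trading when the drop from
# the reference price exceeds an acceptable percentage limit.

class CircuitBreaker:
    def __init__(self, max_drop_pct):
        self.max_drop_pct = max_drop_pct
        self.reference = None   # first observed price becomes the baseline
        self.halted = False

    def observe(self, price):
        if self.reference is None:
            self.reference = price
        drop_pct = (self.reference - price) / self.reference * 100
        if drop_pct > self.max_drop_pct:
            self.halted = True   # pull the plug on further trading
        return self.halted

breaker = CircuitBreaker(max_drop_pct=5.0)
for price in [1.26, 1.25, 1.24, 1.18]:   # a sterling-style plunge
    if breaker.observe(price):
        print(f"HALT at {price}")
        break
```

Real market circuit breakers are far more elaborate, with multiple tiers and timed trading pauses, but the structure is the same: a separate, simpler system with the authority to stop the faster one.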
But that won’t solve everything.
“If a hacktivist organization wished to target today’s financial markets, it would not be out of the question to masquerade as a legitimate trading organization in order to flood the market (through multiple venues) with bogus orders,” says Byrne. “This would likely trigger multiple circuit breakers and cause the markets to shut down for some period. Using a similar mechanism, it could be possible to target a specific company and attempt to mimic a flash crash on that stock. As all participants’ algorithms are triggered to sell or buy at specific thresholds, you just need to get the right initial stimulus to cause a chain reaction and then you will have an avalanche effect before anyone has a chance to react. Even though the circuit breaker overseers may have stopped the patient zero, the damage done by the large-scale disruption to markets would still be quite grave.”
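The avalanche effect Byrne describes — one stimulus tripping threshold after threshold — can be sketched in a few lines. Every number here is invented for illustration: ten algorithms with staggered sell thresholds, where each sale depresses the price enough to trigger the next.

```python
# A hypothetical cascade: each participant's algorithm sells once the
# price falls below its own threshold, and every sale pushes the price
# down further, potentially triggering the next participant.

def simulate_cascade(price, thresholds, impact_per_sale=0.02):
    """Apply one initial shock, then let threshold-triggered sales cascade."""
    sold = set()
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if i not in sold and price < t:
                sold.add(i)
                price -= impact_per_sale   # each sale depresses the price
                changed = True
    return price, len(sold)

# Ten algorithms with sell thresholds staggered 2 cents apart.
thresholds = [0.99 - 0.02 * i for i in range(10)]

# A small initial dip below the highest threshold sets off all ten.
final_price, sellers = simulate_cascade(0.98, thresholds)
print(sellers)   # 10 — every participant ends up selling
```

The initial shock is tiny relative to the final move; the damage comes from the interaction between the algorithms, which is exactly why halting "patient zero" is not enough.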
Tainting input to alter output is not a new idea. When Snowden revealed Prism and XKeyscore, Anonymous suggested everyone should flood the communications channels with words designed to trigger attention — literally so that eavesdroppers would not be able to distinguish intelligence from noise. Variations on this could be used to disrupt trading floors.
“While speculative attacks have existed as long as there have been currencies,” explains Dr Limbago, “these could be triggered not only by manipulating algorithms for trading, but also within forecasting models, for instance. It doesn’t have to be something like the flash crash, but could be anything that prompts society to lose faith in the currency or market. Speculative attacks have a profound psychological component to them, with a strong herd mentality that also could prompt great fluctuations. This could be done through other forms of automated and widespread information manipulation, such as through social media.”
There is no easy solution to the problem of rogue algorithms. Some degree of artificial intelligence is being built into everything. Consider the internet-connected light bulb. It may have the basic intelligence to recognize gloom and adjust its output accordingly. But if the algorithm was compromised during manufacture, an attacker might be able to throw a single switch to light up millions of bulbs — and the combined effect would create a sudden demand capable of disrupting the entire grid.
On larger systems the problem is more complex: we are using algorithms to develop future algorithms through machine learning. When these algorithms are used within large computer systems — not just trading floors but international corporations — no human being can monitor the volume and speed of the interactions between different parts of the overall system. But those interactions must be monitored to prevent the business equivalent of the financial flash crash. It has to be done by machine. The result is that we are already beginning to use algorithms to monitor the performance of algorithms generated by other algorithms.
This is just the beginning of the algorithm-driven artificial intelligence revolution. It’s the beginning of what John Danaher calls an algocracy. “By gradually pushing human decision-makers off the loop, we risk creating a ‘black box society’. This is one in which many socially significant decisions are made by ‘black box AI’. That is: inputs are fed into the AI, outputs are then produced, but no one really knows what is going on inside. This would lead to an algocracy, a state of affairs in which much of our lives are governed by algorithms.” The same argument applies to business and finance.
What could go wrong?