Flash Crashes and Rogue Algorithms: The Case for “Securing” Artificial Intelligence

‘Security’ Must be Added to our Existing Ethical and Philosophic Concerns Over Artificial Intelligence and Algorithms

Algorithms run the world. That’s a bit far-fetched, but we’re getting there. They control what we see on Facebook, whether we get hired for a new job, whether we get a bank loan, whether a new piece of code is good or malicious, and whether we should buy or sell shares of stock or currencies. 

Algorithms are used for such purposes because they are good at making probabilistic projections based on past data with no human intervention and at machine speed — they are fast and cheap. But they are not infallible. They raise serious ethical and philosophic questions; and they have become the basis of fictional Armageddons.

One of their problems is that algorithmic projections are rarely questioned. Where a projection is the automatic trigger for further action, that action happens at blinding speed with little or no human oversight. Just after midnight in the UK on the morning of Friday, October 7, 2016, sterling plunged, losing 8% of its value in just eight minutes in what is known as a flash crash.

The cause is thought to be computer algorithms. The Guardian reported, “Kathleen Brooks, the research director at the financial betting firm City Index, said: ‘Apparently it was a rogue algorithm that triggered the sell off – These days some algos trade on the back of news sites, and even what is trending on social media sites such as Twitter…’” In this case, the trigger is thought to have been a report in the Financial Times quoting the French president, Francois Hollande, as saying that Britain would have to suffer for the Brexit vote in order to ensure EU unity. It was one too many negative inputs for the algorithm, and the automated response was to ‘sell sterling’.
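The mechanics are easy to picture. The Python sketch below shows the shape of such a news-driven trigger: a toy sentiment scorer, a rolling window of headlines, and a threshold below which the algorithm sells. Every name and number in it is hypothetical (real trading systems are vastly more sophisticated), but the failure mode is the same: one more negative headline tips the cumulative score past the threshold and the sell order fires.

```python
# Minimal sketch of a news-sentiment trading trigger of the kind Brooks
# describes. The scorer, words, and thresholds are hypothetical stand-ins.

NEGATIVE_THRESHOLD = -0.6   # average sentiment below this triggers a sell
WINDOW = 10                 # number of recent headlines considered

def headline_sentiment(headline: str) -> float:
    """Toy scorer: -1.0 (very negative) .. 0.0 (neutral)."""
    negative_words = {"suffer", "crash", "plunge", "brexit", "loss"}
    hits = sum(1 for w in headline.lower().split() if w in negative_words)
    return -min(hits / 3.0, 1.0)

def should_sell(recent_headlines: list[str]) -> bool:
    # Average sentiment over the window: one more negative story can tip
    # the score past the threshold, the "one too many negative inputs"
    # failure mode seen in the October 2016 crash.
    scores = [headline_sentiment(h) for h in recent_headlines[-WINDOW:]]
    return sum(scores) / len(scores) < NEGATIVE_THRESHOLD

headlines = ["Hollande: Britain must suffer for Brexit vote"] * 8
if should_sell(headlines):
    print("SELL GBP")   # a real system would fire orders in microseconds
```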

There have been other recent examples of flash crashes: in 2010, when the Dow Jones Index dropped more than 600 points in five minutes; and in 2013, when a forged Syrian Electronic Army tweet claiming Obama had been injured caused the Dow to briefly fall 143 points.

But we are only now at the beginning of the algorithm (or artificial intelligence) revolution. We cannot stop this revolution. More and more of our daily lives will be permitted or disallowed based on the probability scores output by algorithms; and the output of one algorithm could be taken as the input for others, triggering a domino effect.

If traders had been able to predict and control the October flash crash, they could have made billions of dollars in profit. Where money can be made, criminals will follow. It is time to add ‘security’ to our existing ethical and philosophic concerns over artificial intelligence in order to prevent criminal manipulation.

“At their core, algorithms are software programs or combinations of software programs based on very complicated sets of rules and variables,” Donal Byrne, CEO of Corvil, told SecurityWeek. “They interact with data and other software applications to carry out instructions; but as they are machines, they execute on the scale of microseconds.”

That’s the first problem. “Just as most software is still developed with security as an afterthought, or is deprioritized, the same is true for the software that includes AI capabilities,” explains Dr. Andrea Little Limbago, Chief Social Scientist at Endgame.

“A compiled software application is difficult (but not impossible) to change,” adds Byrne. But it would not be necessary to change the algorithm — an attacker simply needs to know how to manipulate the inputs. “Those software applications interact with each other in very complicated ways. If someone understands how the algorithm works, it can be manipulated in predictable ways. This means that even without changing the software itself, introducing specific input data can allow one to manipulate an algorithm towards a different outcome than expected.”
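A toy example makes the point. In the Python sketch below, a fixed and entirely hypothetical credit-scoring rule is never modified; an attacker who knows its weights simply games the one input that moves the score most.

```python
# Sketch of Byrne's point: the algorithm itself is untouched, but an
# attacker who knows its rules crafts inputs that steer the outcome.
# The scoring rule and its weights are entirely hypothetical.

def credit_score(income: float, debt: float, accounts_open: int) -> float:
    # Fixed, "compiled" decision logic; the attacker never modifies it.
    return 0.5 * (income / 1000) - 0.8 * (debt / 1000) + 2.0 * accounts_open

def approved(income: float, debt: float, accounts_open: int) -> bool:
    return credit_score(income, debt, accounts_open) >= 50

# An ordinary applicant is rejected.
print(approved(40_000, 30_000, 2))    # False (score 0)

# An attacker who knows the weights sees that opening cheap accounts
# moves the score far more than income does, and games that input.
print(approved(40_000, 30_000, 30))   # True (score 56): same code, manipulated input
```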

One defense against this is to use a separate system to monitor the algorithms’ output. These have been called ‘circuit breakers’. A circuit breaker is an ‘overseer’ algorithm or software that can pull the plug — stopping all or a specific portion of the action — whenever it sees anomalous conditions beyond a certain acceptable limit.
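In code, the idea is simple. The Python sketch below shows a minimal circuit breaker watching a price feed and halting when a move exceeds an acceptable limit; the threshold and halt mechanism are illustrative, not drawn from any real exchange.

```python
# Minimal circuit-breaker sketch: an overseer watches another algorithm's
# output stream and pulls the plug when a move exceeds an acceptable
# limit. Thresholds and prices are illustrative.

class CircuitBreaker:
    def __init__(self, max_drop_pct: float = 5.0):
        self.max_drop_pct = max_drop_pct
        self.halted = False
        self.reference_price = None

    def observe(self, price: float) -> None:
        if self.reference_price is None:
            self.reference_price = price   # first observation sets the baseline
            return
        drop = 100.0 * (self.reference_price - price) / self.reference_price
        if drop > self.max_drop_pct:
            self.halted = True             # "pull the plug" on this instrument

breaker = CircuitBreaker(max_drop_pct=5.0)
for price in [1.26, 1.25, 1.24, 1.15]:     # a sterling-style plunge
    breaker.observe(price)
    if breaker.halted:
        print(f"HALT: anomalous move at {price}")
        break
```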

But that won’t solve everything.

“If a hacktivist organization wished to target today’s financial markets, it would not be out of the question to masquerade as a legitimate trading organization in order to flood the market (through multiple venues) with bogus orders,” says Byrne. “This would likely trigger multiple circuit breakers and cause the markets to shut down for some period. Using a similar mechanism, it could be possible to target a specific company and attempt to mimic a flash crash on that stock. As all participants’ algorithms are triggered to sell or buy at specific thresholds, you just need to get the right initial stimulus to cause a chain reaction and then you will have an avalanche effect before anyone has a chance to react. Even though the circuit breaker overseers may have stopped the patient zero, the damage done by the large-scale disruption to markets would still be quite grave.”
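The avalanche Byrne describes can be reduced to a toy simulation. In the Python sketch below (all numbers illustrative), each participant's algorithm sells once the price falls below its own threshold, and each forced sale pushes the price down far enough to trigger the next: a small initial stimulus empties the whole book.

```python
# Toy simulation of a threshold-triggered selling cascade. Each
# participant's algorithm sells when the price crosses its threshold,
# and each sale knocks the price lower. All figures are illustrative.

price = 100.0
sell_thresholds = sorted([99.0, 98.5, 97.0, 96.0, 94.0], reverse=True)
price_impact_per_sale = 1.5   # how far each forced sale moves the price

# The attacker's initial stimulus: bogus orders push the price just
# below the highest threshold.
price -= 1.2
print(f"after initial stimulus: {price:.2f}")

for threshold in sell_thresholds:
    if price < threshold:
        price -= price_impact_per_sale   # this sale triggers the next seller
        print(f"threshold {threshold} hit -> price {price:.2f}")
```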

Tainting input to alter output is not a new idea. When Snowden revealed Prism and XKeyscore, Anonymous suggested that everyone should flood the communications channels with words designed to trigger attention, so that eavesdroppers would be unable to distinguish intelligence from noise. Variations on this could be used to disrupt trading floors.
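The arithmetic of such a flooding attack is stark. The back-of-envelope Python sketch below, using purely illustrative rates, shows how a keyword-based monitor's precision collapses when decoy trigger words swamp the channel.

```python
# Back-of-envelope sketch of the Anonymous tactic: flood a channel with
# decoy trigger words and a keyword-based monitor drowns in false
# positives. All rates are illustrative assumptions.

real_signals_per_day = 10      # genuinely interesting messages
decoys_per_day = 1_000_000     # deliberately injected trigger words
detector_hit_rate = 1.0        # the keyword filter flags all of them

flagged = detector_hit_rate * (real_signals_per_day + decoys_per_day)
precision = real_signals_per_day / flagged
print(f"precision: {precision:.6%}")   # ~0.001%: the signal is lost in noise
```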

“While speculative attacks have existed as long as there have been currencies,” explains Dr. Limbago, “these could be triggered not only by manipulating algorithms for trading, but also within forecasting models, for instance. It doesn’t have to be something like the flash crash, but could be anything that prompts society to lose faith in the currency or market. Speculative attacks have a profound psychological component to them, with a strong herd mentality that also could prompt great fluctuations. This could be done through other forms of automated and widespread information manipulation, such as through social media.”

There is no easy solution to the problem of rogue algorithms. Some degree of artificial intelligence is being built into everything. Consider the internet-connected light bulb. It may have the basic intelligence to recognize gloom and adjust its output accordingly. But if the algorithm was compromised during manufacture, an attacker might be able to throw a single switch to light up millions of bulbs — and the combined effect would create a sudden demand capable of disrupting the entire grid.
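The back-of-envelope arithmetic, sketched below in Python with purely illustrative figures, shows why grid operators would care: a few million bulbs switching on in the same instant looks like a mid-sized power plant's worth of demand appearing from nowhere.

```python
# Rough arithmetic behind the light-bulb scenario: a synchronized
# switch-on of compromised bulbs creates a step change in grid demand.
# Both figures are illustrative assumptions, not measurements.

bulbs = 10_000_000       # compromised bulbs responding to one trigger
watts_per_bulb = 10      # a typical LED bulb at full output

demand_spike_mw = bulbs * watts_per_bulb / 1_000_000
print(f"instant demand spike: {demand_spike_mw:.0f} MW")
# ~100 MW appearing in one instant, with no warning to grid operators.
```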

On larger systems the problem is more complex: we are using algorithms to develop future algorithms through machine learning. When these algorithms are used within large computer systems — not just trading floors but international corporations — no human being can monitor the volume and speed of the interactions between different parts of the overall system. But those interactions must be monitored to prevent the business equivalent of the financial flash crash. It has to be done by machine. The result is that we are already beginning to use algorithms to monitor the performance of algorithms generated by other algorithms. 

This is just the beginning of the algorithm-driven artificial intelligence revolution. It’s the beginning of what John Danaher calls an algocracy. “By gradually pushing human decision-makers off the loop, we risk creating a ‘black box society’. This is one in which many socially significant decisions are made by ‘black box AI’. That is: inputs are fed into the AI, outputs are then produced, but no one really knows what is going on inside. This would lead to an algocracy, a state of affairs in which much of our lives are governed by algorithms.” The same argument applies to business and finance.

What could go wrong?

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
