As humans, we all have biases that threaten to overrule sensible decision making. The list of cognitive biases on Wikipedia numbers well over 150.
Some are familiar, such as the “Bandwagon Effect” – the tendency to do or believe things because many other people do or believe the same, or “Confirmation Bias” – the tendency to search for, interpret, focus on and remember information in a way that confirms one’s preconceptions.
Others are amusing, such as the “IKEA Effect” – the tendency for people to place a disproportionately high value on objects that they partially assembled themselves, regardless of the quality of the end result. We can see some of ourselves in that, both at home and in the projects we work on in the corporate world.
The challenge in information security is recognizing how these biases affect our judgment when we evaluate and respond to threats, and then taking deliberate steps to mitigate their influence.
An aviation illustration
On August 24, 2001, Air Transat Flight 236, an Airbus A330 en route from Toronto, Canada to Lisbon, Portugal, ran out of fuel over the Atlantic Ocean and lost all engine power. The pilots managed to divert and land in the Azores with only minor injuries to passengers – quite a feat considering they had to glide for 65 nautical miles.
Improper maintenance caused a fuel leak. The initial indications were low oil temperature and high oil pressure in the affected engine. Because there was no obvious connection between those readings and a fuel leak, the pilots dismissed them as false alarms, and a confirmation bias hardened as more information came in. Eventually the instruments indicated a fuel imbalance as the right-side tank emptied. The standard response to an imbalance is to transfer fuel from the left tank to the right – but the right tank was the one leaking, so the transfer ultimately drained the aircraft's entire fuel supply.
In information security, where there is no shortage of alarms, biases can play a similar role in contributing to overlooking the real threat.
Biases in information security
We have seen biases manifest in attacks such as the one against Sony in 2011, where a DDoS attack from Anonymous consumed the security team’s attention while the personal information of 100 million customers was being stolen. This is an example of attentional bias: the urgency of the alarms indicating a DDoS event distracted the team from the attack that carried the far greater reputational impact.
We’re all human and subject to bias. In the 2014 Global State of Information Security Survey, PwC found that 73% of North American executives surveyed believed their security programs were effective. Yet in that same year, the Ponemon Cost of Cyber Crime Study reported 138 successful attacks with economic impact across the 257 companies it studied. This is an example of optimism bias, and it can have a detrimental effect on managing risk.
Mitigating our biases
One approach that holds promise is security analytics. By applying machine learning to data (big or otherwise), it looks for spikes or deviations from normal patterns, signals that humans, because of their biases, might otherwise miss.
For example, if a privileged user’s credentials are obtained by an outside attacker and the attacker attempts to access sensitive files from an unknown machine or an abnormal geographic location, an alarm can be raised. Analytics could then review all of that user’s recent activity for risky behavior (such as data exfiltration) and, through integration with identity and access management, potentially revoke access temporarily while a security team reviews the incident.
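The scenario above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not how any particular product works: the event fields (user, machine, country, access hour) are hypothetical, and a real analytics system would build statistical or machine-learned baselines rather than the plain lookup tables used here.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    machine: str
    country: str
    hour: int  # hour of access, 0-23

class BaselineAnomalyDetector:
    """Flags events that deviate from a user's historical access pattern."""

    def __init__(self):
        self.machines = {}   # user -> set of machines seen before
        self.countries = {}  # user -> set of countries seen before
        self.hours = {}      # user -> Counter of access hours seen before

    def train(self, events):
        """Record each user's normal machines, locations, and hours."""
        for e in events:
            self.machines.setdefault(e.user, set()).add(e.machine)
            self.countries.setdefault(e.user, set()).add(e.country)
            self.hours.setdefault(e.user, Counter())[e.hour] += 1

    def score(self, event):
        """Return the reasons (possibly none) an event looks anomalous."""
        reasons = []
        if event.machine not in self.machines.get(event.user, set()):
            reasons.append("unknown machine")
        if event.country not in self.countries.get(event.user, set()):
            reasons.append("unusual geographic location")
        seen_hours = self.hours.get(event.user, Counter())
        if seen_hours and seen_hours[event.hour] == 0:
            reasons.append("abnormal access hour")
        return reasons

# Baseline: a user who normally works from one machine, in one country,
# during business hours. A 3 a.m. login from a new machine abroad trips
# all three checks, without a human having to notice the pattern.
history = [AccessEvent("alice", "wks-042", "CA", h) for h in (9, 10, 14, 16)]
detector = BaselineAnomalyDetector()
detector.train(history)
print(detector.score(AccessEvent("alice", "unknown-vm", "RU", 3)))
```

The key point is that the detector compares each event against recorded behavior rather than against an analyst's expectations, so it has no preconception to confirm; a downstream policy could then suspend access while the incident is reviewed.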
The algorithms themselves may be incomplete, but a partnership between IT security professionals and security analytics holds the promise of finding threats that are missed today. Unlike you and me, machines don’t have confirmation or optimism biases. I hear they’re not big fans of IKEA either.