Our modern world is strewn with cyber breaches, regional crises, political unrest, and dangerous threat actors – all at play against a backdrop of over-reliance on an Internet that was never designed to be the backbone of a global economy.
Cybersecurity is and always will be a human-based issue. Yet over the last 20 years, the primary approach for many organizations has been to purchase and implement defensive technologies and to react to security problems as they arose or were identified. A perception has persisted that if a firewall, IDS, IPS, and other supporting technologies were implemented, the organization could have a reasonable expectation of being “protected.”
However, the results – the avalanche of security breaches across industry and government – show that this is not a reasonable expectation in the current landscape.
While we must continue to use defensive technologies because they help address the level of white noise that has become part of the cost to operate in our hyper-connected, digitized world, we can’t stop there.
Shift from a Deterministic to a Probabilistic Approach
This traditional, defensive cybersecurity approach has largely been deterministic in nature, which is a fundamental flaw. We know cyber threats and breaches are probabilistic.
Humans are not inherently wired to think in a probabilistic manner, but we must constantly strive to broaden our views by thinking in probabilities. In a probabilistic model, it is acceptable to repeat the same thing and expect different results – which aligns far better with how cyber threats and events actually occur. The human brain is always trying to conserve energy and defaults to deterministic, black-and-white models. (Interestingly, the brain is only about 2% of body mass, yet it consumes roughly 20% of our energy.)
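The probabilistic framing can be made concrete with a short simulation. The sketch below (plain Python; the attempt count and per-attempt success probability are illustrative assumptions, not empirical figures) runs the identical “year” thousands of times and produces a distribution of outcomes rather than a single deterministic answer:

```python
import random

def simulate_year(n_attempts=250, p_success=0.01, rng=None):
    """One simulated year: each attack attempt independently succeeds
    with probability p_success. Returns the number of breaches.
    Both defaults are illustrative assumptions, not empirical figures."""
    rng = rng or random.Random()
    return sum(1 for _ in range(n_attempts) if rng.random() < p_success)

def breach_likelihood(trials=10_000, seed=42):
    """Estimate P(at least one breach in a year) by repeating the
    identical simulation many times. Same inputs, different outcomes:
    the probabilistic view."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if simulate_year(rng=rng) > 0)
    return hits / trials

print(f"Estimated chance of at least one breach per year: {breach_likelihood():.1%}")
```

Even with a seemingly small 1% per-attempt success rate, repetition drives the annual likelihood of at least one breach above 90% – the kind of result deterministic, single-scenario thinking tends to miss.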
A substantial body of research in psychology and decision-making is based on the premise that these cognitive limitations cause people to employ various simplification strategies and rules of thumb to make judgements and decisions and ease the burden of mentally processing information.
These concepts are not new to other, more mature risk assessment and management disciplines. But cybersecurity – which has historically been treated as an information technology issue – has yet to adopt a probabilistic approach.
Scale the Cost of Operating a Resilient Cyber Infrastructure
The very first reality that must be embraced is the cost of cybersecurity in the new digital frontier. The old method of allocating a small fraction of the overall IT budget to security has proven ineffective.
Even if cyber insurance is used as a risk-mitigation strategy, that market is changing rapidly too; in many cases it is not cost-effective, or sufficient coverage simply cannot be acquired. Cyber insurance is not designed to address operational resiliency.
The cost to operate and truly be resilient in the new digital landscape is most likely many times more than the average organization is spending today.
Since it is highly unlikely that most organizations will increase their cybersecurity budgets multiple times over, we must come up with new strategies that can be adopted by the masses and have the potential to improve resiliency of operations.
We need solutions that are practical, but also scale to the size of the threat.
Take the Next Step: Artificial Intelligence
To address the scale and velocity of the current cyber threat landscape, statistical modeling is an approach that will play a significant role in the future of cybersecurity. A statistical model is a function that predicts an outcome from one or more predictors (i.e., variables or features). While machine learning, artificial neural networks, and natural language processing are all good candidates for exploration, we’ll focus on machine learning here.
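To make that definition concrete, here is a minimal sketch of a statistical model with a single predictor: an ordinary least-squares line fit in plain Python. The data is hypothetical (daily phishing emails observed versus helpdesk tickets filed), chosen only to show a model as a function from predictor to outcome:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y ≈ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical data: daily phishing emails observed (predictor)
# versus helpdesk tickets filed (outcome).
xs = [10, 20, 30, 40, 50]
ys = [12, 24, 33, 41, 55]
a, b = fit_line(xs, ys)

def predict(x):
    """The fitted model: a function from predictor to outcome."""
    return a + b * x

print(predict(60))  # extrapolated ticket count at 60 phishing emails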
Machine learning opens the door to new insights without requiring explicitly programmed rules. This is critically important right now, since most organizations are unable to find enough qualified cybersecurity professionals to fill open roles.
We can create a model from underlying training data to predict outcomes, and performance on a given task improves over time with experience. This is increasingly important given a diverse, rapidly changing threat landscape and the limited budgets and staffing most organizations face. If you use Google Gmail or shop at amazon.com, you have already benefited from machine learning, in spam filtering and product recommendations.
Machine learning works for quantitative (numeric) and qualitative (groups) questions and problems, which makes it a good candidate for cybersecurity matters.
Machine learning can be broken down into two high-level categories:
Supervised learning allows for the creation of predictive models that can be used in cyber threat intelligence operations. The data is split into training and test sets: training data is used to fit the model, and test data is used to verify its accuracy. Training data can come from internal sources, such as your SIEM, and from external resources, such as cyber threat intelligence providers. Models such as regression, Naïve Bayes, and support vector machines (SVMs) fall into this category. Supervised machine learning can readily be implemented in the strategic and operational aspects of most cybersecurity programs.
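As a sketch of that supervised workflow – split the data, fit on the training portion, verify on the held-out portion – here is a tiny Bernoulli Naïve Bayes classifier in plain Python. The binary features and labels are invented for illustration and stand in for indicators you might derive from SIEM events:

```python
import math
from collections import defaultdict

# Hypothetical binary features per log event, e.g.
# [off_hours, foreign_ip, failed_auth], labeled 1 = malicious, 0 = benign.
train = [([1, 1, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 1),
         ([0, 0, 0], 0), ([0, 1, 0], 0), ([1, 0, 0], 0)]
test = [([1, 0, 1], 1), ([0, 0, 1], 0)]

def fit_bernoulli_nb(train):
    """Estimate class priors and per-feature Bernoulli likelihoods
    (with Laplace smoothing) from the training set."""
    counts = defaultdict(int)                      # class -> sample count
    ones = defaultdict(lambda: defaultdict(int))   # class -> feature -> ones
    for x, y in train:
        counts[y] += 1
        for i, v in enumerate(x):
            ones[y][i] += v
    n_feats = len(train[0][0])
    return {y: (math.log(c / len(train)),
                [(ones[y][i] + 1) / (c + 2) for i in range(n_feats)])
            for y, c in counts.items()}

def classify(model, x):
    """Pick the class with the highest log posterior."""
    return max(model, key=lambda y: model[y][0] + sum(
        math.log(p if v else 1 - p) for v, p in zip(x, model[y][1])))

model = fit_bernoulli_nb(train)
accuracy = sum(classify(model, x) == y for x, y in test) / len(test)
print(f"hold-out accuracy: {accuracy:.0%}")
```

The point is the shape of the process, not the toy numbers: the model is calculated only from the training set, and the held-out test set provides an honest check on accuracy.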
Unsupervised learning builds a model from input data without labeled outcomes; it is used to identify patterns. Once again, this can be very useful in cyber threat intelligence matters and can help automate, scale, and build on a proactive approach. Clustering algorithms such as k-means are the canonical example, and certain artificial neural network architectures (such as self-organizing maps and autoencoders) are unsupervised as well. (K-nearest neighbor, often listed alongside these, is actually a supervised method.)
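The clustering case can be sketched just as compactly. Below is a plain-Python k-means that groups hosts by two hypothetical behavioral features; no labels are supplied, and the structure emerges from the data alone:

```python
import math

def kmeans(points, k=2, iters=20):
    """Plain k-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat."""
    centroids = list(points[:k])  # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl))
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Hypothetical per-host features: (logins per hour, MB transferred).
# Two behavioral groups separate without any labeled examples.
points = [(2, 5), (3, 4), (2, 6), (40, 90), (42, 95), (38, 88)]
centroids, clusters = kmeans(points)
print([len(c) for c in clusters])
```

In practice the low-activity cluster might represent baseline behavior and the high-volume cluster the anomaly worth investigating – the kind of pattern-finding that lets an unsupervised approach scale without analysts labeling every event.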
Cybersecurity will continually evolve over time because the threats are forever changing. The time is now for fundamental changes in our thinking to help keep pace with the threat landscape and to transition to a more proactive and scalable approach that gets us in front of the challenges rather than reacting to them.