Disrupting Cybercriminal Strategy With AI and Automation

Organizations Need to be Skeptical When Looking at Any Vendor Claiming to Offer AI-based Security


Spending on cybersecurity solutions continues at a rapid pace. According to IDC, global spending on cybersecurity this year is predicted to grow by nearly 10% over 2018, topping $103 billion – with large organizations accounting for almost two-thirds of that outlay. Yet despite this investment, some predict the total cost of cybercrime could exceed $2 trillion by the end of 2019, which means the cost of criminal activity is currently outpacing security spend by roughly 20X.

This is the result of a system that has always been rigged in favor of the cybercriminal. It’s the classic scenario of organizations having to anticipate and block 100% of the attacks they will encounter, while cybercriminals only need to exploit a single misconfigured device or unpatched system to get in. The only difference now is, in the wake of global digital transformation, the stakes are much higher than ever before.

Three Critical Security Strategies

Of course, repeating the same behavior over and over and expecting different results is part of the problem. To win this war, you need to rethink your security strategy, and changing your security paradigm involves three basic approaches.

1. Start with Security. Rather than building a network and then overlaying security, start with security in mind. Security needs to flow seamlessly and enforce policies consistently across your distributed network – from your core network to the cloud, and from the OT network to your branch offices and mobile workers.

2. Exploit Cybercriminal Economics. Cybercriminals are subject to the same financial constraints as any organization: profitability requires keeping costs and overhead lower than revenue. This means most criminals prefer to target low-hanging fruit using known exploits, because developing new tools and zero-day attacks is expensive. You can eliminate a lot of risk by exercising good security hygiene, discovering and closing security gaps, centralizing visibility and control, adopting an integrated security framework built on interoperability, high performance, and deep integration, and segmenting the network to restrict or slow the lateral movement of malware looking for data to steal and devices to exploit.

3. Fight Fire with Fire. Business and cybercrime alike operate at digital speeds. Many cyber events are successful because they happen faster than security systems can respond. This is especially true if human intervention is required in any step of the process. Instead, critical events need to trigger an immediate response. Of course, automation can only respond to known threats. And while adding machine learning allows automated systems to better identify unusual or abnormal behavior and reduce false positives, the process is often slow.
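The "critical events trigger an immediate response" idea can be sketched as a simple automated triage rule: events matching known indicators of compromise are blocked at machine speed, while highly anomalous unknowns are quarantined for review. This is a minimal illustration, not any vendor's actual engine – the IP addresses, field names, and threshold are all hypothetical.

```python
# Hypothetical sketch of automated response: known threats are blocked
# instantly with no human in the loop; abnormal unknowns are quarantined.

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # illustrative IoC feed

def respond(event: dict) -> str:
    """Return the action taken for a single security event."""
    if event.get("src_ip") in KNOWN_BAD_IPS:
        # Known threat: automation can respond immediately.
        return "block"
    if event.get("anomaly_score", 0.0) > 0.9:
        # Unknown but highly abnormal: contain it and flag for review.
        return "quarantine"
    return "allow"

events = [
    {"src_ip": "203.0.113.7", "anomaly_score": 0.2},
    {"src_ip": "192.0.2.10", "anomaly_score": 0.95},
    {"src_ip": "192.0.2.11", "anomaly_score": 0.1},
]
actions = [respond(e) for e in events]
```

Note the limitation the paragraph describes: the first rule only ever matches threats that are already known, which is exactly where machine learning and AI come in.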


The Critical Need for Artificial Intelligence

Unlike automation and machine learning, AI attempts to replicate the analytical processes of human intelligence – not only enabling decision-making at machine speed but, over time, even beginning to predict and prevent security events before they occur. Of course, this technology is far more challenging to achieve, which is why organizations need to be skeptical when looking at any vendor claiming to offer AI-based security.

A true AI system requires an artificial neural network (ANN) combined with a deep-learning model, not only to accelerate data analysis and decision-making but also to let the network adapt and evolve as it encounters new information. Training such a system is an extensive process: it is carefully fed massive amounts of increasingly complex information so that it can not only identify patterns and develop problem-solving strategies, but also adjust those strategies when it encounters a new pattern.

Training an AI

One of the most critical lines of inquiry when examining a solution that claims to provide AI is about how it was trained. The AI community recommends that any AI solution undergo three stages of training: 

1. Supervised learning. This initial stage begins by feeding the AI system massive amounts of labeled data, where the characteristics of each data set are clearly identified and decisions are predictable. As an example, Fortinet’s AI development team leverages the data produced by our more than 200 FortiGuard Labs researchers, who currently log over 580,000 hours of research each year. In addition, it ingests data collected from devices and sensors deployed worldwide, including feeds from our threat intelligence sources. Ultimately, it is this level and volume of input that allows our AI systems to continually improve by expanding their set of recognizable patterns and responses.

2. Unsupervised learning. In this next phase, unlabeled data is slowly introduced, forcing the system to learn on its own as it starts to see and recognize new patterns. 

3. Reinforcement. This stage monitors the system’s performance with both familiar and unfamiliar files, “rewarding” the system for good results. Training cycles through these three learning strategies on an ongoing basis for months, or sometimes years, depending on the complexity of the problems the system needs to identify and resolve.
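The three stages above can be shown in miniature with a toy nearest-centroid classifier in plain Python. Everything here is illustrative – the one-dimensional "feature" values, labels, and update rules are invented for the sketch and bear no relation to any real training pipeline.

```python
# Toy illustration of the three training stages on 1-D feature values.

labeled = [(0.1, "benign"), (0.2, "benign"), (0.9, "malware"), (0.8, "malware")]
unlabeled = [0.15, 0.85, 0.95]
holdout = [(0.05, "benign"), (0.88, "malware")]

# Stage 1 -- supervised learning: build a centroid per label from
# clearly labeled data, where decisions are predictable.
sums, counts = {}, {}
for x, label in labeled:
    sums[label] = sums.get(label, 0.0) + x
    counts[label] = counts.get(label, 0) + 1
centroids = {label: sums[label] / counts[label] for label in sums}

def predict(x):
    """Classify a sample by its nearest centroid."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Stage 2 -- unsupervised learning: slowly introduce unlabeled data,
# letting the system self-label it and refine its own centroids.
for x in unlabeled:
    label = predict(x)
    centroids[label] = (centroids[label] + x) / 2  # simple running update

# Stage 3 -- reinforcement: score the system on familiar and unfamiliar
# files, "rewarding" correct results and penalizing mistakes.
reward = sum(1 if predict(x) == label else -1 for x, label in holdout)
```

In a real system these stages cycle continuously, with each pass refining the patterns learned in the previous one.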

Because of the recursive requirements of the learning process, any AI system that does not use all three of these learning models is incomplete. Each learning model helps refine results and improve accuracy.

And naturally, because the threat environment continues to evolve, AI training models cannot afford to be static. The system needs to be constantly infused with new models that branch off from existing information, based on new threats and techniques as well as new strategies for identification and resolution. Ongoing monitoring must also be applied, because an AI system’s effectiveness is only as good as the data it consumes. AI can be unintentionally poisoned with bad data, creating a bias that impacts its ability to make good decisions, or intentionally poisoned to miss certain types of threats.
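One routine guard against the poisoning risk described above is to screen incoming training data for statistical outliers before it reaches the model. A minimal sketch using a median-absolute-deviation filter (the data values and the `k` threshold are illustrative assumptions, and real defenses are far more involved):

```python
import statistics

def screen(samples, k=5.0):
    """Drop samples whose distance from the median exceeds k times the
    median absolute deviation (MAD) -- a crude, robust guard against
    obviously out-of-range values in a training feed."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples)
    return [x for x in samples if abs(x - med) <= k * mad]

feed = [0.50, 0.52, 0.49, 0.51, 0.50, 9.0]  # last value looks poisoned
clean = screen(feed)
```

MAD is used rather than the standard deviation because a single large poisoned value inflates the standard deviation enough to hide itself; the median-based statistic stays stable.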

AI Disrupts the Entire Cybercriminal Strategy

Many cybersecurity companies claim to have introduced AI capabilities into their solutions. But the reality is, most fall short of true AI because their underlying infrastructure is too small or their learning models are incomplete. Others refuse to divulge the methods that they use, which raises concerns about the reliability of their AI. These should be red flags for any organization looking to adopt an AI-based system.

Just as important, even if an AI system meets basic training and infrastructure requirements, it still needs to interoperate within the security environment you have in place. Intelligence in isolation is useless. The more threat intelligence is shared – whether from an external intelligence feed or the integrated security systems deployed across your distributed network – the more effective your AI-based defensive systems will become. 

But when done right, an AI-based system will give your organization an advantage over even the most sophisticated cybercriminals. It weaves security deep into your infrastructure, identifies and responds to the most advanced threats, and forces criminals to either go back to the drawing board, or more likely, look for a victim that doesn’t have such an impact on their bottom line.


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
