
Artificial Intelligence in Cybersecurity is Not Delivering on its Promise

The Cybersecurity Industry Doesn’t Have Artificial Intelligence Right Yet, But it is Promising Technology

The application of artificial intelligence (AI) via machine learning (ML) is the fastest growing area of cybersecurity. We are told that ML-enhanced products produce results faster and more accurately than human operators can, and that this can translate into cost savings by reducing the number of analysts required.

What has been largely missing from this assertion is independent verification that the theoretical benefits promoted by ML vendors translate to actual benefits in use.

ProtectWise, a network detection and response firm that itself employs ML, commissioned Osterman Research to gauge enterprise users’ reaction to the real-life performance of machine learning. Osterman surveyed (PDF) more than 400 individuals with knowledge of their company’s security operations, all at companies with more than 1,000 employees. The result is not favorable for vendors that base their marketing on the use of AI or ML in their products.

(For the record, while there are technical differences between AI and ML, most people use the terms interchangeably. ProtectWise uses ‘AI’ throughout its analysis of the survey. However, Ramon Peypoch, chief product officer at ProtectWise, confirmed to SecurityWeek that the survey respondents most probably did not separate AI and ML in their responses. For that reason, and because it is the more common term in cybersecurity products, we will use the descriptor ‘ML’.)

The use of ML-enhanced products is well-established. Seventy-three percent of the respondents have already implemented ML-enhanced security products. In general, then, their answers are based on actual experience.

Interestingly, the ML advocates within enterprises tend to be executives (55% IT executives and 38% non-IT executives) rather than the security professionals who will implement and use the products. More directly driven by bottom-line budget considerations, such executives might be more susceptible to marketing claims. Certainly, the survey shows that interest in ML is heavily driven by the prospect of improved efficiency and triaging, making the likelihood of lower staff requirements an apparent benefit.

However, despite the current use and continuing interest in ML, actual experience post-deployment is not so positive. Forty-six percent of the respondents complain that rules creation and implementation is ‘burdensome’. And post-implementation, the results are not as promising as the hype.

Sixty-one percent of the respondents believe that their ML systems do not stop zero-days and advanced threats — despite this being one of the primary claims from many vendors. 

For a more finely grained analysis, the study separates respondents into two groups: those with less than 10% of their deployed products employing ML (group one), and those with more than 11% doing so (group two). The primary criticism of ML is clearly confirmed: ML produces a high number of false alerts. Thirty percent of group one respondents experience more than 50% false positives. The figure rises among group two respondents, 43.8% of whom get more than 50% false positives. The clear implication is that the more ML products you use, the more false positives you will get.

ML offers configuration possibilities that can decrease the risk of false positives. Unfortunately, this comes with the risk of increasing false negatives — and the respondents already believe that ML is not as good as promised at detecting zero-days and advanced threats. Finding the right balance is difficult.
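To make that balancing act concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. The synthetic data, model, and threshold values are all invented for illustration and have nothing to do with any vendor’s product: it simply shows that raising the alert threshold on a scored detector suppresses false positives while letting more genuine threats slip through.

```python
# Illustrative sketch only: a toy anomaly classifier showing how the alert
# threshold trades false positives against false negatives. The data, model,
# and thresholds are hypothetical, not any vendor's actual mechanism.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "network events": roughly 5% are labeled malicious.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability an event is malicious

for threshold in (0.1, 0.3, 0.5, 0.7, 0.9):
    alerts = scores >= threshold
    false_positives = np.sum(alerts & (y_test == 0))   # benign events flagged
    false_negatives = np.sum(~alerts & (y_test == 1))  # real threats missed
    print(f"threshold={threshold:.1f}  "
          f"false positives={false_positives:5d}  "
          f"false negatives={false_negatives:4d}")
```

Sweeping the threshold in this toy setup shows that no single setting drives both error types to zero, which is the dilemma the respondents describe.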

The ProtectWise conclusion is not that ML doesn’t work, but that the vendor industry hasn’t got it right yet. It should perhaps still be seen as a promising technology rather than a false promise. “While the full potential of AI has yet to be realized,” it suggests, “it holds the promise of seriously addressing the cybersecurity skills shortage — it may not be a ‘silver bullet’, but it may be a silver-plated one.”

“The onus,” Peypoch told SecurityWeek, “is clearly on vendors to improve the delivery of better results from their ML-enhanced products.” This survey tells them where they must improve: ease of use, better detection of advanced threats, and fewer false positives. Current implementations of ML can actually increase staff requirements because of the difficulty of use and the high number of false positives. That equation must be reversed before the industry can claim to be delivering on its promise.

ProtectWise was founded in 2013 by Gene Stevens (CTO) and Scott Chasin (CEO). It is headquartered in Denver, CO, and it has raised more than $78 million to date.

*Updated to clarify that ProtectWise has raised over $78 million

Related: Winning the Cyber Arms Race with Machine Learning 

Related: It’s Time For Machine Learning to Prove Its Own Hype 

Related: Demystifying Machine Learning: Turning Buzzword Into Benefits

Related: The Malicious Use of Artificial Intelligence in Cybersecurity

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
