
HiddenLayer Emerges From Stealth With $6 Million to Protect AI Learning Models

Startup raises $6M to develop machine learning detection and response (MLDR) platform

HiddenLayer is designed to protect the AI machine learning models that protect companies from attackers.

Artificial intelligence (AI) is increasingly used in cybersecurity products, but it remains a young technology: while it helps protect customers’ systems, little yet protects the AI itself. HiddenLayer has emerged from stealth with $6 million in seed funding to protect machine learning models themselves; it is the first of what may become a new breed of machine learning detection and response (MLDR) platforms.

Adversaries are not simply yielding ground to AI defenses: they are increasingly developing techniques to attack those defenses directly, to nullify them and perhaps turn them against the user company.

HiddenLayer was founded by Chris Sestito (CEO), Tanner Burns, and James Ballard (CTO). Sestito and Burns both have a background at Cylance (one of the earliest producers of AI-based security). “We were building machine learning models at Cylance to detect malicious threats,” Sestito told SecurityWeek. “Such models are a prime example of a target that adversarial machine learning techniques could be used against, because once you can bypass that model, you can bypass the cybersecurity product altogether.”

If you can subvert the machine learning provided by company X, you can potentially evade detection across all of X’s customers. It was a lesson learned at Cylance: companies unknowingly create vulnerabilities in their machine learning models for which there are no known commercially available security controls.

“We led the relief effort after [the] machine learning model was attacked directly through [the Cylance] product and realized this would be an enormous problem for any organization deploying ML models in their products,” said Sestito. “We decided to found HiddenLayer to both educate enterprises about this significant threat and help them defend against it.”

There are four primary types of attack against ML models that HiddenLayer can detect: inference, data poisoning, extraction, and evasion.

“Inference,” said Sestito, “is the process of using the input and output to a model to learn how the model makes its decisions. This can lead to threat actors understanding intellectual property, tampering with the model, and ultimately impacting critical business functions.”
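
To make that concrete, here is a minimal sketch of boundary probing using a toy scikit-learn classifier as a stand-in victim. The model, data, and query helper are all hypothetical; this illustrates the general technique, not HiddenLayer’s or any vendor’s code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Stand-in victim model (hypothetical); the attacker has no internal access.
X_train = rng.randn(200, 2)
victim = LogisticRegression().fit(X_train, (X_train.sum(axis=1) > 0).astype(int))

def query(x):
    """The attacker's only capability: submit an input, observe the output."""
    return int(victim.predict(x.reshape(1, -1))[0])

# Systematically probe the input space and record the model's decisions.
grid = np.linspace(-3, 3, 50)
decisions = np.array([[query(np.array([a, b])) for b in grid] for a in grid])

# Where the observed label flips between neighboring probes traces the
# decision boundary: how the model decides, learned purely from I/O pairs.
flips = np.argwhere(np.diff(decisions, axis=1) != 0)
print(f"{len(flips)} boundary crossings mapped from {grid.size ** 2} queries")
```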

Data poisoning is the process of interfering with the data used for learning, with the intention of making the model act differently than it should. “This can allow threat actors to create blind spots in the model to get a desired outcome,” he explained.
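
A hypothetical label-flipping sketch illustrates the idea: by corrupting the labels of a small, targeted slice of training data, an attacker carves out exactly the kind of blind spot Sestito describes. The data and models here are toy examples, not any real product’s pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(1)

# Clean training data: class 0 around the origin, class 1 around (3, 3).
X = np.vstack([rng.randn(500, 2), rng.randn(500, 2) + 3])
y = np.array([0] * 500 + [1] * 500)

# Poisoning: flip the labels of class-1 samples in a region the attacker
# cares about, creating a targeted blind spot in the trained model.
blind_spot = (X[:, 0] > 3.5) & (y == 1)
y_poisoned = y.copy()
y_poisoned[blind_spot] = 0

clean_model = RandomForestClassifier(random_state=0).fit(X, y)
dirty_model = RandomForestClassifier(random_state=0).fit(X, y_poisoned)

# A sample deep inside the blind spot: the poisoned model likely waves it through.
sample = np.array([[4.5, 3.0]])
print("clean model   :", clean_model.predict(sample))  # expected: [1]
print("poisoned model:", dirty_model.predict(sample))  # expected: [0]
```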

Extraction is an advanced inference attack in which the attacker steals private data from the model, or a full copy of the model itself, and attacks it in their own environment.
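
One common extraction pattern is to harvest query/response pairs and train a local surrogate on them. The sketch below is a generic illustration under that assumption; the victim model and surrogate are stand-ins, not any particular product.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(2)

# Stand-in victim: a privately trained model the attacker can only query.
X_private = rng.randn(1000, 4)
y_private = (X_private.sum(axis=1) > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                       random_state=0).fit(X_private, y_private)

# Extraction: harvest (query, response) pairs, then fit a local surrogate.
queries = rng.randn(2000, 4)
responses = victim.predict(queries)
stolen_copy = DecisionTreeClassifier(random_state=0).fit(queries, responses)

# The surrogate mimics the victim and can now be attacked offline at leisure.
test = rng.randn(500, 4)
agreement = (stolen_copy.predict(test) == victim.predict(test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of test inputs")
```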

“Evasion,” said Sestito, “is a form of inference attack where the attacker learns how to bypass the intended use of the model.”
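
A toy illustration of the pattern: starting from an input the model flags, an attacker who can observe the model’s scores keeps any small random perturbation that lowers the score until the input slips past. This is a generic hill-climbing evasion sketch, not a description of any specific attack tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(3)

# Stand-in detector: class 1 = "malicious", class 0 = "benign".
X = np.vstack([rng.randn(300, 5), rng.randn(300, 5) + 2])
y = np.array([0] * 300 + [1] * 300)
detector = LogisticRegression().fit(X, y)

def malicious_score(x):
    """Probability the detector assigns to the 'malicious' class."""
    return detector.predict_proba(x.reshape(1, -1))[0, 1]

# Evasion: start from an input the detector flags, then keep any small
# random perturbation that lowers the score, until the input slips past.
x = rng.randn(5) + 2
for _ in range(10_000):
    if malicious_score(x) < 0.5:
        break
    candidate = x + rng.randn(5) * 0.1
    if malicious_score(candidate) < malicious_score(x):
        x = candidate

print("final malicious score:", round(malicious_score(x), 3))
```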

HiddenLayer uses a machine learning approach to defend machine learning. It analyzes billions of model interactions per minute to identify malicious activity without requiring access to or prior knowledge of the user’s ML model or sensitive training data. It detects and responds to attacks against ML models to protect intellectual property and trade secrets from theft or tampering and ensure users are not exposed to attacks.
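
HiddenLayer has not published its detection internals, but the general pattern it describes, watching the stream of queries a model receives for statistically anomalous behavior rather than inspecting the model itself, can be sketched with an off-the-shelf anomaly detector. Everything below (the traffic, the monitor, the score_queries helper) is hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(4)

# Baseline: the kind of inputs the protected model normally receives.
normal_traffic = rng.randn(5000, 8)

# An anomaly monitor fitted on ordinary query traffic alone: no access to
# the protected model's internals or training data is required.
monitor = IsolationForest(random_state=0).fit(normal_traffic)

def score_queries(batch):
    """Flag query batches that look like systematic probing (grid sweeps,
    repeated near-duplicates) rather than ordinary production traffic."""
    return monitor.predict(batch)  # -1 = anomalous, 1 = normal

# A grid sweep typical of inference/extraction attacks stands out sharply.
probe_batch = np.tile(np.linspace(-6, 6, 8), (50, 1))
print("share flagged as anomalous:", (score_queries(probe_batch) == -1).mean())
```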

Because it analyzes only the model’s interactions, HiddenLayer neither knows nor needs to know the source of the training data or the purpose of the final AI system. It is not involved in the ethical issues of artificial intelligence, but Sestito has his personal views. “Provided the source of the data used for ML training is ethically and legally obtained, the purpose of the AI will almost certainly be good and beneficial,” he told SecurityWeek. The implication is that ethics in AI should focus on the collection of data, not its use.

“Machine learning algorithms are rapidly becoming a vital and differentiating aspect of more and more of the technology products we depend on every day,” said Todd Weber of Ten Eleven Ventures. “Protecting the algorithms at the very center of a company’s competitive advantage will become an essential part of a company’s cyber defenses – these algorithms will become the new ‘crown jewels’.” 

HiddenLayer was founded in March 2022. It is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures.

Related: Cyber Insights 2022: Adversarial AI

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: EU Proposes Rules for Artificial Intelligence to Limit Risks
