HiddenLayer Emerges From Stealth With $6 Million to Protect AI Learning Models

Startup raises $6M to develop machine learning detection and response (MLDR) platform

HiddenLayer is designed to protect the AI machine learning models that protect companies from attackers.

Artificial intelligence (AI) is increasingly used in cybersecurity products, but it remains a new technology. While it is used to help protect customers’ systems, little yet protects the AI itself. HiddenLayer has emerged from stealth with $6 million in seed funding to protect machine learning models: it is the first of what may become a new breed of machine learning detection and response (MLDR) platforms.

Adversaries are not simply yielding ground to AI defenses – they are increasingly developing techniques to attack the AI itself, nullifying the defense and perhaps turning it against the user company.

HiddenLayer was founded by Chris Sestito (CEO), Tanner Burns, and James Ballard (CTO). Sestito and Burns both have a background at Cylance (one of the earliest producers of AI-based security). “We were building machine learning models at Cylance to detect malicious threats,” Sestito told SecurityWeek. “Such models are a prime example of a target that adversarial machine learning techniques could be used against, because once you can bypass that model, you can bypass the cybersecurity product altogether.”

If you can subvert the machine learning provided by company X, you can potentially evade detection at all of X’s customers. It was a lesson learned at Cylance: companies unknowingly create vulnerabilities in their machine learning models for which there are no known commercially available security controls.

“We led the relief effort after [the] machine learning model was attacked directly through [the Cylance] product and realized this would be an enormous problem for any organization deploying ML models in their products,” said Sestito. “We decided to found HiddenLayer to both educate enterprises about this significant threat and help them defend against it.”

There are four primary types of attack against ML models that HiddenLayer can detect: inference, data poisoning, extraction, and evasion.


“Inference,” said Sestito, “is the process of using the input and output to a model to learn how the model makes its decisions. This can lead to threat actors understanding intellectual property, tampering with the model, and ultimately impacting critical business functions.”
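The probing Sestito describes can be illustrated with a toy sketch (this is a hypothetical example for illustration, not HiddenLayer's technology or any real product's logic): by varying one input feature at a time and observing the output, an attacker learns which feature a black-box model actually relies on.

```python
# Toy sketch of an inference attack: probe a black-box model's
# input/output behavior to learn how it makes decisions.
# The model and feature names here are invented for illustration.

def credit_model(income, age):
    """The victim's hidden logic: only income matters."""
    return "approve" if income > 50_000 else "deny"

baseline = credit_model(40_000, 30)

# Vary each feature independently against the baseline input.
income_matters = credit_model(90_000, 30) != baseline
age_matters = credit_model(40_000, 70) != baseline

print(income_matters, age_matters)  # True False: decisions hinge on income alone
```

Even this crude probing reveals something about the model's intellectual property, which is why Sestito ties inference attacks to IP exposure.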

Data poisoning is the process of interfering with the data used for learning, with the intention of making the model act differently than it should. “This can allow threat actors to create blind spots in the model to get a desired outcome,” he explained.
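A minimal sketch of the blind-spot effect Sestito describes (a toy 1-D classifier invented for illustration, unrelated to HiddenLayer's products): flipping the labels of a few training points shifts the learned decision boundary, so inputs near the boundary are misclassified.

```python
# Toy sketch of training-data poisoning: a 1-D classifier learns the
# midpoint between the two class means; flipping a few labels moves it.

def train_threshold(samples):
    """Learn the midpoint between the class-0 and class-1 means."""
    lo = [x for x, y in samples if y == 0]
    hi = [x for x, y in samples if y == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

clean = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
t_clean = train_threshold(clean)      # midpoint of 2.0 and 8.0 -> 5.0

# Attacker flips the labels of two low points to class 1.
poisoned = [(x, 1 if x in (2.0, 3.0) else y) for x, y in clean]
t_poisoned = train_threshold(poisoned)

print(t_clean, t_poisoned)
# The poisoned threshold drops, so inputs around 4 now land in class 1:
# a blind spot the attacker engineered into the model.
```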

Extraction is an advanced inference attack in which the attacker steals private data from the model, or a full copy of the model itself, and can then attack it in their own environment.
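The core of an extraction attack can be sketched in a few lines (a toy black-box model invented for illustration; real extraction targets far more complex models): the attacker only needs query access to fit a surrogate that replicates the victim's behavior.

```python
# Toy sketch of model extraction: sample the black box's input/output
# pairs, recover its hidden parameter, and build a local copy.

def black_box(x):
    """The victim's secret model: fires above a hidden cutoff."""
    return 1 if x > 4.2 else 0

# Attacker queries a grid of inputs and records the responses.
queries = [i / 10 for i in range(100)]
responses = [black_box(x) for x in queries]

# Recover the cutoff: midpoint between the last 0 and the first 1.
last_zero = max(x for x, y in zip(queries, responses) if y == 0)
first_one = min(x for x, y in zip(queries, responses) if y == 1)
stolen_cutoff = (last_zero + first_one) / 2

def surrogate(x):
    """The attacker's local copy, usable in their own environment."""
    return 1 if x > stolen_cutoff else 0

print(all(surrogate(x) == black_box(x) for x in queries))  # True
```

Once the surrogate exists, the attacker can probe it offline at will, which is why extraction is such a useful stepping stone to the other attacks.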

“Evasion,” said Sestito, “is a form of inference attack where the attacker learns how to bypass the intended use of the model.”
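Evasion can be sketched with a toy spam filter (an invented example, not any vendor's detection logic): the attacker perturbs an input just enough to cross the model's decision boundary while keeping the malicious payload intact.

```python
# Toy sketch of an evasion attack against a word-ratio spam filter.

SPAM_WORDS = {"free", "winner", "prize"}

def spam_score(words):
    """Toy model: fraction of words that are known spam markers."""
    return sum(w in SPAM_WORDS for w in words) / len(words)

def is_spam(words, threshold=0.3):
    return spam_score(words) >= threshold

message = ["free", "prize", "click", "now"]   # score 0.5 -> flagged
assert is_spam(message)

# Evasion: pad the message with benign words until the score drops
# below the threshold, without removing the spam payload.
evasive = message + ["meeting", "agenda", "schedule", "notes"]
print(is_spam(evasive))  # False: same payload, no longer flagged
```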

HiddenLayer uses a machine learning approach to defend machine learning. It analyzes billions of model interactions per minute to identify malicious activity without requiring access to or prior knowledge of the user’s ML model or sensitive training data. It detects and responds to attacks against ML models to protect intellectual property and trade secrets from theft or tampering and ensure users are not exposed to attacks.
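The general MLDR idea of watching model interactions rather than the model itself can be illustrated with a simple sketch. This is strictly a conceptual toy, not HiddenLayer's actual detection logic: it flags a client whose queries sweep the input space in tiny, regular steps, a pattern typical of automated extraction probing.

```python
# Toy sketch of query-stream monitoring: flag clients whose consecutive
# queries form a long, fine-grained sweep of the input space.
# Thresholds and client data are invented for illustration.

def looks_like_probing(queries, max_step=0.05, min_run=20):
    """Return True if the stream contains a long run of tiny forward steps."""
    run = 1
    for a, b in zip(queries, queries[1:]):
        if 0 < b - a <= max_step:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 1
    return False

normal_client = [0.7, 3.1, 1.4, 2.8, 0.2, 4.4]   # scattered, organic queries
scanner = [i / 100 for i in range(50)]            # 0.00, 0.01, ... grid sweep

print(looks_like_probing(normal_client))  # False
print(looks_like_probing(scanner))        # True
```

Monitoring the interaction stream this way requires no access to the model's internals or training data, which matches the access model the article describes.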

Because it analyzes only the model’s interactions, HiddenLayer doesn’t know, and doesn’t need to know, the source of the data or the purpose of the final AI system. It isn’t involved in the ethical issues of artificial intelligence – but Sestito has his personal views. “Provided the source of the data used for ML training is ethically and legally obtained, the purpose of the AI will almost certainly be good and beneficial,” he told SecurityWeek. The implication is that ethics in AI should be focused on the collection of data, not its use.

“Machine learning algorithms are rapidly becoming a vital and differentiating aspect of more and more of the technology products we depend on every day,” said Todd Weber of Ten Eleven Ventures. “Protecting the algorithms at the very center of a company’s competitive advantage will become an essential part of a company’s cyber defenses – these algorithms will become the new ‘crown jewels’.” 

HiddenLayer was founded in March 2022. It is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures.

Related: Cyber Insights 2022: Adversarial AI

Related: Bias in Artificial Intelligence: Can AI be Trusted?

Related: Becoming Elon Musk – the Danger of Artificial Intelligence

Related: EU Proposes Rules for Artificial Intelligence to Limit Risks

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.
