Texas startup HiddenLayer has bagged a hefty $50 million in new venture capital funding as investors continue to pour money into technologies that protect the data and code flowing in and out of AI systems and LLM training sets.
HiddenLayer, which emerged from stealth in July 2022 with $6 million in funding, said the latest financing was led by M12, Microsoft’s Venture Fund, and Moore Strategic Ventures.
The Austin company also took on equity investments from Booz Allen Ventures, IBM Ventures, Capital One Ventures, and Ten Eleven Ventures.
HiddenLayer, winner of the ‘Most Innovative Startup’ prize at this year’s RSA Innovation Sandbox, is pitching a future that includes MLDR (machine learning detection and response) platforms monitoring the inputs and outputs of your machine learning algorithms for anomalous activity consistent with adversarial ML attack techniques.
HiddenLayer says it is building a Machine Learning Security (MLSec) Platform with tooling to protect ML models against adversarial attacks, vulnerabilities, and malicious code injections.
The product promises real-time defense and response capabilities, including alerting, isolation, profiling, and misleading attackers.
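To make the detection-and-response concept concrete, here is a minimal, hypothetical sketch of how a monitoring layer might flag anomalous inference requests sent to a deployed model. It uses scikit-learn’s IsolationForest as a stand-in outlier detector; the class and data below are illustrative assumptions and do not reflect HiddenLayer’s actual product or techniques.

```python
# Illustrative sketch only -- not HiddenLayer's implementation.
# Flags inference-time inputs that look statistically unlike normal traffic,
# a common signal of adversarial probing or model extraction attempts.
import numpy as np
from sklearn.ensemble import IsolationForest

class InferenceMonitor:
    """Watches feature vectors sent to a deployed model and flags requests
    that deviate sharply from a baseline of known-good traffic."""

    def __init__(self, baseline_inputs: np.ndarray):
        # Fit an unsupervised outlier detector on logged, known-good inputs.
        self.detector = IsolationForest(contamination=0.01, random_state=0)
        self.detector.fit(baseline_inputs)

    def check(self, request_inputs: np.ndarray) -> np.ndarray:
        # IsolationForest.predict returns -1 for outliers, 1 for inliers.
        return self.detector.predict(request_inputs) == -1

# Usage: wrap the model's predict() call and alert on flagged requests.
baseline = np.random.default_rng(0).normal(size=(5000, 20))   # stand-in for logged production traffic
monitor = InferenceMonitor(baseline)
incoming = np.vstack([np.random.default_rng(1).normal(size=(9, 20)),
                      np.full((1, 20), 8.0)])                 # last row is an obvious outlier
flags = monitor.check(incoming)
print(f"{flags.sum()} of {len(flags)} requests flagged for review")
```

A production MLDR platform would layer response actions (alerting, isolating the session, or feeding misleading outputs) on top of detection signals like this one.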
HiddenLayer joins a cadre of new startups tackling the security of AI apps and deployments. Earlier this year, consulting giant KPMG spun out a venture-backed startup called Cranium that’s working on “an end-to-end AI security and trust platform” capable of mapping AI pipelines, validating security, and monitoring for adversarial threats.
The increased investment in AI security follows the heady launch of OpenAI’s ChatGPT app and a race among software providers big and small to embrace the powerful capabilities of LLMs (large language models) and generative AI.
Software giant Microsoft has debuted an AI-powered security analysis tool to automate incident response and threat hunting tasks, showcasing a security use case for generative AI chatbots.
OpenAI has also released a business edition of ChatGPT, promising enterprise-grade security and a commitment not to use client-specific prompts and data in the training of its models.
Related: HiddenLayer Launches With $6 Million to Protect AI Learning Models
Related: Security VCs Pivot to Safeguarding AI Training Models
Related: CalypsoAI Raises $23 Million for AI Security Tech
Related: KPMG Tackles AI Security With Cranium Spinout

Ryan Naraine is Editor-at-Large at SecurityWeek and host of the popular Security Conversations podcast series. He is a security community engagement expert who has built programs at major global brands, including Intel Corp., Bishop Fox and GReAT. Ryan is a founding-director of the Security Tinkerers non-profit, an advisor to early-stage entrepreneurs, and a regular speaker at security conferences around the world.