Texas startup HiddenLayer has bagged a hefty $50 million in new venture capital funding as investors continue to pour money into technologies that protect the data and code flowing in and out of AI systems and LLM training sets.
HiddenLayer, which emerged from stealth in July 2022 with $6 million in funding, said the latest financing was led by M12, Microsoft’s Venture Fund, and Moore Strategic Ventures.
The Austin company also took on equity investments from Booz Allen Ventures, IBM Ventures, Capital One Ventures, and Ten Eleven Ventures.
HiddenLayer, winner of the ‘Most Innovative Startup’ prize at this year’s RSA Conference Innovation Sandbox, is pitching a future that includes MLDR (machine learning detection and response) platforms that monitor the inputs and outputs of machine learning algorithms for anomalous activity consistent with adversarial ML attack techniques.
HiddenLayer says it is building a Machine Learning Security (MLSec) Platform with tooling to protect ML models against adversarial attacks, vulnerabilities, and malicious code injections.
The product promises real-time defense and response capabilities, including alerting, isolating, profiling, and misleading attackers.
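To make the MLDR concept concrete, here is a minimal, hypothetical sketch (not HiddenLayer's actual product or API) of a wrapper that proxies calls to a model and raises an alert when it sees a burst of near-duplicate queries, a traffic pattern consistent with model-extraction or evasion probing. All names, parameters, and thresholds below are illustrative assumptions.

```python
import math
from collections import deque


class MonitoredModel:
    """Hypothetical MLDR-style wrapper: forwards predictions to an
    underlying model while watching the input stream for bursts of
    near-duplicate queries (a crude adversarial-probing heuristic)."""

    def __init__(self, model, window=50, similarity_threshold=0.99,
                 alert_ratio=0.5):
        self.model = model                    # any callable: features -> output
        self.window = deque(maxlen=window)    # sliding window of recent queries
        self.similarity_threshold = similarity_threshold
        self.alert_ratio = alert_ratio        # fraction of window that triggers
        self.alerts = []                      # recorded detections

    @staticmethod
    def _cosine(a, b):
        # Cosine similarity between two feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def predict(self, features):
        # Count recent queries that are nearly identical to this one.
        near_dupes = sum(
            1 for past in self.window
            if self._cosine(features, past) >= self.similarity_threshold
        )
        self.window.append(list(features))
        if near_dupes >= self.alert_ratio * self.window.maxlen:
            # A real MLDR platform might isolate the session or return
            # misleading outputs here; we simply record the event.
            self.alerts.append(("near-duplicate query burst", tuple(features)))
        return self.model(features)
```

A production system would, as the article describes, go well beyond this toy heuristic: profiling callers, isolating suspicious sessions, and even feeding misleading responses back to an attacker rather than just logging an alert.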
HiddenLayer joins a cadre of new startups tackling the security of AI apps and deployments. Earlier this year, consulting giant KPMG spun out a venture-backed startup called Cranium that’s working on “an end-to-end AI security and trust platform” capable of mapping AI pipelines, validating security, and monitoring for adversarial threats.
The increased investment in AI security follows the heady launch of OpenAI’s ChatGPT app and a race among software providers big and small to embrace the powerful capabilities of LLMs (large language models) and generative AI.
Software giant Microsoft has debuted an AI-powered security analysis tool to automate incident response and threat hunting tasks, showcasing a security use case for generative AI chatbots.
OpenAI has also released a business edition of ChatGPT, promising enterprise-grade security and a commitment not to use client-specific prompts and data in the training of its models.