
Protect AI Raises $35 Million to Protect Machine Learning and AI Assets

Machine Learning and Artificial Intelligence security firm Protect AI raised $35 million in Series A funding led by Evolution Equity Partners.

Machine Learning and Artificial Intelligence security firm Protect AI has raised $35 million in a Series A funding round led by Evolution Equity Partners, with participation from Salesforce Ventures and existing investors. This brings the total raised to $48.5 million.

Protect AI, based in Seattle, WA, is a startup founded in 2022 by Ian Swanson (CEO, formerly AWS worldwide leader for artificial intelligence and machine learning) and Badar Ahmed (CTO, formerly director of engineering at Oracle). Richard Seewald, founder and managing partner at Evolution Equity Partners, joins the Protect AI board of directors.

The growing use of machine learning and artificial intelligence has created a new layer of risk, both to the ML systems themselves and from them. ML and AI systems are exposed to a new range of adversarial attacks, such as poisoning the training data or manipulating the algorithms used. Threats emanating from a compromised ML system include bad decisions, reputational damage, and compliance failures leading to regulatory fines.


“Despite the sheer magnitude of the AI/ML security challenge, none of the industry’s largest cybersecurity vendors currently offer a solution to this problem,” warns Seewald.

Protect AI offers a platform called AI Radar, providing visibility into the assets and inventory employed in ML/AI systems. The growing supply chain of foundation models and external third-party training data sets creates a blind spot for ML/AI risks such as regulatory compliance failures, PII leakage, data manipulation, model poisoning, weak infrastructure protection, and brand damage.

“Protect AI provides new and innovative solutions, like AI Radar, that enable organizations to build, deploy, and manage safer AI by monitoring, detecting and remediating security vulnerabilities and threats in real-time,” comments Swanson.

The three key pillars of the product are real-time visibility, an immutable bill of materials (dubbed an MLBOM), and pipeline and model security.

Visibility is provided by real-time insights into the ML attack surface. The MLBOM is created automatically and is dynamic: it tracks all components and dependencies in the ML system and provides visibility and auditability into the supply chain.
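To make the idea concrete, the sketch below shows the kind of record an ML bill of materials might capture. The field names and structure are illustrative assumptions only, not Protect AI's actual MLBOM schema.

```python
# Hypothetical MLBOM entry -- illustrative fields, not Protect AI's schema.
from dataclasses import dataclass, field

@dataclass
class MLBOMEntry:
    model_name: str
    model_version: str
    framework: str                                 # e.g. "pytorch 2.1"
    base_model: str | None = None                  # upstream foundation model, if any
    training_datasets: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)  # pinned library versions
    contains_pii: bool = False                     # flag for compliance review

# Example record for a hypothetical in-house model.
entry = MLBOMEntry(
    model_name="fraud-detector",
    model_version="1.4.0",
    framework="pytorch 2.1",
    base_model="bert-base-uncased",
    training_datasets=["s3://internal/transactions-2023"],
    dependencies=["torch==2.1.0", "transformers==4.35.0"],
)
print(entry)
```

Tracking dependencies and data sources in this way is what makes the supply chain auditable: if a base model or dataset is later found to be compromised, every downstream model that consumed it can be located.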


Security is enabled through AI Radar’s continuous use of Protect AI’s scanning tools for LLMs and other ML inference workloads. It automatically detects security policy violations, model vulnerabilities, and malicious code injection attacks — and integrates with third-party AppSec and CI/CD orchestration tools.
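As a rough illustration of what scanning a model artifact for malicious code injection can involve (a generic sketch, not Protect AI's tooling), the snippet below flags opcodes in a pickled model file that can trigger arbitrary code execution when the model is loaded.

```python
# Generic sketch: flag pickle opcodes that can import or call objects,
# a common vector for code injection in serialized ML models.
# This is not Protect AI's scanner; it only illustrates the idea.
import pickletools

SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return findings for opcodes that can execute code at load time."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    for finding in scan_pickle("model.pkl"):   # hypothetical model artifact path
        print(finding)
```

A production scanner would combine checks like this with policy rules and vulnerability data; per the article, AI Radar feeds its results into third-party AppSec and CI/CD orchestration tools.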

Related: Google Creates Red Team to Test Attacks Against AI Systems

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Cyber Insights 2023 | Artificial Intelligence

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.

