Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

Pangea Launches AI Guard and Prompt Guard to Combat Gen-AI Security Risks

Guardrail specialist releases new products to aid the development and use of secure gen-AI apps.

AI security specialist Pangea has added to its existing suite of corporate gen-AI security products with AI Guard and Prompt Guard. The first prevents sensitive data leakage from gen-AI applications, while the second defends against prompt engineering, preventing jailbreaks.

According to the current OWASP Top 10 for LLM Applications 2025 (PDF), the number one risk for gen-AI applications comes from ‘prompt injection’, while the number two risk is ‘sensitive information disclosure’ (data leakage). With large organizations each developing close to 1,000 proprietary AI apps, Pangea’s new products are designed to prevent these apps succumbing to their major risks.

Pangea

Prompt engineering is a skill. It is the art of phrasing a gen-AI query in a manner that gets the most accurate and complete response. Malicious prompt engineering is a threat. It is the skill of phrasing a prompt in a way to obtain information, or elicit responses, that either should not be disclosed or could be used in a harmful manner.

Pangea’s new Prompt Guard analyzes human and system prompts to detect and block jailbreak attempts or limit violations. Detection is done through heuristics, classifiers, and other techniques with, in Pangea’s announcement, ‘99% efficacy’.

AI Guard is designed to prevent sensitive data leakage. It blocks malicious or undesirable content, such as profanity, hate speech, and violence. It examines prompt inputs, responses, and data ingestion from external sources to detect and block malicious content. It can prevent attempts to input false information including malware and malicious URLs, and can prevent the release of PII. 

In total, AI Guard employs more than a dozen detection technologies, and can understand over 50 types of confidential and personally identifiable information. It gathers threat intelligence from partners CrowdStrike, DomainTools, and ReversingLabs.

“Prompt engineering,” explains Pangea co-founder and CEO Oliver Friedrichs, “is basically social engineering on a large language model to make it do things that it has been told not to do, circumventing the controls of a typical gen-AI application.” Prompt Guard can identify all common and specialized prompt injection techniques; and if and when new techniques emerge, they will be added to the system.

AI Guard goes further. “It provides prompt injection, detection and prevention,” says Friedrichs. “It also provides malicious entity detection. So, for example, if somebody is inputting a malicious URL or domain name into a prompt, or the application is generating malicious output, it can redact, block or disarm that offending content. It has a dozen different detectors for common things like profanity, sexually explicit content, self-harm, and violence, as well as code and other language. You cannot really deliver enterprise quality AI capabilities without having these security guardrails,” he adds.

Pangea was founded in 2021 by Friedrichs (CEO) and Sourabh Satish (CTO). It has raised a total of $51 million in funding to date.

Advertisement. Scroll to continue reading.

Related: DeepSeek Compared to ChatGPT, Gemini in AI Jailbreak Test

Related: ChatGPT, DeepSeek Vulnerable to AI Jailbreaks

Related: Microsoft Bets $10,000 on Prompt Injection Protections of LLM Email Client

Related: Cyber Insights 2025: Artificial Intelligence

Written By

Kevin Townsend is a Senior Contributor at SecurityWeek. He has been writing about high tech issues since before the birth of Microsoft. For the last 15 years he has specialized in information security; and has had many thousands of articles published in dozens of different magazines – from The Times and the Financial Times to current and long-gone computer magazines.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join this event as we dive into threat hunting tools and frameworks, and explore value of threat intelligence data in the defender’s security stack.

Register

Learn how integrating BAS and Automated Penetration Testing empowers security teams to quickly identify and validate threats, enabling prompt response and remediation.

Register

People on the Move

SplxAI, a startup focused on securing AI agents, has announced new CISO Sandy Dunn.

Phillip Miller is joining tax preparation giant H&R Block as VP and CISO.

Linx Security has appointed Sarit Reiner Frumkes as Chief Technology Officer.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.