Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

A new attack technique named Policy Puppetry can break the protections of major gen-AI models to produce harmful outputs.

AI jailbreak

A newly devised universal prompt injection technique can break the safety guardrails of all major generative AI models, AI security firm HiddenLayer says.

Called Policy Puppetry, the attack relies on prompts crafted so that the target LLM would interpret them as policies, leading to instruction override and safety alignment bypass.

Gen-AI models are trained to refuse user requests that would result in harmful output, such as those related to CBRN threats (chemical, biological, radiological, and nuclear), self-harm, or violence.

“These models are fine-tuned, via reinforcement learning, to never output or glorify such content under any circumstances, even when the user makes indirect requests in the form of hypothetical or fictional scenarios,” HiddenLayer notes.

Despite this training, however, previous research has demonstrated that AI jailbreaking is possible using methods such as Context Compliance Attack (CCA) or narrative engineering, and that threat actors are using various prompt engineering techniques to exploit AI for nefarious purposes.

According to HiddenLayer, its newly devised technique can be used to extract harmful content from any frontier AI model, as it relies on prompts crafted to appear as policy files and does not depend on any policy language.

“By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions. As a result, attackers can easily bypass system prompts and any safety alignments trained into the models,” HiddenLayer says.

If the LLM interprets the prompt as policy, safeguards are bypassed, and attackers can add extra sections to control the output format and override specific instructions, improving the Policy Puppetry attack.

Advertisement. Scroll to continue reading.

“Policy attacks are extremely effective when handcrafted to circumvent a specific system prompt and have been tested against a myriad of agentic systems and domain-specific chat applications,” HiddenLayer notes.

The cybersecurity firm tested the Policy Puppetry technique against popular gen-AI models from Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral, OpenAI, and Qwen, and successfully demonstrated its effectiveness against all, albeit with some minor adjustments in some cases.

The universal bypass for all LLMs shows that AI models cannot truly monitor themselves for dangerous content and that they require additional security tools. Multiple such bypasses lower the bar for creating attacks and mean that anyone can easily learn how to take control of a model.

“Being the first post-instruction hierarchy alignment bypass that works against almost all frontier AI models, this technique’s cross-model effectiveness demonstrates that there are still many fundamental flaws in the data and methods used to train and align LLMs, and additional security tools and detection methods are needed to keep LLMs safe,” HiddenLayer notes.

Related: Bot Traffic Surpasses Humans Online—Driven by AI and Criminal Innovation

Related: AI Hallucinations Create a New Software Supply Chain Threat

Related: AI Giving Rise of the ‘Zero-Knowledge’ Threat Actor

Related: How Agentic AI Will Be Weaponized for Social Engineering Attacks

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join this event as we dive into threat hunting tools and frameworks, and explore value of threat intelligence data in the defender’s security stack.

Register

Learn how integrating BAS and Automated Penetration Testing empowers security teams to quickly identify and validate threats, enabling prompt response and remediation.

Register

People on the Move

Shane Barney has been appointed CISO of password management and PAM solutions provider Keeper Security.

Edge Delta has appointed Joan Pepin as its Chief Information Security Officer.

Vats Srivatsan has been appointed interim CEO of WatchGuard after Prakash Panjwani stepped down.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.