Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

New CCA Jailbreak Method Works Against Most AI Models

Two Microsoft researchers have devised a new jailbreak method that bypasses the safety mechanisms of most AI systems.

AI jailbreak

Two Microsoft researchers have devised a new, optimization-free jailbreak method that can effectively bypass the safety mechanisms of most AI systems.

Called Context Compliance Attack (CCA), the method exploits a fundamental architectural vulnerability present within many deployed gen-AI solutions, subverting safeguards and enabling otherwise suppressed functionality.

“By subtly manipulating conversation history, CCA convinces the model to comply with a fabricated dialogue context, thereby triggering restricted behavior,” Microsoft’s Mark Russinovich and Ahmed Salem explain in a research paper (PDF).

“Our evaluation across a diverse set of open-source and proprietary models demonstrates that this simple attack can circumvent state-of-the-art safety protocols,” the researchers say.

While other jailbreak methods targeting AI focus on crafted prompt sequences or prompt optimizations, CCA relies on inserting a manipulated conversation history in a dialogue on a sensitive topic and responding affirmatively to a fabricated question.

“Convinced by the manipulated dialogue, the AI system generates output that adheres to the perceived conversational context, thereby breaching its safety constraints,” the researchers say.

Russinovich and Salem tested CCA against multiple leading AI systems, including Claude, DeepSeek, Gemini, various GPT models, Llama, Phi, and Yi, demonstrating that nearly all models are vulnerable, except for Llama-2.

For their evaluation, the researchers used 11 sensitive tasks corresponding to as many categories of potentially harmful content, and executed CCA in five independent trials. Most tasks, they say, were completed on the first trial.

Advertisement. Scroll to continue reading.

The issue is that many chatbots depend on the clients supplying “the entire conversation history with each request” and trust the integrity of the context being provided. Open source models, where the user has complete control over input history, are most vulnerable.

“It’s important to note, however, that systems which maintain conversation state on their servers—such as Copilot and ChatGPT —are not susceptible to this attack,” the researchers note.

The researchers propose server-side history maintenance, which ensures consistency and integrity, and implementation of digital signatures for conversations history as mitigations against CCA and similar attacks relying on the injection of malicious context.

These mitigations, they note, are primarily applicable to black-box models, while white-box models, need a “more involved defense strategy”, such as the integration of cryptographic signatures into the AI system’s input processing, to ensure that the model only accepts authenticated and unaltered context.

Related: DeepSeek Compared to ChatGPT, Gemini in AI Jailbreak Test

Related: ChatGPT Jailbreak: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis

Related: Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

Related: In Other News: Fake Lockdown Mode, New Linux RAT, AI Jailbreak, Country’s DNS Hijacked

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

Join this event as we dive into threat hunting tools and frameworks, and explore value of threat intelligence data in the defender’s security stack.

Register

Learn how integrating BAS and Automated Penetration Testing empowers security teams to quickly identify and validate threats, enabling prompt response and remediation.

Register

People on the Move

Security awareness training firm KnowBe4 has named Bryan Palma as president and CEO effective May 5.

Threat intelligence firm Team Cymru has appointed Joe Sander as its Chief Executive Officer.

Madhu Gottumukkala has been named Deputy Director of the cybersecurity agency CISA.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.