[Artificial Intelligence] New Jailbreak Technique Uses Fictional World to Manipulate AI. Cato Networks discovers a new LLM jailbreak technique that relies on creating a fictional world to bypass a model's security controls. (Ionut Arghire, 4 days ago)
[Artificial Intelligence] New CCA Jailbreak Method Works Against Most AI Models. Two Microsoft researchers have devised a new jailbreak method that bypasses the safety mechanisms of most AI systems. (Ionut Arghire, March 14, 2025)
[Artificial Intelligence] DeepSeek Compared to ChatGPT, Gemini in AI Jailbreak Test. Cisco has compared DeepSeek's susceptibility to jailbreaks with that of other popular AI models, including ones from Meta, OpenAI, and Google. (Eduard Kovacs, February 4, 2025)
[Artificial Intelligence] DeepSeek Security: System Prompt Jailbreak, Details Emerge on Cyberattacks. Researchers found a jailbreak method that exposed DeepSeek's system prompt, while others have analyzed the DDoS attacks aimed at the new gen-AI service. (Eduard Kovacs, February 3, 2025)
[Artificial Intelligence] ChatGPT, DeepSeek Vulnerable to AI Jailbreaks. Different research teams have demonstrated jailbreaks against ChatGPT, DeepSeek, and Alibaba's Qwen AI models. (Eduard Kovacs, January 31, 2025)
[Artificial Intelligence] DeepSeek Blames Disruption on Cyberattack as Vulnerabilities Emerge. China's DeepSeek blamed sign-up disruptions on a cyberattack as researchers started finding vulnerabilities in the R1 AI model. (Eduard Kovacs, January 28, 2025)
[Artificial Intelligence] ChatGPT Jailbreak: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis. New jailbreak technique tricked ChatGPT into generating Python exploits and a malicious SQL injection tool. (Eduard Kovacs, October 29, 2024)
[Artificial Intelligence] 'Deceptive Delight' Jailbreak Tricks Gen-AI by Embedding Unsafe Topics in Benign Narratives. Deceptive Delight is a new AI jailbreak that has been successfully tested against eight models with an average success rate of 65%. (Eduard Kovacs, October 24, 2024)
[Artificial Intelligence] Microsoft Details 'Skeleton Key' AI Jailbreak Technique. Microsoft has tricked several gen-AI models into providing forbidden information using a jailbreak technique named Skeleton Key. (Eduard Kovacs, June 28, 2024)
[Cybercrime] In Other News: China Blames NSA for Hack, AI Jailbreaks, Netography Spin-Off. Noteworthy stories that might have slipped under the radar: China blames the NSA for a cyberattack, AI jailbreaks, and a Netography spin-off. (SecurityWeek News, September 15, 2023)