Artificial Intelligence

New Jailbreak Technique Uses Fictional World to Manipulate AI

Cato Networks discovers a new LLM jailbreak technique that relies on creating a fictional world to bypass a model's security controls.

By Ionut Arghire | March 21, 2025