SecurityWeek

Application Security

Critical Flaw in AI Python Package Can Lead to System and Data Compromise

A critical vulnerability tracked as CVE-2024-34359 and dubbed Llama Drama can allow hackers to target AI product developers.

A critical vulnerability discovered recently in a Python package used by AI application developers can allow arbitrary code execution, putting systems and data at risk.

The issue, discovered by researcher Patrick Peng (aka retr0reg), is tracked as CVE-2024-34359 and has been dubbed Llama Drama. Cybersecurity firm Checkmarx on Thursday published a blog post describing the vulnerability and its impact.

CVE-2024-34359 involves Jinja2, a Python template rendering engine mainly used for generating HTML, and the llama_cpp_python package, which is used to integrate AI models with Python.

Llama_cpp_python uses Jinja2 to process model metadata, but it did not apply the safeguards Jinja2 provides, enabling template injection attacks.

“The core issue arises from processing template data without proper security measures such as sandboxing, which Jinja2 supports but was not implemented in this instance,” Checkmarx explained.
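To illustrate the class of issue Checkmarx describes, here is a minimal sketch (not the actual llama_cpp_python code) of how Jinja2's sandboxing blocks this kind of attack. The payload string is a well-known Jinja2 template-injection example of the sort that could be embedded in model metadata; rendering it in a `SandboxedEnvironment` raises a `SecurityError` instead of reaching `os.popen`:

```python
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

# A hypothetical malicious "template" hidden in model metadata: it tries
# to walk from a built-in Jinja2 global to Python internals and run a
# shell command, rather than rendering text.
malicious_template = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

# A SandboxedEnvironment refuses access to unsafe attributes (anything
# starting with an underscore), so the attribute chain never resolves.
env = SandboxedEnvironment()
try:
    env.from_string(malicious_template).render()
    result = "rendered"
except SecurityError:
    result = "blocked"

print(result)
```

Rendering the same string with a plain, unsandboxed `jinja2.Environment` would instead execute the shell command, which is essentially the failure mode behind CVE-2024-34359.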

According to the security firm, the vulnerability can be exploited for arbitrary code execution on systems that use the affected Python package. The company found that more than 6,000 AI models hosted on the Hugging Face AI community that use llama_cpp_python and Jinja2 are impacted.

“Imagine downloading a seemingly harmless AI model from a trusted platform like Hugging Face, only to discover that it has opened a backdoor for attackers to control your system,” Checkmarx said.

The vulnerability has been patched with the release of llama_cpp_python 0.2.72.
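As a quick, hedged way to check the fix locally (assuming the package is installed under its PyPI name, `llama-cpp-python`), one can compare the installed version against 0.2.72:

```python
from importlib.metadata import version, PackageNotFoundError

MIN_FIXED = (0, 2, 72)  # first release reported to contain the patch

try:
    # Parse the leading numeric components of the installed version.
    parts = version("llama-cpp-python").split(".")[:3]
    installed = tuple(int(p) for p in parts)
    status = "patched" if installed >= MIN_FIXED else "vulnerable"
except (PackageNotFoundError, ValueError):
    # Not installed, or a version string we can't parse.
    status = "unknown"

print(status)
```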

Related: Ray AI Framework Vulnerability Exploited to Hack Hundreds of Clusters

Related: Criminal Use of AI Growing, But Lags Behind Defenders

Related: Five Eyes Agencies Release New AI Security Guidance

Written By

Eduard Kovacs (@EduardKovacs) is senior managing editor at SecurityWeek. He worked as a high school IT teacher before starting a career in journalism in 2011. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
