SecurityWeek

Application Security

Critical Flaw in AI Python Package Can Lead to System and Data Compromise

A critical vulnerability tracked as CVE-2024-34359 and dubbed Llama Drama can allow hackers to target AI product developers.


A critical vulnerability discovered recently in a Python package used by AI application developers can allow arbitrary code execution, putting systems and data at risk.

The issue, discovered by researcher Patrick Peng (aka retr0reg), is tracked as CVE-2024-34359 and it has been dubbed Llama Drama. Cybersecurity firm Checkmarx on Thursday published a blog post describing the vulnerability and its impact.

CVE-2024-34359 involves Jinja2, a Python template rendering engine mainly used for generating HTML, and the llama_cpp_python package, which is used to integrate AI models with Python.

Llama_cpp_python uses Jinja2 to process model metadata, but did not enable certain safeguards, leaving it open to template injection attacks.

“The core issue arises from processing template data without proper security measures such as sandboxing, which Jinja2 supports but was not implemented in this instance,” Checkmarx explained.
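The sandboxing safeguard Checkmarx refers to is Jinja2's own `SandboxedEnvironment`, which restricts what template expressions can reach at render time. A minimal sketch of the difference, using a hypothetical malicious template of the kind that could be embedded in model metadata (the payload shown is a well-known Jinja2 server-side template injection pattern, not the actual exploit from this research):

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# Hypothetical malicious template: it climbs from a built-in template
# global up to Python's os module to run a shell command.
payload = (
    "{{ cycler.__init__.__globals__.os.popen('id').read() }}"
)

# Vulnerable pattern: a plain Environment resolves the attribute chain
# and the shell command actually executes during render().
# Environment().from_string(payload).render()  # would run `id`

# Safe pattern: SandboxedEnvironment rejects access to unsafe
# attributes (such as dunder attributes) and raises SecurityError
# instead of executing anything.
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError as exc:
    print("blocked by sandbox:", exc)
```

The fix in llama_cpp_python amounts to rendering untrusted metadata through an environment like the sandboxed one above rather than a default `Environment`.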

According to the security firm, the vulnerability can be exploited for arbitrary code execution on systems that use the affected Python package. The company found that more than 6,000 AI models on the Hugging Face AI community that use llama_cpp_python and Jinja2 are impacted. 

“Imagine downloading a seemingly harmless AI model from a trusted platform like Hugging Face, only to discover that it has opened a backdoor for attackers to control your system,” Checkmarx said.

The vulnerability has been patched with the release of llama_cpp_python 0.2.72.


Related: Ray AI Framework Vulnerability Exploited to Hack Hundreds of Clusters

Related: Criminal Use of AI Growing, But Lags Behind Defenders

Related: Five Eyes Agencies Release New AI Security Guidance

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
