
Easily Exploitable Critical Vulnerabilities Found in Open Source AI/ML Tools

Protect AI warns of a dozen critical vulnerabilities in open source AI/ML tools reported via its bug bounty program.

A dozen critical vulnerabilities have been discovered in various open source AI/ML tools over the past few months, a new Protect AI report shows.

The AI security firm warns of a total of 32 security defects reported as part of its Huntr AI bug bounty program, including critical-severity issues that could lead to information disclosure, access to restricted resources, privilege escalation, and complete server takeover.

The most severe of these bugs is CVE-2024-22476 (CVSS score of 10), an improper input validation flaw in Intel Neural Compressor that could allow remote attackers to escalate privileges. The flaw was addressed in mid-May.

A critical-severity issue in ChuanhuChatGPT (CVE-2024-3234) that allowed attackers to steal sensitive files existed because the application used an outdated, vulnerable iteration of the Gradio open source Python package.

LoLLMs was found vulnerable to a path traversal protection bypass (CVE-2024-3429) leading to arbitrary file reading, which could be exploited to access sensitive data or cause a denial-of-service (DoS) condition.
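Path traversal bypasses of this kind typically defeat naive string checks on the requested path. The sketch below is not LoLLMs' actual code; it is a minimal, hypothetical illustration of the safer pattern: resolve the path and verify it stays inside the allowed base directory.

```python
import os

def safe_read(base_dir: str, user_path: str) -> bytes:
    """Read a file only if it resolves inside base_dir.

    Naive checks such as rejecting ".." substrings can be bypassed
    with absolute paths or alternate encodings; resolving the real
    path and testing containment is more robust.
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # Containment check: the resolved target must share base as its prefix path.
    if os.path.commonpath([base, target]) != base:
        raise PermissionError("path escapes base directory")
    with open(target, "rb") as f:
        return f.read()
```

A request like `../../etc/passwd` resolves outside the base directory and is rejected before any file is opened.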

Two critical-severity vulnerabilities in Qdrant (CVE-2024-3584 and CVE-2024-3829) could allow attackers to write and overwrite arbitrary files on the server, potentially enabling full takeover.

Lunary was found to allow users “to access projects via the API from an organization that they should not have authorization to access”. The issue is tracked as CVE-2024-4146.

Other critical-severity flaws researchers from the Huntr community discovered include: server-side request forgery (SSRF) in AnythingLLM, insecure direct object reference (IDOR) in Lunary, missing authorization and authentication mechanisms in Lunary, improper path sanitization in LoLLMs, path traversal in AnythingLLM, and log injection in the Nvidia Triton Inference Server for Linux.
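An SSRF flaw lets an attacker make the server fetch attacker-chosen URLs, often reaching loopback services or cloud metadata endpoints. As a hedged sketch (not drawn from AnythingLLM's code), one common first-line mitigation is to resolve the target hostname and reject private, loopback, and link-local addresses before fetching:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal_url(url: str) -> bool:
    """Return True if the URL resolves to a non-public address.

    A basic SSRF guard: resolve the hostname and reject private,
    loopback, link-local, and reserved ranges. On its own this does
    not cover DNS rebinding or HTTP redirects.
    """
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable -> treat as unsafe
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable -> treat as unsafe
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return True
    return False
```

For example, `http://169.254.169.254/latest/meta-data/` (the link-local cloud metadata address) would be flagged as internal and refused.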


A dozen other high-severity vulnerabilities were identified and reported in LoLLMs, Lunary, AnythingLLM, Deep Java Library (DJL), Scrapy, and Gradio.

“It is important to note that all vulnerabilities were reported to the maintainers a minimum of 45 days prior to publishing this report, and we continue to work with maintainers to ensure a timely fix prior to publication,” Protect AI notes.

Related: Critical PyTorch Vulnerability Can Lead to Sensitive AI Data Theft

Related: Eight Vulnerabilities Disclosed in the AI Development Supply Chain

Related: Critical Vulnerabilities Found in Open Source AI/ML Platforms

Related: Beware – Your Customer Chatbot is Almost Certainly Insecure: Report

Written By

Ionut Arghire is an international correspondent for SecurityWeek.
