
Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools

Bug hunters uncover over a dozen exploitable vulnerabilities in tools used to build chatbots and other types of AI/ML models.

Since August 2023, members of the Huntr bug bounty platform for artificial intelligence (AI) and machine learning (ML) have uncovered over a dozen vulnerabilities exposing AI/ML models to system takeover and sensitive information theft.

Identified in tools with hundreds of thousands or millions of downloads per month, such as H2O-3, MLflow, and Ray, these issues potentially impact the entire AI/ML supply chain, says Protect AI, which manages Huntr.

A low-code machine learning platform, H2O-3 supports the creation and deployment of ML models via a web interface, simply by importing data. It also allows users to upload Java objects remotely via API calls.

By default, the installation is exposed to the network and does not require authentication, so attackers can supply malicious Java objects that H2O-3 executes upon deserialization, giving them access to the underlying operating system.

Tracked as CVE-2023-6016 (CVSS score of 10), the remote code execution (RCE) vulnerability could allow attackers to completely take over the server and steal models, credentials, and other data.
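The danger of deserializing attacker-supplied objects is easy to demonstrate. H2O-3's flaw involves Java object serialization, but the same vulnerability class exists in Python's pickle module, and a minimal sketch (illustrative only, not H2O-3's actual API) shows why loading untrusted serialized data is equivalent to running attacker code:

```python
import pickle

# Illustrative only: an object whose deserialization triggers code chosen
# by whoever produced the bytes. A real attacker would invoke an OS
# command rather than a harmless eval().

class Malicious:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct the object; here it
        # instructs the loader to call eval() with our payload string.
        return (eval, ("len('pwned')",))

# The "attacker" serializes the object and sends the bytes to the service.
payload = pickle.dumps(Malicious())

# A service that blindly deserializes the bytes runs the payload the
# moment the object is loaded; no further interaction is required.
result = pickle.loads(payload)
print(result)  # 5: the attacker-chosen expression already executed
```

This is why an unauthenticated, network-exposed deserialization endpoint amounts to remote code execution by design.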

The bug hunters uncovered two other critical issues in the low-code service, namely a local file include flaw (CVE-2023-6038) and a cross-site scripting (XSS) bug (CVE-2023-6013), along with a high-severity S3 bucket takeover vulnerability (CVE-2023-6017).

MLflow, an open-source platform for the management of the end-to-end ML lifecycle, also lacks authentication by default, and the researchers identified four critical vulnerabilities in it.

The most severe of these are arbitrary file write and path traversal bugs (CVE-2023-6018 and CVE-2023-6015, both with a CVSS score of 10) that can allow an unauthenticated attacker to overwrite arbitrary files on the operating system and achieve RCE.
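The underlying pattern in path traversal file-write bugs is a filename joined onto a base directory without validating where the result lands. A minimal sketch (an assumed illustration of the bug class, not MLflow's actual code; `resolve_upload_path` is a hypothetical helper):

```python
import os
import tempfile

# Hypothetical upload directory standing in for a server's storage root.
base = tempfile.mkdtemp()

def resolve_upload_path(filename: str) -> str:
    # Resolve the attacker-supplied name against the upload directory.
    path = os.path.normpath(os.path.join(base, filename))
    # Fix: reject any resolved path that escapes the base directory.
    if not path.startswith(base + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path

# A benign name stays inside the directory...
ok = resolve_upload_path("model.pkl")

# ...but without the check, a traversal sequence resolves to a path
# outside `base`, letting an attacker overwrite arbitrary system files.
evil = os.path.normpath(os.path.join(base, "../../etc/passwd"))
print(evil.startswith(base + os.sep))  # False: the naive join escaped
```

The fix is exactly the kind of post-resolution containment check shown above; validating the raw filename string alone is easy to bypass.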


The tool was also found vulnerable to critical-severity arbitrary file inclusion (CVE-2023-1177) and authentication bypass (CVE-2023-6014) vulnerabilities.

The Ray project, an open-source framework for the distributed training of ML models, also lacks default authentication.

A critical code injection flaw in Ray’s cpu_profile format parameter (CVE-2023-6019, CVSS score of 10) could lead to full system compromise: the parameter was not validated before being inserted into a system command executed in a shell.
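The pattern behind this class of bug, and its standard mitigations, can be sketched as follows (an assumed illustration, not Ray's actual code; `profile_unsafe` and `profile_safe` are hypothetical names):

```python
import subprocess

def profile_unsafe(fmt: str) -> str:
    # BUG: the request parameter flows straight into a shell command, so a
    # value like "flamegraph; id" appends and runs a second command.
    cmd = f"echo profiling --format={fmt}"
    return subprocess.run(cmd, shell=True,
                          capture_output=True, text=True).stdout

def profile_safe(fmt: str) -> str:
    # Fix 1: allow-list the expected values before using them at all.
    if fmt not in {"flamegraph", "chrome"}:
        raise ValueError(f"unsupported format: {fmt!r}")
    # Fix 2: pass argv as a list so no shell ever parses the value.
    cmd = ["echo", "profiling", f"--format={fmt}"]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# The injected suffix executes as its own shell command:
print(profile_unsafe("flamegraph; echo INJECTED"))
# The hardened version rejects it outright:
print(profile_safe("flamegraph").strip())
```

Either fix alone blocks the injection; applying both is the conventional defense in depth for parameters that reach process execution.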

The bug hunters also identified two critical local file include issues that could allow remote attackers to read any files on the Ray system. The security defects are tracked as CVE-2023-6020 and CVE-2023-6021.

All vulnerabilities were reported to vendors at least 45 days prior to public disclosure. Users are advised to update their installations to the latest non-vulnerable versions and restrict access to the applications where patches are not available. 

Related: The Good, the Bad and the Ugly of Generative AI

Related: OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

Related: Major ChatGPT Outage Caused by DDoS Attack

Written By

Ionut Arghire is an international correspondent for SecurityWeek.


