
Critical Vulnerabilities Found in Open Source AI/ML Platforms

Security researchers flag multiple severe vulnerabilities in open source AI/ML solutions MLflow, ClearML, Hugging Face.

Over the past month, members of the Huntr bug bounty platform for artificial intelligence (AI) and machine learning (ML) have identified multiple severe vulnerabilities in popular solutions such as MLflow, ClearML, and Hugging Face.

The most severe of the identified issues are four critical flaws in MLflow, a platform for streamlining ML development that offers a set of APIs supporting existing ML applications and libraries. Each carries a CVSS score of 10.

One of the issues, CVE-2023-6831, is described as a path traversal bug in the artifact deletion operation: because the supplied path is normalized before use, an attacker can bypass validation checks and delete any file on the server.
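The underlying vulnerability class is well understood: when validation runs on the raw string and normalization happens separately, `../` sequences can escape the intended root. A minimal, hypothetical Python sketch of the pattern (the function names and artifact root are illustrative, not MLflow's actual code):

```python
import os.path

ARTIFACT_ROOT = "/srv/artifacts"

def delete_artifact_vulnerable(user_path):
    # Validation runs on the raw string, so "a/../../../etc/passwd"
    # passes the check; the later normalization escapes the root.
    if user_path.startswith(".."):
        raise ValueError("rejected")
    target = os.path.normpath(os.path.join(ARTIFACT_ROOT, user_path))
    return target  # would then be handed to os.remove()

def delete_artifact_safer(user_path):
    # Normalize first, then confirm the result is still under the root.
    target = os.path.normpath(os.path.join(ARTIFACT_ROOT, user_path))
    if not target.startswith(ARTIFACT_ROOT + os.sep):
        raise ValueError("path escapes artifact root")
    return target
```

With the vulnerable ordering, `a/../../../etc/passwd` resolves to `/etc/passwd`; the safer variant rejects it because the normalized result leaves the artifact root.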

The second vulnerability, CVE-2024-0520, exists in the mlflow.data module, which can be abused with crafted datasets to generate a file path without sanitization, allowing an attacker to access information or overwrite files and potentially achieve remote code execution (RCE).

The third critical flaw, CVE-2023-6977, is described as a path validation bypass that could allow attackers to read sensitive files on the server, while the fourth, CVE-2023-6709, could lead to remote code execution when loading a malicious recipe configuration.

All four vulnerabilities were resolved in MLflow 2.9.2, which also patches a high-severity server-side request forgery (SSRF) bug that could allow an attacker to access internal HTTP(S) servers and potentially achieve RCE on the victim machine.
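SSRF bugs of this kind coax a server into fetching attacker-chosen URLs that resolve to internal services. A common first-line mitigation is to resolve the target host and reject private ranges before fetching, sketched here in illustrative Python (not MLflow's code, and not a complete defense on its own, since DNS rebinding and redirects need separate handling):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal(url):
    """Resolve the URL's host and flag private/loopback/link-local targets."""
    host = urlparse(url).hostname
    if not host:
        return True  # fail closed on unparsable input
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # fail closed on resolution failure
    for _, _, _, _, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return True
    return False
```

A request to `http://127.0.0.1:8080/admin` or a RFC 1918 address would be refused, while a public address passes.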

Another critical-severity flaw was identified in Hugging Face Transformers, which provides tools for building ML applications.

The issue, CVE-2023-7018, exists because no restrictions were implemented in a function used for the automatic loading of vocab.pkl files from a remote repository, which could allow attackers to load a malicious file and achieve RCE. Transformers version 4.36 resolves the vulnerability.
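Pickle files are executable by design: deserialization can invoke arbitrary callables, which is why auto-loading a remote `vocab.pkl` without restrictions is dangerous. An illustrative Python sketch (hypothetical names, not the Transformers code) of the problem and one common mitigation, an allow-list unpickler:

```python
import io
import pickle

class Exploit:
    # pickle invokes __reduce__ at load time, so deserializing an
    # attacker-supplied .pkl runs whatever callable it names.
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

malicious = pickle.dumps(Exploit())
pickle.loads(malicious)  # the print call fires here, on load

class RestrictedUnpickler(pickle.Unpickler):
    # Allow-list the globals a pickle may reference; anything else,
    # including builtins.print above, is refused.
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)
```

The restricted loader raises on the malicious payload while still accepting pickles built only from allow-listed types.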


The members of the Huntr community also identified a high-severity stored cross-site scripting (XSS) flaw in ClearML, an end-to-end platform for automating ML experiments in a unified environment.

Tracked as CVE-2023-6778, the issue was identified in the Markdown editor component of the Project Description and Reports sections, which allows for the injection of malicious XSS payloads if unfiltered data is passed to it, potentially leading to user account compromise.
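Stored XSS of this kind arises when user-supplied markup is persisted and later rendered verbatim to other users. An illustrative Python sketch (hypothetical helpers, not ClearML's code) of the difference output escaping makes:

```python
import html

PAYLOAD = "<img src=x onerror=alert(document.cookie)>"

def render_unsafe(user_markdown):
    # Echoing user-controlled markup verbatim stores the payload, which
    # then executes in the browser of every user who views the page.
    return f"<div class='description'>{user_markdown}</div>"

def render_escaped(user_markdown):
    # Escaping on output neutralizes the markup; production code would
    # typically sanitize with an allow-list (e.g. bleach or DOMPurify)
    # so legitimate Markdown formatting still renders.
    return f"<div class='description'>{html.escape(user_markdown)}</div>"
```

The unsafe renderer emits the `<img onerror=...>` tag intact; the escaped variant turns it into inert text.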

Protect AI, which has withheld the details of a critical-severity command injection bug in Paddle (CVE-2024-0521), says all vulnerabilities were reported to the project maintainers 45 days before the publication of its report.

Related: NIST: No Silver Bullet Against Adversarial Machine Learning Attacks

Related: Major Organizations Using ‘Hugging Face’ AI Tools Put at Risk by Leaked API Tokens

Related: Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools

Written By

Ionut Arghire is an international correspondent for SecurityWeek.
