Over the past month, members of the Huntr bug bounty platform for artificial intelligence (AI) and machine learning (ML) have identified multiple severe vulnerabilities in popular solutions such as MLflow, ClearML, and Hugging Face.
Carrying a CVSS score of 10, the most severe of the identified flaws are four critical bugs in MLflow, a platform for streamlining ML development that offers a set of APIs compatible with existing ML applications and libraries.
One of the issues, CVE-2023-6831, is described as a path traversal bug rooted in the deletion of artifacts, an operation during which the path is normalized only after validation, allowing an attacker to bypass the validation checks and delete any file on the server.
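The ordering flaw behind this class of bug, validating the raw path and only normalizing it afterwards, can be sketched in a few lines of Python (the names and checks below are illustrative, not MLflow's actual code):

```python
import posixpath

ARTIFACT_ROOT = "/srv/mlflow/artifacts"

def validate(path):
    # Check runs on the raw, un-normalized string (illustrative only).
    if path.startswith("/") or path.startswith(".."):
        raise ValueError("path escapes artifact root")

def delete_artifact_path(path):
    validate(path)  # 1. validation first...
    # 2. ...normalization afterwards, so "../" sequences the prefix
    # check never saw can still climb out of the artifact root.
    return posixpath.normpath(posixpath.join(ARTIFACT_ROOT, path))

# Passes validate(), yet resolves to a file outside the artifact root:
escaped = delete_artifact_path("a/../../../../etc/passwd")
```

Reversing the order, normalizing first and then verifying that the result still sits under the artifact root, closes the gap.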
The second vulnerability, CVE-2024-0520, exists in the mlflow.data module, which can be abused with crafted datasets to generate a file path without sanitization, allowing an attacker to access information or overwrite files and potentially achieve remote code execution (RCE).
The third critical flaw, CVE-2023-6977, is described as a path validation bypass that could allow attackers to read sensitive files on the server, while the fourth, CVE-2023-6709, could lead to remote code execution when loading a malicious recipe configuration.
All four vulnerabilities were resolved in MLflow 2.9.2, which also patches a high-severity server-side request forgery (SSRF) bug that could allow an attacker to access internal HTTP(S) servers and potentially achieve RCE on the victim machine.
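A typical mitigation for this class of SSRF is to refuse request targets that point at loopback, private, or link-local addresses; a minimal Python sketch of such a check (an assumption about the defense class, not MLflow's actual patch) might look like this:

```python
import ipaddress
from urllib.parse import urlparse

def looks_internal(url: str) -> bool:
    # Flag URLs whose host is a literal loopback/private/link-local IP --
    # the kind of target an SSRF bug exposes (e.g. cloud metadata services).
    # A real defense must also resolve hostnames and re-check after redirects.
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not a literal IP; judging it would require DNS resolution
    return addr.is_private or addr.is_loopback or addr.is_link_local
```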
Another critical-severity flaw was identified in Hugging Face Transformers, which provides tools for building ML applications.
The issue, CVE-2023-7018, exists because no restrictions were implemented in a function used for the automatic loading of vocab.pkl files from a remote repository, which could allow attackers to load a malicious file and achieve RCE. Transformers version 4.36 resolves the vulnerability.
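The reason auto-loading pickle files is dangerous is that deserialization itself can run attacker-chosen code. A harmless Python illustration of the mechanism (not Transformers code):

```python
import pickle

class Malicious:
    def __reduce__(self):
        # Tells pickle to call len("attacker-controlled") at load time;
        # a real exploit would substitute something like os.system.
        return (len, ("attacker-controlled",))

payload = pickle.dumps(Malicious())  # what a hostile vocab.pkl could contain
result = pickle.loads(payload)       # the call executes during deserialization
# result is the call's return value -- proof that code ran on load
```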
The members of the Huntr community also identified a high-severity stored cross-site scripting (XSS) flaw in ClearML, an end-to-end platform for automating ML experiments in a unified environment.
Tracked as CVE-2023-6778, the issue was identified in the Markdown editor component of the Project Description and Reports sections, which allows for the injection of malicious XSS payloads if unfiltered data is passed to it, potentially leading to user account compromise.
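The underlying problem is user-supplied markup reaching the rendered page unescaped; the standard remedy for this class of stored XSS is output encoding, sketched here in Python (an illustration of the defense class, not ClearML's actual fix):

```python
import html

# A typical stored-XSS payload a user might save in a project description:
raw = '<img src=x onerror=alert(document.cookie)>'

# Escaping HTML metacharacters before rendering leaves the payload inert:
safe = html.escape(raw)
```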
Protect AI, which has not publicly disclosed details of a critical-severity command injection bug in Paddle (CVE-2024-0521), says that all vulnerabilities were reported to the project maintainers 45 days before the publication of its report.
Related: NIST: No Silver Bullet Against Adversarial Machine Learning Attacks
Related: Major Organizations Using ‘Hugging Face’ AI Tools Put at Risk by Leaked API Tokens
Related: Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools