
Ray AI Framework Vulnerability Exploited to Hack Hundreds of Clusters

Disputed Ray AI framework vulnerability exploited to steal information and deploy cryptominers on hundreds of clusters.


Attackers have been exploiting a missing authentication vulnerability in the Ray AI framework to compromise hundreds of clusters, application security firm Oligo reports.

The issue, tracked as CVE-2023-48022 and disclosed in November 2023, exists because, in its default configuration, the open source compute framework for AI does not enforce authentication and does not support any type of authorization model.

Attackers can exploit the flaw via Ray’s job submission API by submitting arbitrary system commands, allowing them to execute code on all nodes in the cluster and retrieve credentials.
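To illustrate why this is so easy to abuse, a minimal sketch follows. Ray’s job submission REST API accepts an `entrypoint` shell command in a JSON body posted to `/api/jobs/` on the dashboard port (8265 by default); with no authentication enforced, anyone who can reach the port can submit a job. This is not Oligo’s exploit code, and the host below is a placeholder:

```python
import json

# Placeholder address, not a real target. Ray's dashboard/job API
# listens on port 8265 by default.
RAY_DASHBOARD = "http://ray-cluster.example:8265"

def build_job_request(command: str) -> tuple[str, bytes]:
    """Return the endpoint URL and JSON body for a Ray job submission.

    Because the default configuration has no authentication, sending
    this request is all an attacker needs to run `command` on the cluster.
    """
    url = f"{RAY_DASHBOARD}/api/jobs/"
    body = json.dumps({"entrypoint": command}).encode()
    return url, body

# For example, submitting "id" as the entrypoint would reveal which
# user the cluster runs as (often root, per Oligo's findings).
url, body = build_job_request("id")
```

The payload is deliberately only constructed, not sent; the point is that a single unauthenticated HTTP request is the entire attack surface.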

According to Anyscale, which maintains the Ray framework, the lack of authentication is intentional, as users are responsible for enforcing security and isolation outside the cluster.

“The remaining CVE (CVE-2023-48022) – that Ray does not have authentication built in – is a long-standing design decision based on how Ray’s security boundaries are drawn and consistent with Ray deployment best practices,” Anyscale said in November.

The maintainers say they plan to offer authentication in a future version of Ray, but for now the vulnerability remains ‘disputed’ and unpatched. According to the NIST NVD advisory, CVE-2023-48022 has a CVSS score of 9.8 (critical).

While Anyscale calls for shared responsibility when securing Ray clusters, cybercriminals have taken notice of the framework’s lack of authentication enforcement and have been exploiting it since at least September 2023, two months before the issue was publicly disclosed.



Now, Oligo says it has observed hundreds of Ray clusters being hacked via this bug, with the attackers stealing a trove of information, including AI production workload data, database credentials, password hashes, SSH keys, and OpenAI, HuggingFace, and Stripe tokens.

Furthermore, many of the clusters ran with root privileges, providing access to cloud services and potentially leaking sensitive information, including customer data. The compromised clusters also exposed Kubernetes API access and Slack tokens.

Oligo, which has named the attack campaign ShadowRay, discovered that most of the compromised clusters were infected with cryptominers, including XMRig, NBMiner, and Java-based Zephyr miners, and reverse shells for persistent access.

“The first crypto-miner we noticed was installed on Feb. 21, 2024. We discovered that the IP has been accepting connections to the target port since Sept. 5, 2023, indicating the breach might have started before the vulnerability was disclosed. Due to the scale of the attacks and the chain of events, we believe the threat actors are probably part of a well-established hacking group,” Oligo says.

The security firm also notes that the attackers managed to evade detection by leveraging the open source Interactsh service for connection requests. Because the exploited vulnerability is disputed, many organizations are not even aware that they are at risk.

Update: In light of the malicious activity uncovered by Oligo, Anyscale announced the release of a client-side script and server-side code to help users identify Ray deployments with potentially exposed ports. However, the tooling is not guaranteed to identify all exposed ports and “does not attempt to validate what is running on the identified open port”.
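In the same spirit as Anyscale’s tooling (this is a simplified sketch, not their script), checking whether a Ray dashboard port is reachable can be done with a plain TCP connect. As Anyscale cautions about its own tool, an open port alone does not prove that Ray is what is listening:

```python
import socket

def port_is_open(host: str, port: int = 8265, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    8265 is Ray's default dashboard/job-submission port. A successful
    connection only means something accepted the TCP handshake; it does
    not validate what is running on the port (the same caveat Anyscale
    gives for its detection tooling).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A deployment where this returns `True` from an untrusted network should be treated as exposed and investigated, regardless of what service answers.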

Related: Shadow AI – Should I be Worried?

Related: Cloudflare Introduces AI Security Solutions

Related: Microsoft Releases Red Teaming Tool for Generative AI


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
