
SecurityWeek

Artificial Intelligence

AI Data Exposed to ‘LeftoverLocals’ Attack via Vulnerable AMD, Apple, Qualcomm GPUs

Researchers show how a new attack named LeftoverLocals, which impacts GPUs from AMD, Apple and Qualcomm, can be used to obtain AI data.

LeftoverLocals GPU attack

Researchers have demonstrated a new attack method that leverages a vulnerability in graphics processing units (GPUs) to obtain potentially sensitive information from AI and other types of applications.

The vulnerability, dubbed LeftoverLocals and officially tracked as CVE-2023-4969, was discovered by researchers at cybersecurity firm Trail of Bits. The company on Tuesday published a blog post detailing its findings, and an advisory was released the same day by the CERT Coordination Center. 

Tests conducted by Trail of Bits showed that Apple, AMD and Qualcomm GPUs are affected. In addition, the company learned that some GPUs from Imagination Technologies are impacted as well. Products from Arm, Intel, and Nvidia do not appear to be affected.  

Qualcomm and Apple have started releasing patches, and AMD has published an advisory informing customers that it plans to release mitigations in March 2024, noting that they will not be enabled by default.

The LeftoverLocals vulnerability exists because some GPUs fail to properly isolate process memory. This could allow a local attacker to obtain sensitive information from a targeted application. For instance, a malicious app installed on the targeted device may be able to exploit the vulnerability to read GPU memory associated with another application, which could contain valuable data.

GPUs were originally developed for graphics acceleration, but they are now used for a wider range of applications, including artificial intelligence.   

“The attacker only requires the ability to run GPU compute applications, e.g., through OpenCL, Vulkan, or Metal. These frameworks are well-supported and typically do not require escalated privileges. Using these, the attacker can read data that the victim has left in the GPU local memory simply by writing a GPU kernel that dumps uninitialized local memory. These attack programs, as our code demonstrates, can be less than 10 lines of code,” Trail of Bits said, noting that launching such an attack is not difficult. 
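The core condition described above — GPU local memory that is not cleared between kernel launches — can be illustrated with a toy CPU-only simulation. This is not the Trail of Bits exploit code; the buffer, function names, and "secret" below are all illustrative stand-ins, and a real attack would be written against a GPU framework such as OpenCL, Vulkan, or Metal.

```python
# Toy simulation of the LeftoverLocals condition: "local memory" (here a
# plain bytearray standing in for a GPU's on-chip scratchpad) is never
# zeroed between "kernel" launches, so a second kernel from a different
# application can read whatever the first one left behind.

LOCAL_MEM_SIZE = 64
local_memory = bytearray(LOCAL_MEM_SIZE)  # shared scratchpad, never cleared

def victim_kernel(secret: bytes) -> None:
    """Victim writes intermediate data (e.g. LLM state) to local memory."""
    local_memory[: len(secret)] = secret
    # The kernel finishes -- but the hardware does not scrub local memory.

def listener_kernel() -> bytes:
    """Attacker 'dumps uninitialized local memory' into its own buffer."""
    return bytes(local_memory)

victim_kernel(b"chatbot response fragment")
leaked = listener_kernel()
assert b"chatbot response" in leaked
```

On vulnerable hardware the listener would be an equally short GPU kernel that copies its (uninitialized) local array out to global memory, which is what makes the sub-10-line attack programs the researchers describe possible.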

The company’s researchers demonstrated how an attacker could use LeftoverLocals to create covert channels on iPhones, iPads and Android devices, and they also developed a proof-of-concept that shows how an attacker could listen in on the victim’s conversation with an LLM application, specifically an AI chatbot.


They showed how an attacker could leverage leaked GPU memory to stealthily obtain the responses given by the chatbot to the user. 

“An attack program must be co-resident on the same machine and must be ‘listening’ at the same time that the victim is running a sensitive application on the GPU. This could occur in many scenarios: for example, if the attack program is co-resident with the victim on a shared cloud computer with a GPU,” Trail of Bits said. “On a mobile device, the attack could be implemented in an app or a library. Listening can be implemented efficiently, and thus can be done repeatedly and constantly with almost no obvious performance degradation.”

Related: Vulnerability Could Have Been Exploited for ‘Unlimited’ Free Credit on OpenAI Accounts

Related: Critical Vulnerability Found in Ray AI Framework 

Related: Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.

