ChatGPT Plugin Vulnerabilities Exposed Data, Accounts

Three types of vulnerabilities related to ChatGPT plugins could have led to data exposure and account takeovers. 

API security firm Salt Security has analyzed ChatGPT plugins and found several types of vulnerabilities that could have been exploited to obtain potentially sensitive data and take over accounts on third-party websites.

ChatGPT plugins enable users to access up-to-date information (rather than the relatively old data the chatbot was trained on), as well as to integrate ChatGPT with third-party services. For instance, plugins can allow users to interact with their GitHub and Google Drive accounts.

However, when a plugin is used, ChatGPT needs permission to send the user’s data to a website associated with the plugin, and the plugin may need access to the user’s account on the service it’s interacting with. 

The first vulnerability identified by Salt Security impacted ChatGPT directly and involved OAuth authentication. An attacker who tricked a victim into clicking on a specially crafted link could install a malicious plugin, configured with the attacker's own credentials, on the victim's account, and the victim would not need to confirm the installation. 

As a result, any message typed by the victim, including messages containing credentials or other sensitive data, would be sent to the plugin, and implicitly to the attacker. 
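The missing safeguard, in OAuth terms, is a per-session "state" value tying the installation callback to the user who actually initiated the flow. The sketch below illustrates that defense; the Flask routes, URLs and parameter names are hypothetical and do not represent OpenAI's actual endpoints.

```python
import secrets

from flask import Flask, abort, redirect, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

@app.route("/plugin/install/start")
def start_install():
    # Generate an unguessable state value and bind it to this user's session.
    session["oauth_state"] = secrets.token_urlsafe(32)
    return redirect(
        "https://auth.example-plugin.com/authorize"
        f"?response_type=code&state={session['oauth_state']}"
    )

@app.route("/plugin/install/callback")
def finish_install():
    # Reject callbacks that did not originate from this session; this is
    # what stops an attacker-crafted link from completing an installation
    # with the attacker's own authorization code.
    expected = session.pop("oauth_state", None)
    if expected is None or request.args.get("state") != expected:
        abort(403)
    code = request.args.get("code")
    # ... exchange `code` for tokens and bind the plugin to this account ...
    return "Plugin installed"
```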

The second vulnerability was found in AskTheCode, a plugin developed by PluginLab.AI that enables users to interact with their GitHub repositories. The flaw could have allowed an attacker to take control of a victim's GitHub account and gain access to their code repositories through a zero-click exploit. 
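Zero-click takeovers of this kind typically come down to an authorization endpoint that hands out account credentials without verifying who is asking. The hypothetical handler below shows the identity check whose absence enables such an attack; every endpoint and field name here is invented and does not reflect PluginLab.AI's real API.

```python
import secrets

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

@app.route("/oauth/code")
def issue_code():
    requested_member = request.args.get("member_id")
    # The critical check: the caller's authenticated session must match the
    # account a code is requested for. Without this comparison, anyone who
    # can obtain or derive a victim's member_id can mint an authorization
    # code for the victim's account, with no interaction from the victim.
    if requested_member is None or session.get("member_id") != requested_member:
        abort(401)
    # ... issue a short-lived authorization code bound to requested_member ...
    return {"code": secrets.token_urlsafe(24)}
```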

The third vulnerability, also related to OAuth, was found to impact several plugins, but Salt demonstrated its findings on a plugin called Charts by Kesem AI. An attacker who tricked a user into clicking on a specially crafted link could have taken over the victim's account associated with the plugin. 
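Crafted-link account takeovers in OAuth flows commonly hinge on the authorization step accepting an attacker-supplied redirect destination, so that the victim's authorization code or token is delivered to the attacker. A standard mitigation is exact-match validation of the redirect_uri against a pre-registered allowlist, sketched below with invented client IDs and URIs.

```python
# Pre-registered exact redirect URIs per OAuth client; both the client ID
# and the URIs are invented for illustration.
REGISTERED_REDIRECTS = {
    "charts-plugin": {"https://charts.example.com/oauth/callback"},
}

def validate_redirect_uri(client_id: str, redirect_uri: str) -> bool:
    # Exact string comparison against the allowlist. Prefix or substring
    # checks can be bypassed via attacker-controlled hosts or paths, which
    # is the class of flaw described above.
    return redirect_uri in REGISTERED_REDIRECTS.get(client_id, set())

assert validate_redirect_uri(
    "charts-plugin", "https://charts.example.com/oauth/callback"
)
assert not validate_redirect_uri(
    "charts-plugin", "https://evil.example/oauth/callback"
)
```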

The vulnerabilities were reported to OpenAI, PluginLab.AI and Kesem AI shortly after they were discovered in the summer of 2023, and the vendors rolled out patches over the following months. 

When Salt Security conducted its research, ChatGPT plugins represented the primary means of adding functionality and features to the LLM. In November 2023, OpenAI announced that paying customers would be able to create their own GPTs, customized for specific topics or tasks. These GPTs are expected to replace plugins.

On the other hand, Salt Security said it found vulnerabilities in GPTs as well, and it plans to detail them in an upcoming blog post. Others have also found ways to abuse GPTs to obtain potentially valuable data. 

Related: Simple Attack Allowed Extraction of ChatGPT Training Data

Related: Cyber Insights 2024: Artificial Intelligence

Related: Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting
