Vulnerabilities

OpenAI Patches Account Takeover Vulnerabilities in ChatGPT

OpenAI resolved severe ChatGPT vulnerabilities that could have been exploited to take over accounts.

Last week, ChatGPT creator OpenAI patched multiple severe vulnerabilities that could have allowed attackers to take over user accounts and view chat histories.

The first was a critical web cache deception bug that could have allowed attackers to access user information such as names, email addresses, and access tokens returned by OpenAI’s session API.

To exploit the vulnerability, an attacker could append a .css extension to the session endpoint path and send the resulting link to the victim. When the victim opens the link, the caching server stores the personalized response, and the attacker can then fetch the cached copy to harvest the victim’s credentials and take over the account.
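As a rough illustration of the technique described above (the endpoint path and filename below are hypothetical examples, not OpenAI’s actual routes), a web cache deception link simply appends a static-looking filename to an authenticated endpoint, so an extension-based cache rule treats the dynamic response as a cacheable asset:

```python
# Illustrative sketch of web cache deception. The base URL, endpoint,
# and filename are placeholders for illustration only.
def craft_deceptive_url(base: str, endpoint: str, fake_resource: str = "x.css") -> str:
    """Append a static-looking filename to an authenticated endpoint path.

    If the CDN decides cacheability by file extension while the origin
    still serves the dynamic response for the extended path, the victim's
    personalized response gets cached under this URL, where the attacker
    can later retrieve it without authentication.
    """
    return f"{base}{endpoint}/{fake_resource}"


url = craft_deceptive_url("https://example.com", "/api/auth/session")
print(url)  # https://example.com/api/auth/session/x.css
```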

Reported by Shockwave CEO and founder Gal Nagli, the bug was quickly addressed by instructing the caching server, via a regex rule, not to cache responses from the endpoint.
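The shape of such a fix can be sketched as follows (the path and rule here are assumptions for illustration, not OpenAI’s actual configuration): a regex excludes the sensitive endpoint from the cache regardless of any suffix appended to the path, rather than deciding cacheability by file extension.

```python
import re

# Hypothetical cache-exclusion rule in the spirit of the described fix:
# never cache anything under the session endpoint, whatever suffix is appended.
NO_CACHE = re.compile(r"^/api/auth/session(/.*)?$")


def is_cacheable(path: str) -> bool:
    """Return False for paths the cache must never store."""
    return NO_CACHE.match(path) is None


print(is_cacheable("/api/auth/session/x.css"))  # False: excluded despite the .css suffix
print(is_cacheable("/static/app.css"))          # True: ordinary static asset
```

Matching on the endpoint prefix rather than the extension closes the deception vector for that route, though, as described below, equivalent endpoints left out of the rule remain exposed.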

The fix, however, was not enough to keep an attacker out of user accounts, security researcher and CISO Ayoub Fathi explains. While analyzing the fix, he discovered a bypass method that could be used against another ChatGPT API, providing an attacker with access to a user’s conversation titles.

This was essentially another web cache deception attack: the API response to a forged ‘/backend-api/conversations’ link would be cached, and the attacker could then retrieve the cached HTTP response, which contains the victim’s conversation titles.

Digging deeper, the researcher was able to bypass OpenAI’s fix for the original account takeover issue, using a new payload, and discovered that all ChatGPT APIs were vulnerable to the bypass, allowing an attacker to read conversation titles, full chats, and account status.

Fathi says he worked with the OpenAI team to help them fully address all issues.


No bug bounty reward was issued to either researcher, as OpenAI does not have a bug bounty program in place.

The vulnerabilities were reported days after OpenAI took ChatGPT offline to address a vulnerability in an open-source Redis client library, which allowed users to view other users’ chat data and payment-related information.

Related: ChatGPT Data Breach Confirmed as Security Firm Warns of Vulnerable Component Exploitation

Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC


Copyright © 2024 SecurityWeek ®, a Wired Business Media Publication. All Rights Reserved.
