Connect with us

Hi, what are you looking for?

SecurityWeekSecurityWeek

Artificial Intelligence

ChatGPT’s Calendar Integration Can Be Exploited to Steal Emails

Researchers show how a crafted calendar invite can trigger ChatGPT to exfiltrate sensitive emails.

ChatGPT attack

A new ChatGPT calendar integration can be abused to execute an attacker’s commands, and researchers at AI security firm EdisonWatch have demonstrated the potential impact by showing how the method can be leveraged to steal a user’s emails.

EdisonWatch founder Eito Miyamura revealed over the weekend that his company has analyzed ChatGPT’s newly added Model Context Protocol (MCP) tool support, which enables the gen-AI service to interact with a user’s email, calendar, payment, enterprise collaboration, and other third-party services. 

Miyamura showed in a demo how an attacker could exfiltrate sensitive information from a user’s email account simply by knowing the target’s email address. 

The attack starts with a specially crafted calendar invitation sent by the attacker to the target. The invitation contains what Miyamura described as a ‘jailbreak prompt’ that instructs ChatGPT to search for sensitive information in the victim’s inbox and send it to an email address specified by the attacker.

The victim does not need to accept the attacker’s calendar invite to trigger the malicious ChatGPT commands. Instead, the attacker’s prompt is initiated when the victim asks ChatGPT to check their calendar and help them prepare for the day.

These types of AI attacks are not uncommon and they are not specific to ChatGPT. SafeBreach last month demonstrated a similar calendar invite attack targeting Gemini and Google Workspace. The security firm’s researchers showed how an attacker could conduct spamming and phishing, delete calendar events, learn the victim’s location, remotely control home appliances, and exfiltrate emails.

Advertisement. Scroll to continue reading.

Zenity also showed last month how integration between AI assistants and enterprise tools can be exploited for various purposes. The AI security startup shared examples of attacks targeting ChatGPT, Copilot, Cursor, Gemini, and Salesforce Einstein. 

EdisonWatch’s demonstration is the first to target the newly released ChatGPT calendar integration. The research is noteworthy for how the agent fetches and executes calendar content through tool calls, which can amplify impact across connected systems. But, “it is not unique to OpenAI,” Miyamura explained. 

Because it’s a known class of vulnerabilities related to LLM integration and it’s not specific to ChatGPT, the findings have not been reported to OpenAI. AI companies are typically aware that these types of attacks are possible.

In the case of the ChatGPT attack demonstrated by EdisonWatch, the abused feature is currently only available in developer mode and the user needs to manually approve the AI chatbot’s actions. On the other hand, Miyamura pointed out that even if the attack requires victim interaction it could still be useful for threat actors.

“Decision fatigue is a real thing, and normal people will just trust the AI without knowing what to do and click approve, approve, approve,” Miyamura said.

EdisonWatch, founded by a team of Oxford computer science alumni, focuses on monitoring and enforcing company policy-as-code for AI interactions with company software and systems of record in an effort to help organisations scale AI pilots safely and securely. 

The security firm has released version 1 of an open source solution designed to mitigate the most common types of AI attacks, helping secure integrations and reducing the risk of data exfiltration. 

Related: UAE’s K2 Think AI Jailbroken Through Its Own Transparency Features

Related: How to Close the AI Governance Gap in Software Development

Written By

Eduard Kovacs (@EduardKovacs) is senior managing editor at SecurityWeek. He worked as a high school IT teacher before starting a career in journalism in 2011. Eduard holds a bachelor’s degree in industrial informatics and a master’s degree in computer techniques applied in electrical engineering.

Trending

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest threats, trends, and technology, along with insightful columns from industry experts.

With "Shadow AI" usage becoming prevalent in organizations, learn how to balance the need for rapid experimentation with the rigorous controls required for enterprise-grade deployment.

Register

Delve into big-picture strategies to reduce attack surfaces, improve patch management, conduct post-incident forensics, and tools and tricks needed in a modern organization.

Register

People on the Move

AutoNation has appointed Brian Fricke as Chief Information Security Officer.

Varun Kohli has joined GetReal Security as Chief Marketing Officer.

MongoDB has appointed Doug Bowers as Chief Information Security Officer.

More People On The Move

Expert Insights

Daily Briefing Newsletter

Subscribe to the SecurityWeek Email Briefing to stay informed on the latest cybersecurity news, threats, and expert insights. Unsubscribe at any time.