Microsoft 365 Copilot was until recently vulnerable to an attack method that could have been leveraged by threat actors to obtain sensitive information, AI security firm Aim Security reported on Wednesday.
The zero-click attack, dubbed EchoLeak and involving a vulnerability tracked as CVE-2025-32711, enabled attackers to get Copilot to automatically exfiltrate potentially valuable information from a targeted user or organization without requiring user interaction.
Microsoft on Wednesday published an advisory for the vulnerability, which it described as ‘AI command injection in M365 Copilot’ and classified as ‘critical’, but informed customers that a patch has been implemented on the server side and no customer action is required.
Learn More About AI Vulnerabilities at SecurityWeek’s AI Risk Summit
The Microsoft 365 Copilot is a productivity assistant designed to enhance the way users interact with applications such as Word, PowerPoint and Outlook. Copilot can query emails, extracting and managing information from the user’s inbox.
The EchoLeak attack involves sending a specially crafted email to the targeted user. The email contains instructions for Copilot to collect secret and personal information from prior chats with the user and send them to the attacker’s server.
The user does not need to open the malicious email or click on any links. The exploit, which Aim Security described as indirect prompt injection, is triggered when the victim asks Copilot for information referenced in the malicious email. That is when Copilot executes the attacker’s instructions to collect information previously provided by the victim and send it to the attacker.
For example, the attacker’s email can reference employee onboarding processes, HR guides, or leave of absence management guides. When the targeted user asks Copilot about one of these topics, the AI will find the attacker’s email and execute the instructions it contains.
In order to execute an EchoLeak attack, the attacker has to bypass several security mechanisms, including cross-prompt injection attack (XPIA) classifiers designed to prevent prompt injection. XPIA is bypassed by phrasing the malicious email in a way that makes it seem as if it’s aimed at the recipient, without including any references to Copilot or other AI.

The attack also bypasses image and link redaction mechanisms, as well as Content Security Policy (CSP), which should prevent data exfiltration.
“This is a novel practical attack on an LLM application that can be weaponized by adversaries,” Aim Security explained. “The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context – and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations.”
Aim Security pointed out that while it demonstrated the EchoLeak attack against Microsoft’s Copilot, the technique may work against other AI applications as well.
Related: The Root of AI Hallucinations: Physics Theory Digs Into the ‘Attention’ Flaw
Related: Going Into the Deep End: Social Engineering and the AI Flood
