Security researchers have exposed a critical vulnerability in Microsoft 365 Copilot, dubbed “EchoLeak,” that allowed attackers to automatically steal sensitive corporate data using a specially crafted email. The novel attack method required minimal user interaction and no explicit clicks, turning the AI assistant’s own powerful data-processing capabilities into a tool for exfiltration.
The discovery, detailed in a technical blog post by Aim Security, introduces a new class of AI-specific threat. Aim Security argues that this attack represents a new type of vulnerability, introducing the term “LLM Scope Violation” to describe a technique that may have “additional manifestations in other RAG-based chatbots and AI agents.” This technique manipulates a generative AI agent by feeding it malicious instructions hidden within what appears to be a harmless external input, tricking the agent into accessing and leaking privileged internal data.
Microsoft has since patched the vulnerability, which was assigned the identifier CVE-2025-32711 and included in its June 2025 Patch Tuesday release. While the company stated no customers were impacted by an active attack, the disclosure sends a stark warning to the industry about the inherent security challenges in the race to deploy increasingly autonomous AI agents across the enterprise. In its official advisory on the flaw, Microsoft confirmed the vulnerability allowed for “AI command injection” that could permit an “unauthorized attacker to disclose information over a network.”
Anatomy of a Sophisticated AI Heist
The EchoLeak exploit was a multi-stage chain that cleverly bypassed several of Microsoft’s key security guardrails. While initially described as a zero-click attack, a post by Varonis adds nuance, explaining that the attack flow requires a victim to eventually send a prompt to Copilot that semantically matches the attacker’s email content, making it a minimal-interaction exploit rather than a truly passive one.
The attack began with an email containing hidden instructions formatted using a specific Markdown syntax. Microsoft’s XPIA classifiers, designed to block prompt injection, were circumvented by phrasing the malicious instructions as if they were intended for a human recipient. To make the attack more effective, Aim Security detailed a weaponization technique called “RAG spraying,” where an email is filled with various topics to maximize the chance that a user’s future query will trigger the AI to retrieve the malicious content.
Once past these initial defenses, the exploit leveraged obscure variations of Markdown for reference-style images that were not properly redacted by Copilot. This allowed the creation of a URL designed to send data to an attacker’s server. To overcome browser-level security, the attackers found bypasses in Microsoft’s Content-Security-Policy (CSP) by routing the data exfiltration request through trusted Microsoft domains, specifically a SharePoint EmbedService
endpoint and a Teams content proxy.
A New Paradigm for Agentic AI Threats
The EchoLeak vulnerability highlights a fundamental challenge for any organization deploying AI systems built on Retrieval-Augmented Generation (RAG), a technique that allows AI to pull in real-time data to inform its responses. When an AI indiscriminately mixes untrusted external data with trusted internal data, the potential for compromise grows exponentially. This is one of several inherent risks in RAG architecture, which also include threats like data poisoning, where threat actors intentionally compromise training datasets used for AI training to influence or manipulate the model’s output or behavior.
The concept of an “LLM Scope Violation” can be compared to the early days of buffer overflow vulnerabilities in traditional software; the industry eventually developed specific terminology and defenses like “stack canaries” once those threats were properly understood. The core danger is that the attack leverages the permissions of the targeted user, meaning if an executive with broad data access is compromised, the AI can be turned into a powerful tool to find and exfiltrate the company’s most sensitive information.
According to Abhishek Anant Garg of QKS Group, corporate security teams struggle because their systems are designed to detect malicious code, not seemingly harmless language that has been weaponized. This sentiment underscores a growing concern that traditional security tools are ill-equipped to handle the nuances of agentic AI.
Microsoft’s Race to Secure a Rapidly Expanding Ecosystem
The disclosure of EchoLeak arrived just as Microsoft was in the midst of a massive strategic push to integrate AI agents across its entire product suite. Throughout the spring of 2025, the company announced its “Copilot Wave 2 Spring release” and declared the “Age of AI Agents” at its Build 2025 conference, unveiling a host of new tools for building and deploying them.
Microsoft has been publicly bolstering its security posture in recent years. At its Ignite 2024 conference, it introduced new options as part of the Copilot Control System, designed to give IT administrators granular control over agent creation and data access.
These efforts came as other researchers were also flagging potential issues, with a report from Pen Test Partners in May demonstrating how SharePoint Copilot could be manipulated to reveal sensitive data.
The EchoLeak incident underscores the immense pressure on tech giants to balance rapid innovation with robust security. The push to expand Copilot’s user base, which saw it integrated into consumer plans in January 2025, and to demonstrate user growth amid a reported stall earlier in the year, creates an environment where novel threats can emerge faster than defenses can be built. The EchoLeak vulnerability serves as a critical case study, proving that as AI becomes more capable and autonomous, the attack surface becomes more abstract and dangerous, demanding a fundamental rethinking of enterprise security.