Microsoft Fights Silent AI Attacks with Proactive Testing

Microsoft has introduced a new AI security testing platform aimed at identifying vulnerabilities in enterprise email systems, following the recent discovery of a critical flaw in Microsoft 365 Copilot.
The exploit, named “EchoLeak”, was uncovered by cybersecurity firm Aim Security. It allowed attackers to extract internal company data simply by sending a specially crafted email without any interaction from the recipient.
EchoLeak Exposed: Microsoft Hardens Copilot AI
Copilot, in processing the message, would automatically retrieve documents or messages and embed sensitive data into image links or markdown content.
These elements could then be accessed via trusted Microsoft domains like SharePoint and Teams.
Cataloged as CVE‑2025‑32711 and rated critical (CVSS 9.3), EchoLeak is considered the first-ever zero-click vulnerability in a commercial AI assistant.
“This incident highlights the growing risks of AI tools that operate within communication channels,” said cybersecurity analyst David Marcus. “No user action was needed just a message was enough.”
Microsoft responded swiftly with a server-side patch in May 2025, confirming that no customer data was compromised. To bolster defenses, the company also introduced several key measures:
Blocking external email content ingestion
Strengthening prompt filtering
Reinforcing data access boundaries within Copilot
In parallel, Microsoft launched an AI vulnerability assessment platform, enabling internal teams to simulate real world attacks, analyze system behavior, and fix weak points proactively before they can be exploited in the wild.
The move reflects a broader industry realization: as generative AI becomes embedded in everyday workflows, ensuring security and control is just as critical as performance and productivity.