HomeWinBuzzer NewsSecurity Flaws in Microsoft Copilot AI Exposed at Black Hat

Security Flaws in Microsoft Copilot AI Exposed at Black Hat

Researchers at Black Hat demonstrated how Microsoft's Copilot AI can be exploited to automate phishing attacks, steal data, and bypass security measures.

-

At the Black Hat security conference in Las Vegas, researchers demonstrated how Microsoft's Copilot AI could automate phishing scams, extract confidential data, and sidestep security barriers. Michael Bargury, cofounder and CTO of Zenity, showcased these vulnerabilities within , embedded in 365 apps.

Spear-Phishing Capabilities Exposed

According to Bargury's findings, an attacker who infiltrates a user's email could use Copilot to generate sophisticated spear-phishing emails. Mimicking the user's writing style and targeting known contacts, the AI can produce personalized emails rapidly, enhancing the deception significantly.

The researcher further demonstrated that Copilot could be tricked into divulging sensitive information like employee salaries without raising security alarms. Attackers can create precise prompts to direct the AI to withhold references to the originating files, thus evading Microsoft's data protection protocols. Additionally, the AI's database is vulnerable to being contaminated with malicious emails, which could lead to the AI feeding false data, including incorrect banking information.

Risk to Corporate Data

Introducing systems like Copilot into business environments introduces potential security problems. Such systems accessing corporate data can be susceptible to indirect prompt injection and database contamination.

Security professionals stress the need for thorough monitoring of AI outputs to ensure that AI actions are in line with user intentions. Phillip Misner, head of AI incident detection and response at Microsoft, acknowledged the identified flaws to The Register and assured ongoing efforts to address these issues.

At Black Hat, Bargury showcased five proof-of-concept exploits targeting . He pointed out that making a secure Copilot Studio bot is complex due to insecure default settings. His first presentation discussed Copilot Studio, a no-code tool for building custom enterprise bots. The second covered how attackers could abuse Copilot by accessing an organization's IT framework. Zenity provides security controls for Copilot and other enterprise assistants.

Copilot Studio enables non-technical users to create conversational bots using internal documents and data supported by retrieval-augmented generation (RAG). Research showed nearly 3,000 discoverable Copilot Studio bots in large enterprises, posing a risk of data exfiltration. Bargury's team found tens of thousands of bots were publicly accessible due to default configurations. Microsoft has rectified this issue for new setups, but existing ones remain vulnerable.

Bargury's team also developed CopilotHunter, a tool to find publicly accessible Copilot Studio bots and gather information. He warned that Copilot is prone to indirect prompt injection attacks, comparable to remote code execution in severity. Copilot can be manipulated to gain initial access and perform unwanted actions through tailored messages. Microsoft has been responsive, addressing several of Bargury's reported issues within their limits.

SourceZenity
Luke Jones
Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.

Recent News

Mastodon