Microsoft Copilot could have been hacked by some very low-tech methods

Copilot imagery from Microsoft
(Image credit: Microsoft)

Cybersecurity researchers have found a way to force Microsoft 365 Copilot to harvest sensitive data such as passwords, and send them to malicious third parties using “ASCII smuggling”

The ASCII smuggling attack required three things: Copilot for Microsoft 365 reading the contents of an email, or an attached document; having access to additional programs, such as Slack; and being able to “smuggle” the prompt with “special Unicode characters that mirror ASCII but are actually not visible in the user interface.”

As the researchers at Embrace the Red, who found the flaw, explain, Microsoft 365 Copilot can be told to read, and analyze, the contents of incoming email messages and attachments. If that email, or attachment, tells Microsoft 365 Copilot to look for passwords, email addresses, or other sensitive data in Slack, or elsewhere, it will do as it’s told.

Hidden prompts and invisible texts

Ultimately, if such a malicious prompt is hidden in an attachment, or email, via special Unicode characters that render it invisible to the victim, they may end up, unknowingly, telling their AI chatbot to hand over sensitive data to malicious third parties.

To prove their point, the researchers shared exploit demos with Microsoft, showcasing how sensitive data, such as sales number and multi-factor authentication (MFA) codes, can be exfiltrated and then decoded.

“An email is not the only delivery method for such an exploit. Force sharing documents or RAG retrieval can similarly be used as prompt injection angles,” the report concludes.

In the paper, the researchers recommended Copilot 365 stops interpreting, or rendering, Unicode Tags Code Points.

“Rendering of clickable hyperlinks will enable phishing and scamming (as well as data exfil),” the report concludes. “Automatic Tool Invocation is problematic as long as there are no fixes for prompt injection as an adversary can invoke tools that way and (1) bring sensitive information into the prompt context and (2) probably also invoke actions.”

Microsoft has since addressed the issue.

More from TechRadar Pro

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.