Can Microsoft Copilot Into a Phishing Tool?
A recent conference highlighted how the AI-based Copilot system running on Microsoft 365 applications could be manipulated by cybercriminals. The picture of the future did appear bleak but for the fact that users are still getting used to the AI smarts of the application
Ever since generative AI assumed pride of place in our day-to-day technology lexicon, most companies, both large and small, have been racing against each other to establish their stamp of authority in building use-cases. Microsoft included this technology into its Copilot AI system to get answers from emails, chats, and files in order to enhance productivity.
However, now researchers are worried that the very same system that helped users become more productive may also render them more susceptible to cyberattacks that could not only open up secure data to external elements but generate false references to files and do so by dodging Microsoft’s data protection protocols.
Your Word Document could turn into a phishing machine
The Black Hat security conference held at Las Vegas last week saw researcher Michael Bargury demonstrating five ways that Copilot apps such as Word could be manipulated. What was alarming was to be shown that it was possible to turn artificial intelligence (AI) into an automatic phishing machine.
Bargury, who is the co-founder and CTO at Zenity took to his LinkedIn page to inform us that the company had demonstrated at Black Hat how a single email could allow attackers to take full control over Copilot on Microsoft 365. “This marks the first demonstrated attack on enterprise AI, with concrete implications on the security of most enterprises today,” he said.
He explained that attackers won’t require prior access or knowledge of a user’s system but would require to send a single email or a Microsoft Teams message or a calendar invite and then Poof! “This is not a vulnerability to be mitigated but rather a class of vulnerabilities to be managed, exploited by malicious content we call PromptWare,” he says.
The post informed readers that while video recordings of the event weren’t made available to everyone yet, the company had gone ahead and published an article to clarify misconceptions and propose a path forward. It also noted that the company continued to work closely with Microsoft security teams.
What did the demonstration tell prospective users?
The red-teaming code of Bargury, called LOLCopilot, can use access someone’s work email and then use the generative AI tool to see who all mail them regularly, draft a message mimicking a user’s writing style and send a personalized blast that could include a malicious link or attached malware.
“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf. A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.” Says Bargury, whose company focuses on app security for enterprise copilots, low-code and no-code app development.
Other demonstrations created by Bargury and shared in an elaborate post on the company website a couple of days ago, uses LLMs as per their design whereby typing written questions to access data that can be retrieved by AI. However, the challenge is that it could produce malicious results by adding additional data or instructions to perform some actions.
Microsoft is working with the researcher to fix things
Therefore, the research sounds a warning note to enterprises who are looking to connect AI systems to corporate data. All that a hacker needs to do is ingest some untrusted external data into the mix to mess up the answers, especially to queries that generate results that appear legitimate.
Meanwhile, a report published by The Wired quotes Phillip Misner, head of AI incident detection and response at Microsoft to suggest that they were working with Bargury. He noted that risks of post-compromise abuse of AI were similar to other post-compromise techniques as well.
In a blog post on the company website, Bargury has the following advice for app builders around AI. He wants them to follow design patterns identified by the community as these could cut down on sharp edges of the problem, but reduce the usability of the application itself. (Read the entire blog post here).