You’ve come to the right place to prevent Remote Copilot Execution (RCE) and promptware
Whether its end users interacting with an enterprise copilot like Copilot for M365, or building their own on Copilot Studio, AI gains sweeping access to your data on your behalf, to be used at its discretion
Nearly every large enterprise already leverages Copilot, giving business users access to corporate data, and over 10,000 organizations use Copilot Studio to enable anyone to build their own copilots
AppSec tools focused on code scanning can’t help address the new attack surface that AI introduces, and least privilege and data classification controls are easily circumvented
When bad actors interact with copilots they can trick it into giving up control… and data with malicious prompts leading to remote copilot execution and promptware
RAG poisoning and RCE attacks allow for bad actors to remotely control copilots. Given copilot’s vast access, this effectively means compromising employee accounts via something as simple as sending them an email. These attacks can poison datasets, intercept prompts and gain access to huge amounts of sensitive data and identities. We see this playing out in the below scenarios:
Automate spear phishing to find Copilot collaborators, find interactions, and craft responses to get someone to click a malicious link
An external hacker gets an RCE on the copilot interaction of a user in the finance department right before an earnings call.
An external hacker gets an RCE to have Copilot provide users with the attacker’s malicious phishing site, when the user asks for navigation guidance
If you’re looking to kickstart your enterprise copilot security program, schedule a free assessment now!