Last week, Michael Bargury and the team at Zenity published a video summarizing 6 vulnerabilities that are found in Microsoft Copilot Studio. The video highlights, in sequence, a myriad of ways that business users can create their own AI Copilots that are risky, why they are risky, and how they can be easily exploited. While I highly recommend checking out the video, this blog sets out to provide a look at why these vulnerabilities matter, and what considerations should be taken to mitigate them.
Poor Authentication Protocols Lead to Over-Sharing
After building a Copilot, builders can both publish them for broader use, as well as share them with other business users at the organization to help drive efficiency and productivity. The challenge for security teams is that the default setting makes it so that no authentication is required before other people can use this Copilot, meaning that it is publicly accessible to anyone with the link.
In the video, Michael cites an example of an HR app, but also think of a financial Copilot that’s been created to verify budget information. This Copilot, in order to provide timely and accurate information, likely needs to be connected to an internal Sharepoint site or SAP table that contains sensitive data pertaining to corporate budget. If anyone can simply log on to access a bot that has access to potentially sensitive data, it can lead to data leakage and/or a failed audit.
Further, when building a bot without authentication methods, the builder embeds their identity into the application, making it so that every time someone logs in or uses the bot, it appears that it is the builder. This results in a lack of visibility for security leaders, but also is a prime example of credential sharing, where any user that interacts with the bot essentially interacts with, and uses someone else’s credentials.
The Process of Mitigation
While it is relatively straightforward to identify and fix the issue, two things need to proactively happen. First, an authentication method must be chosen, and second, builders must select the box that requires users to log in. This whole process is very reminiscent of ‘opt-in’ vs. ‘opt-out’ that many businesses inject into processes that allows them to harvest data, send spammy emails, and more, unless the user goes out of their way to correct it (which many don’t).
While seemingly simple, in the high-velocity world of citizen development, it is very easy to make a mistake and/or forget. This is pronounced when less technical users are the ones building their own apps and bots, as they lack the training and awareness to think of potential security vulnerabilities.
To prevent this from happening, anyone using Copilot Studio should be instructed to double check the authentication method for a Copilot they are building. There are some instances where it makes sense to not require authentication (think of a customer service chatbot that pops up when you enter a new website), but in many cases, particularly those that require access to sensitive data, unchecked access is not needed.
AI Copilots are often targeted in prompt injection attacks, where in a direct attack, a hacker modifies a large language model’s input in an attempt to overwrite existing system prompts. In the example given, Michael shows what a prompt injection looks like in the video with a simple request for the bot to provide information about an impending layoff plan that is contained in a Sharepoint site. Even for verified employees, this type of information would not be something desirable to ‘get out,’ and even less desirable for a bot that has not been fortified with even basic authentication.
However, it becomes that much worse when unauthorized users access these bots with bad intentions. It becomes easy, especially when these bad actors are masquerading as trusted insiders, to ‘trick’ the bots to giving them sensitive information, which results in frequent and widespread data leaks. This can also, of course, lead to failures of compliance and data exfiltration.
Violation of Least Privilege
Finally, within Copilot Studio, in order to track performance and see the interactions with each individual bot, Microsoft provides access to transcripts in a shared table. However, for now, this is a shared table that is accessible by many, that contains all transcripts from all of the bots. While this is a clear violation of least privilege, it is done with good intentions, because people need to be able to verify that bots are working as designed, so they can tweak and optimize them. However, in this case, it provides a single place that contains all transcripts and engagements with each bot, resulting in a huge opportunity for data leakage.
For starters, many organizations are rushing to slow the use and adoption of AI tools, but in many cases, cannot be fully contained. At Zenity, we became the first company to offer support for business-led development of Enterprise AI Copilots just a couple of weeks ago and are already providing value to our customers in the way of:
- Maintaining Visibility. With Zenity, customers can continuously identify copilots as they are being introduced to the corporate environment. Zenity also tags sensitive data and correlates that with any copilot that has access to storing, processing, or otherwise handling data that should be tightly guarded.
- Assessing Risk. Each copilot and bot is also automatically scanned and analyzed to determine which bots lack proper authentication methods, are likely to leak data, are susceptible to prompt injections, and more.
- Governing Citizen Development. As citizen developers are prone to putting bots into production with errors, security leaders can autonomously resolve security and compliance issues and ensure that as more and more people use tools like Copilot Studio, that there are guardrails in place to ensure harmony.
If you’d like to see this in action, come chat with us!