OpenAI’s GPT Store: What to Know

Many are speculating that, at long last, OpenAI’s GPT store is set to go live this week. GPT builders and developers received an email on January 4th notifying them of the launch, which has been rumored for months and was likely delayed only by the drama that has taken place at the company. This blog will summarize what this means for citizen development and how security teams should approach this new technological breakthrough from the AI giant.

No-Code, No Problem 

In that email, OpenAI asked developers to check that their GPTs are compliant with OpenAI policies and guidelines. The announcement also included news of a no-code, plain-language GPT builder. TechCrunch covered this news, adding that building GPTs does not require any coding knowledge and can be as complex or simple as the builder desires. As an added incentive to spur submissions, OpenAI has even listed a number of popular AI assistants that it is encouraging people to build, in return for an unspecified revenue split.

While this democratization of development is a tremendous push forward for innovation and technology, it raises a lot of questions about how secure these user-developed GPTs are and how much control OpenAI inherently has to cede in order to roll out the newest addition to its growing platform. This is no different from other platforms, such as Microsoft’s Power Platform, which introduced Copilot Studio just a few weeks ago to empower business users to build their own AI Copilots and bots.

Flood of Development Creates Weak Links

As people are incentivized to build their own GPTs and chatbots, businesses can embrace innovation from any corner through no-code development. This can help to accelerate digital transformation efforts, automate redundant and mundane processes, and make more efficient use of human creativity. However, it can also introduce security and compliance issues, especially considering the guardrails that OpenAI has (and has not) implemented to ensure connectors are of a certain quality.

This is very similar to other low- and no-code development platforms that allow professional and citizen developers to create their own components, which can then be leveraged by others as they build apps, automations, Copilots, and more across various platforms. When this happens, the risk of supply chain attacks rises rapidly, as more and more components are introduced outside the purview or control of the platform. Components that are not monitored for patches, updates, and proper security controls can be sitting ducks for attackers, who can leverage them to gain access to a wide variety of apps, automations, and more, across more than just one company.

This new world of shadow AI development also dramatically increases the risk of data leakage. As business users integrate and build new GPTs and automations into their corporate environments, people who interact with those resources are likely inserting sensitive data into these GPTs to do complex work. The problem is that as third-party components and GPTs are being used, does anyone really know who’s on the other end? It could well be bad actors, or even actors with good intentions who simply did not lock down the back end of the GPT to ensure that people are properly authenticating, that people who access transcripts are who they say they are, and that data is not leaking.

With OpenAI’s GPT store launching this week, security teams need to stay vigilant and enforce strict controls to ensure that, as people interact with various GPTs, they are not sharing sensitive information that can inadvertently leak, causing security and compliance failures.
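
To make “strict controls” a bit more concrete, below is a minimal, hypothetical sketch of what a pre-submission check might look like: before a prompt is handed to a third-party GPT, it is scanned for obvious sensitive patterns such as email addresses, card numbers, or API keys. The pattern names, thresholds, and function names are illustrative assumptions, not part of any OpenAI API or any specific product.

```python
import re

# Illustrative patterns only -- a real data loss prevention (DLP) policy
# would be far broader and tuned to the organization.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def sensitive_findings(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def check_prompt(prompt: str) -> str:
    """Decide whether a prompt may be sent to a third-party GPT."""
    hits = sensitive_findings(prompt)
    if hits:
        return "BLOCKED: prompt appears to contain " + ", ".join(hits)
    return "ALLOWED"

if __name__ == "__main__":
    print(check_prompt("Summarize this contract for jane.doe@acme.com"))  # BLOCKED
    print(check_prompt("Draft a polite follow-up note to a vendor"))      # ALLOWED
```

In practice, a check like this would sit in a gateway or proxy that brokers traffic to external GPTs, rather than relying on end users to police their own prompts.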

Lack of Clarity on Quality

Similar to the world of citizen development, OpenAI is aggressively pursuing ways to bring technology closer to people, regardless of their technical backgrounds. However, OpenAI, like any other company, has limits to what it can control, particularly when projects are essentially open-sourced for the world to experiment with. While it is immensely helpful (and important!) to allow business users of all technical backgrounds to create their own resources to get work done more efficiently, the platforms themselves lose control and cannot reasonably provide clear guidance on how to discern vetted ‘build-your-owns’ from ones that are potentially riddled with malware or other malicious software that can cause real damage.

As GPTs become open-sourced, the store will likely be flooded with lots of useful (and not so useful…) GPTs that end users can plug into their normal day-to-day workflows. As the volume grows, so too does the likelihood that bad actors can infiltrate the store with malicious GPTs that can be used to:

  • Access sensitive data
  • Download malicious software onto endpoints
  • Serve as reconnaissance tools, allowing attackers to lie in wait, move laterally, etc.

Security teams have a tall task to keep up with all of this citizen-led development that is, by definition, happening outside of the purview of IT, security, and the platform makers themselves. 

The Current State of Citizen Development

OpenAI’s foray into building out its platform is similar to what Microsoft has done in introducing Copilot Studio, which empowers any business user to build their own Copilots, GPTs, and AI-powered bots. As the world of low-code/no-code development evolves, enabling people to build their own AI Copilots and GPTs was a natural next step, and one that has been a long time coming. As this type of development is democratized, less technical business users are forced to make decisions that may be above their technical expertise, are likely to keep risky default permissions in place, and more. We highlighted this in a short video (and subsequent blog) that applies directly to Copilot Studio, but many of the lessons learned can be applied here as well.

All of this is to say that as more people have access to these types of technologies, security teams need to step up their ability to secure and govern them. The challenge is how…

When There’s No Code, How Can Security Keep Up? 

For a long time, application development has been done by a small group of people (app developers, IT pros, coders, etc.), and security was handled by scanning code bases and identifying flaws in lines of code through static, dynamic, or runtime analysis. However, when there is no code, things are different. Security teams need to take the fight directly to the platforms.

Zenity is the first security and governance platform that helps customers gain control over citizen developers who are building enterprise-grade AI Copilots, apps, automations, and more across a wide variety of low-code/no-code platforms. Our platform integrates directly with these platforms via APIs to provide:

  • Continuous visibility
  • Automatic risk assessment
  • Autonomous governance

Security teams first need to maintain an inventory of all AI Copilots, bots, apps, and automations that are created across the enterprise, and then take it one step further by inspecting each individual component that is baked into those apps. However, as more and more people can create their own unregulated components and GPTs, that becomes increasingly challenging. We’re excited to see how all the innovation unfolds as OpenAI broadens its platform and incorporates no-code capabilities, while insisting that security and governance must be in place so that organizations can fully realize the productivity and innovation gains. Come chat with us; we’d love to hear how your organization is leveraging AI and the ripple effects for the security team.
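
As a rough illustration of that inventory-and-inspection step, here is a hypothetical sketch: a list of discovered Copilots, bots, and automations (however they were enumerated, for example via each platform’s APIs) is checked for a couple of simple risk signals, such as missing authentication or unvetted third-party components. The data model and risk rules are assumptions made for illustration, not a description of Zenity’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A discovered Copilot, bot, app, or automation (hypothetical model)."""
    name: str
    owner: str
    requires_auth: bool
    third_party_components: list[str] = field(default_factory=list)
    vetted_components: set[str] = field(default_factory=set)

def assess(resource: Resource) -> list[str]:
    """Return simple, illustrative risk findings for one resource."""
    findings = []
    if not resource.requires_auth:
        findings.append("no authentication required")
    unvetted = [c for c in resource.third_party_components
                if c not in resource.vetted_components]
    if unvetted:
        findings.append("unvetted third-party components: " + ", ".join(unvetted))
    return findings

# Example inventory, e.g. as it might be assembled from platform APIs.
inventory = [
    Resource("expense-report-gpt", "finance", requires_auth=True,
             third_party_components=["pdf-parser"], vetted_components={"pdf-parser"}),
    Resource("customer-faq-bot", "marketing", requires_auth=False,
             third_party_components=["web-scraper"]),
]

for resource in inventory:
    issues = assess(resource)
    print(f"{resource.name} (owner: {resource.owner}): "
          + ("; ".join(issues) if issues else "no findings"))
```

Even a simple pass like this makes the gap visible: resources built outside IT’s purview show up in one place, and the riskiest ones can be triaged first.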
