
One man’s perspective on RSA 2026 and what the AI agent security market actually looks like up close.
Every year at RSA, there's a theme. Not the official one printed on the lanyards, but the real one: the one that shows up in every booth conversation, every hallway argument, every dinner where people finally say what they wouldn't say on a panel.
A few years back, it was cloud. Then zero trust took over and held the room for a while. XDR came through and confused everyone. Identity had its moment. Last year, AI was everywhere, but you could still feel people hedging, interested but not quite convinced. There was still a "wait and see" energy underneath all the enthusiasm.
This year felt different, and I don't mean louder or bigger or more crowded with vendors chasing the same buzzword. I mean the energy underneath all of it had shifted in a way I haven't felt before.
I've been going to this conference long enough to know the difference between a theme the industry is talking about and a problem the industry is genuinely losing sleep over. In previous years, AI was the cool kid everyone wanted to be seen with. This year it showed up and started making decisions without asking anyone.
Security leaders weren't walking the floor looking for the next tool to add to a slide. They were trying to figure out how to govern systems their enterprises have already deployed, that are already taking autonomous actions, in environments they're still trying to map.
The question isn't "should we pay attention to this?" anymore. The question is "how far behind are we, and how fast can we close the gap?" That's a meaningfully different conference.
It's a Gold Rush, and Many Are Still Selling Shovels
Walk the expo floor, and the first thing you notice is sheer volume. Over 600 vendors. Roughly 37% of booths had some version of "AI" in their primary messaging. And underneath all of it, a pretty consistent gap between what the signs said and what the screens showed.
I spent a lot of time this week trying to sort the market into something useful. What I kept coming back to is four patterns, not four categories, because a lot of vendors are blurring the lines intentionally.
The rebranders.
These are the established platforms, some of them among the largest security companies in the world, that have taken existing capabilities and wrapped them in new language. The product didn't change, but the pitch deck did. A subset of this group has a more ambitious play: acquire a startup with real agent security DNA, bolt it onto the platform, and present the whole thing as a unified story before the integration is anywhere near finished. It feels much closer to filling a gap in a spreadsheet of category coverage than building a comprehensive solution. The seams show if you know where to look. Or worse, ask the folks currently using those bolted-on solutions firsthand; you'll quickly hear the frustration.
To be fair, some of this is fine. If you already have deep visibility into identity or data, and AI agents are now generating risk in those same layers, your existing product may genuinely be part of the answer. The issue is when "part of the answer" gets packaged as "we've solved agent security," and buyers don't have enough context yet to push back.
The incumbents know how enterprise buying works. They know that if they can shape the RFP language before a newer vendor gets to the table, they don't have to win on depth. They just have to win on familiarity, existing usage, and streamlined procurement. A few of them were doing exactly that this week.
The point solutions.
A meaningful chunk of the floor was vendors who are genuinely good at one thing (model-layer protection, non-human identity, data security posture) and are now positioning that one thing as "agent security" because the category has gravity. Some of them are being honest about it. Some of them aren't. The tell is when you ask a specific question about a use case they don't cover, and the answer is a redirect. "That's a great question, actually, what we see customers prioritizing is..." You learn to recognize the pivot: responses steer away from gaps and toward whatever they're strongest at.
If a vendor can't give you a direct answer about what their product does at runtime, when an agent is making decisions in your environment, they probably don't have a runtime story yet. Build-time matters, but it doesn't finish the job. A useful follow-up question: ask them which customers are actively using their solution for agent security in production right now. The answers get interesting fast.
The governance plays.
This one is interesting because the messaging sounds right. Visibility. Policy enforcement. Shadow AI. Control. And for a certain buyer, one who is still trying to figure out which AI tools their employees are using, these products are genuinely useful. But there's a ceiling. Governance visibility tells you what's happening. It doesn't stop an agent from doing something it shouldn't.
When I dug into demos at RSA and pushed on the enforcement question, not "can you see it" but "can you stop it," the answers got soft quickly. Policy without enforcement is a report. Most organizations are already drowning in findings and want to move toward prevention and remediation, not a prettier dashboard to watch the problem in. Buyers are starting to understand that distinction.
The purpose-built platforms.
This was the smallest group and the most interesting to talk to. A handful of vendors, not many, have clearly been building with agents as the primary design constraint, not as an afterthought. You can tell almost immediately. The UI reflects agent behavior, not endpoint behavior or user behavior reframed. The product handles questions about runtime that the others sidestep. Coverage isn't limited to one environment because the problem isn't limited to one environment. Enterprises aren't running agents in one place. They're running managed SaaS agents, alongside homegrown pipelines, alongside endpoint developer tools, alongside whatever their business units have deployed without asking IT. Any vendor with a credible answer has to cover all of that.
Here's the thing about demos that nobody says out loud: you can tell within a few minutes whether what you're looking at is real. Real products have friction. They have specificity. They have edge cases built in. Hardcoded demos are smooth in a way that real products aren't, because real products have to handle the chaos of real environments. The vendors who let you go off-script, who can answer the "what if" questions in real time without pivoting to a slide, those were the minority this week. But they were the ones worth the longest conversations. Those sticking to the script often aren’t equipped to deal with the nuanced questions from practitioners dealing with real-world enterprise agentic risks.
The Real Conversations We’re Having
Strip away the booths and the signage, and what you're left with is security practitioners trying to answer three questions that nobody had a clean answer to even two years ago.
Identity.
Not "does this agent have a credential," that part's easy. The harder question is what agent identity even means when an agent can change its own code, call out to a tool it pulled from the internet, receive instructions through a channel you didn't anticipate, or operate under a human identity without that human's awareness.
Static identity models were built for users. Users log in, they do a thing, they log out. Agents don't work that way. An agent can start a session as one thing and end it as something functionally different. The access granted at the start may have nothing to do with what it's actually doing by the time you're looking at a log. The identity conversation kept coming back to this all week, and the vendors with the clearest answers were the ones who'd built their product around the lifecycle of an agent's actions, not just the moment of access provisioning.
The runtime vs. build-time debate.
There's a camp that believes if you harden the agent before it deploys, perform rigorous testing, ensure clean permissions, and set a clear scope, you've managed the risk. And that matters. But it's not sufficient, because agents operate in environments that change. Data changes, tools change, inputs change. An agent that behaved correctly in testing can encounter something in production that shifts what it does entirely.
Runtime is where agent security is actually determined. The conversations I kept having with practitioners this week weren't about what got caught at build-time. They were about what happened after. We've watched the shift-left movement reshape AppSec over the last decade, and building security in early is genuinely valuable, but attackers don't stop at the build stage, and neither can your defenses. Agents need a comprehensive, defense-in-depth approach that doesn't treat deployment as the finish line.
Ownership.
Inside most enterprises right now, nobody fully owns this problem. Security teams are being asked to govern AI systems they didn't build. The AI teams that built them aren't security people. The infrastructure teams are watching traffic they can't interpret. And leadership is applying pressure to move faster on adoption, while the CISO is trying to figure out what "securing this" even means. The vendors who resonated most this week weren't just selling a product. They were helping buyers define the problem, and that matters more than it sounds. In an early market, whoever helps the buyer understand the problem shapes what they believe the solution should look like.
Separating Signal From Noise
By midweek, I had a shortlist of questions I was using to calibrate every conversation. Not gotchas, just the questions that quickly separated vendors with an enterprise-ready product from vendors with an enterprise-ready pitch.
Can you cover agents across managed SaaS platforms, cloud-based pipelines, developer tools, and homegrown deployments in a single platform? Most couldn't. SaaS-managed agent coverage in particular turned out to be the gap that revealed itself most consistently. A lot of vendors had built for the infrastructure and homegrown side of the house, which is legitimate, but that's a slice of where agents are actually running in most enterprises right now. Managed SaaS is where the volume is, and the vendors who couldn't speak to it clearly were the ones whose architecture was probably designed for a different problem.
Is your enforcement at the action level or the input level? This is the question that found the ceiling on a lot of governance products, model-layer solutions, and proxy or gateway-based approaches. Blocking a bad prompt is not the same as controlling what an agent is allowed to do mid-session based on its context. A gateway can see what passes through it. It can't reason about what an agent has already done, what it's planning to do next, or whether the combination of actions across a session adds up to something you wouldn't have sanctioned if you'd seen it coming.
The distinction matters enormously in practice. A context-aware enforcement model, one that can shrink or expand what an agent is allowed to do based on what it's actually encountered, is a fundamentally different architecture than sitting in the traffic lane with a list of bad words. Both vendors and buyers are still figuring out how to talk about this cleanly, but the practitioners who've been burned by insufficient controls knew exactly what I was asking.
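Here's a deliberately tiny sketch of the architectural difference. This is not any product's implementation, and the names and policy are invented for illustration; it just shows what "enforcement at the action level, informed by session context" means versus filtering inputs at a gateway.

```python
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    """Accumulated state of one agent session."""
    actions: list = field(default_factory=list)
    touched_sensitive_data: bool = False

@dataclass
class Action:
    kind: str        # e.g. "read", "write", "send_external"
    resource: str    # e.g. "crm:customers", "email:outbound"
    sensitive: bool = False

def allow(action: Action, ctx: SessionContext) -> bool:
    """Action-level check: the decision depends on what the agent has
    already done this session, not just on the current input."""
    # Example policy: once the agent has read sensitive data,
    # external sends are no longer permitted for this session.
    if ctx.touched_sensitive_data and action.kind == "send_external":
        return False
    return True

def execute(action: Action, ctx: SessionContext) -> bool:
    if not allow(action, ctx):
        return False  # blocked at the action level
    ctx.actions.append(action)
    if action.sensitive:
        ctx.touched_sensitive_data = True
    return True

ctx = SessionContext()
print(execute(Action("send_external", "email:outbound"), ctx))        # True: nothing sensitive yet
print(execute(Action("read", "crm:customers", sensitive=True), ctx))  # True: reads are in scope
print(execute(Action("send_external", "email:outbound"), ctx))        # False: scope shrank mid-session
```

Note that the same action is allowed at the start of the session and blocked at the end: no individual input was "bad," but the combination was. A stateless gateway that inspects each message in isolation cannot make that call.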
What does your detection look like when the threat isn't a bad prompt but an agent quietly doing something it was never supposed to do? This one cleared the room fastest. Most detection stories are still built around catching known bad inputs. But the more interesting problem is intent drift: an agent that starts a session with a legitimate purpose and ends up somewhere very different, not because it was attacked in an obvious way, but because context accumulated, instructions compounded, and nobody was watching the trajectory. Understanding whether an agent's actions are consistent with its original intent across an entire session, rather than just at each individual step, is a harder problem than filtering inputs. The vendors who'd thought seriously about it were the most interesting technical conversations of the week.
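A toy version of the trajectory check makes the contrast with input filtering obvious. This is a strawman metric of my own invention, not a real detection algorithm: production systems would reason about action semantics, not string membership. But it shows why drift is a whole-session property.

```python
def intent_drift(declared_scopes: set, session_actions: list) -> float:
    """Fraction of a session's actions that fall outside the agent's
    declared scopes. Evaluated over the whole trajectory: each action
    may look fine alone, but the aggregate tells a different story."""
    if not session_actions:
        return 0.0
    off_scope = [a for a in session_actions if a not in declared_scopes]
    return len(off_scope) / len(session_actions)

# An agent declared for invoice processing...
declared = {"read:invoices", "write:ledger"}

# ...whose session trajectory quietly wandered.
trajectory = ["read:invoices", "read:invoices", "read:hr_records",
              "write:ledger", "send:external_email"]

drift = intent_drift(declared, trajectory)
print(f"{drift:.0%} of actions off-scope")  # 40% of actions off-scope
```

No single step in that trajectory is a "bad prompt," which is exactly why input-centric detection misses it; the signal only exists when you look at the session as a whole.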
What I Think Comes Next
I realize a conference only represents a sample, but in the case of RSA, it is one of the largest sample sizes in the cyber community. The floor is loud and unrepresentative in specific ways. But some signals are clear enough to be worth saying out loud.
Agent identity is going to force a reckoning in the identity market. The non-human identity (NHI) space has real momentum and real enterprise traction, but governing a credential is not the same as governing what an agent does with it. The identity story for agentic AI has to cover static credentials, dynamic in-session identities, identities inherited through tool calls, and implicit identities established through agent-to-agent communication. The vendors who solve that full picture are going to make the ones who only solve credential management look incomplete, and buyers are going to figure that out faster than most people expect.
The interesting directional question is who gets there first. An agent security platform that expands its identity coverage has the full behavioral context already. An NHI vendor trying to build the other direction has to reconstruct the agent's runtime world from scratch. One of those feels like a natural extension. The other feels like a different product.
Governance and security are going to collapse into the same buying decision. Right now, they feel like separate conversations, one for the risk and compliance team and one for the SOC. That separation won't hold once enterprises start dealing with real incidents involving autonomous agents. When an agent causes a real problem, the response question and the governance question merge into a single question. Whoever owns the security story also needs to be able to speak to governance, and vice versa. The vendors who bridge that now have a durable advantage.
The trust model for agents is going to become the defining design question in this category. Organizations will give agents more autonomy over time. This is already happening, and it is going to accelerate. The security layer has to evolve with that, and most of what I saw on the floor is not built for it. Static policies set at deployment will not hold. The agents that get the most access, execute the most actions, and touch the most sensitive systems are the ones that will be targeted most aggressively, and they are also the ones most likely to be over-trusted because they have been running without incident.
Security has to be continuous and context-aware, able to tighten or expand based on what an agent is actually doing in a given moment, not what it was configured to do six months ago. The analogy I kept reaching for this week is how development teams have learned to give AI coding assistants more autonomy gradually, not all at once, with guardrails that match the level of trust those tools have earned. That is the model. The vendors building for that dynamic are the ones that will matter in two years. The ones optimizing for today's point-in-time posture checks will not.
The winner in this category likely won't be the vendor with the biggest booth. The dynamics of an early market consistently favor vendors who can move at the speed of the problem they're solving, and AI agents are evolving at a speed I've never seen. Enterprise buyers who've already deployed agents at scale aren't necessarily looking for the most trusted brand in security. They're looking for the vendor who gets it, can ship fast, and can run alongside them as the landscape keeps shifting. Several of those conversations happened in rooms that weren't on the main floor at all.
RSA 2026 marked the point at which the urgency of security teams to govern agents finally caught up with the urgency of the business to deploy them. Those two forces have been running at different speeds for the last two years. This week they converged. The vendors who defined the last era of enterprise security built their reputations in markets that moved slowly enough for them to be right for a long time. This market doesn't move that way. And that, more than anything else, is why this year's conference felt different.
The questions got specific. The urgency got real. And the gap between the vendors who are genuinely ready and the ones who are performing readiness got harder to hide.
That's usually when markets start to move.