When the League Assembles: What to Expect at the AI Agent Security Summit San Francisco

A Community That Set the Standard
When we assembled the community in New York for the first AI Agent Security Summit, the message was clear. The community valued having a place to share research, swap lessons learned, and connect with peers tackling the same problem of securing AI agents. Many told us it was the first time they had been in a room full of people thinking about these risks with the same level of focus.
Attendees walked away not just with new connections, but with a clearer sense of what it will take to secure such a rapidly evolving technology and how to start applying those lessons inside their own organizations.
Building on Momentum
That experience set the tone for what comes next. Zenity Labs is assembling the league once again to host the AI Agent Security Summit 2025 in San Francisco. The agenda is now live, and it reflects what the community asked for: research, case studies, frameworks, and strategies that make a difference inside the enterprise.

A Program Shaped by the Community
The call for participation drew more than 100 submissions. The result is a robust one-day, multi-track program with three keynotes, six sessions, six lightning talks, and two panels.
Keynotes will set the stage: Johann Rehberger, an independent researcher, on exploiting coding agents; Steve Wilson of Exabeam on agents as insider threats; and Zenity’s Michael Bargury on what it will take to make real progress in securing AI.
Sessions will dive deep into vulnerabilities, red teaming, Promptware, observability, and risk management. You will hear from Jack Cable of Corridor, David Campbell of Scale AI, Jiquan Ngiam of MintMCP, Allie Howe of Growth Cyber, Ben Nassi of the Black Hat board, and Ken Huang, author of Generative AI Security, co-leader of the OWASP AIVSS Project, and instructor with EC-Council.
Lightning Talks will bring fast-hitting insights from Aderonke Akinbola of Google, Vamsi Krishna Reddy Munnangi of Walmart, Kristen Beneduce of January, Ryan Ray of Slalom, Nate Lee of Trustmind, and Emile Delcourt of OWASP.
Panels will unite leaders from Google, OpenAI, and ServiceNow to discuss building trustworthy platforms, while a second panel featuring Stanford University, Glean, OWASP, and Scale AI will tackle why and how to score vulnerabilities in agentic systems.

Why It Matters
AI agents are taking on responsibilities once reserved for humans. They access systems, move data, and make decisions at scale. Security teams need answers on how to govern, observe, and defend them.
The New York event showed the power of bringing this community together. The conversations were practical, candid, and energizing. San Francisco builds on that momentum, with an even broader agenda and more perspectives in the mix.
Join Us
If you are working with AI agents or thinking about how to secure them, this summit is your community.
AI Agent Security Summit 2025
October 8, 2025
The Commonwealth Club, San Francisco