Key Takeaways
- Cyberhaven introduced Agentic AI Security, expanding its platform to govern autonomous AI agents running on endpoints.
- New capabilities focus on visibility, observability, and real-time controls to counter the surge in shadow agents.
- Research from Cyberhaven Labs shows a 276 percent rise in endpoint AI agent adoption, signaling a major shift in enterprise AI behavior.
The rise of autonomous AI agents has been quick, messy, and far more consequential than many security leaders expected. What began as experimentation with chat-based tools has evolved into a new generation of endpoint-resident agents that perform tasks, access sensitive systems, and sometimes run without oversight. Cyberhaven is stepping directly into this gap with the launch of its Agentic AI Security offering, a move that reshapes how enterprises think about the control plane for AI.
This launch, announced in Mountain View on March 24, 2026, arrives at a moment when AI adoption inside organizations is not only accelerating but also decentralizing. Tools no longer live exclusively in browsers or SaaS interfaces. They are proliferating across laptops, developer workstations, and unmanaged internal environments. That shift sounds subtle, but it is changing the security posture in ways that many governance programs have not accounted for.
When AI was just generating text, images, or code snippets, guardrails around prompts and SaaS usage were sufficient. Now these agents are acting autonomously, often with deep access to corporate data and tools. According to Cyberhaven Labs, enterprise adoption of endpoint-based AI agents climbed by 276 percent in the past year. For context, that growth rate is more than three times higher than what GenAI SaaS platforms saw over the same period. Adoption of endpoint coding assistants, meanwhile, surged from 20 percent to 50 percent in 2025.
This shift raises a difficult question: if an AI system can execute work on a machine, who exactly is monitoring what it touches or how it behaves?
Cyberhaven CEO Nishant Doshi underscored this concern, noting that AI is now performing tasks rather than simply generating content. That nuance matters because enterprise governance frameworks still mostly look at what users type into systems like ChatGPT or Google Gemini. They offer little insight into what AI agents are actually doing on endpoints. Doshi argued that security must operate in real time at the moment of action, not after something has already occurred.
Most existing AI security categories were built around high-level questions such as who is using a particular model or what data is being sent to a cloud provider. Useful questions, certainly, but increasingly incomplete. Endpoint-based models and agentic systems have created an entirely different risk surface. The bulk of activity never crosses a SaaS boundary and never hits an API monitoring tool.
That said, enterprises are not ignoring the problem. Many are scrambling to map where agents are running, which tools they use, how they authenticate, and whether they access regulated or sensitive data. Still, until this announcement, the market lacked a unified way to observe and control agent behavior directly on endpoints.
Cyberhaven positions its new Agentic AI Security capability as an expansion of its existing AI and data security platform. The approach revolves around three pillars that are simple on paper but technically challenging in practice.
One pillar is visibility, which includes discovering AI agents, MCP servers, and the connections those agents establish. Another is observability, which involves tracking execution paths, data access, and tool usage. The final pillar is real-time control. This one may prove the most impactful because it aims to stop unsafe or unauthorized actions at the exact moment an agent attempts them. For organizations wrestling with insider risk or unintentional data exposure, that kind of intervention could provide a much-needed safeguard.
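To make the real-time control pillar concrete, here is a minimal, hypothetical sketch of a policy gate that evaluates an agent's action before it executes. All names, rules, and paths below are invented for illustration; this is not Cyberhaven's actual implementation, only the general shape of inline enforcement:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    """Hypothetical record of an action an endpoint agent is about to take."""
    agent_id: str
    tool: str    # e.g. "file_read", "external_upload"
    target: str  # resource the action touches

# Illustrative deny rules: sensitive path prefixes and blocked tools.
SENSITIVE_PREFIXES = ("/payroll/", "/customer_pii/")
BLOCKED_TOOLS = {"external_upload"}

def evaluate(action: AgentAction) -> str:
    """Return 'allow' or 'block' at the moment the agent attempts the action."""
    if action.tool in BLOCKED_TOOLS:
        return "block"
    if any(action.target.startswith(p) for p in SENSITIVE_PREFIXES):
        return "block"
    return "allow"

# An agent reading a payroll file is stopped inline; routine reads proceed.
print(evaluate(AgentAction("agent-7", "file_read", "/payroll/2025.csv")))  # block
print(evaluate(AgentAction("agent-7", "file_read", "/docs/readme.md")))    # allow
```

The point of the sketch is the placement of the check: the decision happens before the action runs, rather than in an after-the-fact audit log, which is what distinguishes real-time control from observability alone.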
If the trend lines continue, the endpoint will become the dominant execution layer for AI. It is already edging into that role. Companies relying solely on SaaS visibility or API logs are likely to miss the majority of agent activity. This may sound dramatic, but it echoes the early cloud era, when security programs still assumed that sensitive workloads lived inside data centers. By the time those assumptions caught up with reality, the tooling had to race to follow.
A useful parallel can be seen in how cloud security posture management tools emerged once the industry realized that infrastructure had moved outside the perimeter. Now, AI is undergoing a similar migration. Cloud transitions taught security teams that once execution environments proliferate, visibility becomes the foundation for any meaningful control. Agentic AI is tracing that same trajectory.
Cyberhaven says it is defining a category of AI-native data security specifically built for autonomous systems. Whether the market adopts that terminology remains to be seen, but the underlying need is clear enough. Organizations want to enable AI without losing control of their data. They want the productivity benefits, not the unmanaged agents quietly navigating sensitive repositories.
Agentic AI is moving faster than policy frameworks, and enterprises are eager for tools that give them clarity about what is happening on their own endpoints.
The larger story is still unfolding. AI is becoming more embedded, more independent, and more operational. The question now is whether governance, tooling, and cultural norms can adapt quickly enough to keep pace. Cyberhaven's new release signals that the industry is at least starting to recognize where the next set of risks will emerge.