What Security Leaders Are Really Worried About in the Age of AI Agents
Artificial intelligence is embedded in day-to-day operations across industries, quietly powering workflows, accelerating decisions, and reshaping how organizations interact with data. AI agents in particular are gaining traction, acting autonomously across systems in ways that were not possible even a few years ago.
But beneath the surface of rapid adoption, security and data leaders are confronting a different reality. In a recent Cyber Security Tribe roundtable discussion, a group of CISOs shared a consistent concern: the pace of AI deployment is outpacing the maturity of the controls designed to govern it.
It is by no means creating panic, but it is driving a growing recognition that the existing security model is not built for what comes next. In this new age of AI advancement, the phrase “what got us here won't get us there” has never been more true of how we secure our organizations.
A Shift From Events to Behavior
One of the most significant changes introduced by AI agents is how activity needs to be monitored and understood. Traditional security approaches rely heavily on discrete events. A user logs in. A file is accessed. A policy is triggered. These are clear, trackable moments.
AI agents do not operate that way. They execute sequences of actions, often continuously, and often without direct human initiation. This makes it much harder to evaluate whether something is normal or risky. Several leaders described the need to move toward behavior-based models that focus on patterns over time. Instead of asking whether a single action is allowed, the question becomes whether the overall sequence of actions makes sense given the agent’s purpose. The intent of the agent's actions is often unknown, which poses risk for organizations.
This introduces a new layer of complexity: organizations must now define what normal behavior looks like for an AI agent, which is not a trivial task. Unlike human users, agents can change rapidly, interact with multiple systems simultaneously, and adapt based on new inputs.
Security teams are beginning to explore approaches that include:
- Establishing behavioral baselines for agents
- Monitoring context around data access and system interaction
- Evaluating intent, not just permission
This is a fundamental shift and requires new tools, but also a different way of thinking about risk.
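One way to make the idea of a behavioral baseline concrete is to model the sequences an agent executes rather than its individual actions. The sketch below is a minimal, hypothetical illustration (the agent actions and baseline method are assumptions, not any vendor's implementation): it learns which action-to-action transitions are normal from history, then flags sequences dominated by never-before-seen transitions.

```python
from collections import Counter

# Hypothetical sketch: score how far an agent's recent action sequence
# drifts from a baseline of observed action-pair (bigram) frequencies.
# Action names are illustrative, not from any real product.

def build_baseline(sequences):
    """Count action bigrams across historical agent action sequences."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def anomaly_score(baseline, sequence):
    """Fraction of bigrams in the new sequence never seen in the baseline."""
    pairs = list(zip(sequence, sequence[1:]))
    if not pairs:
        return 0.0
    unseen = sum(1 for p in pairs if p not in baseline)
    return unseen / len(pairs)

history = [
    ["read_ticket", "query_kb", "draft_reply", "send_reply"],
    ["read_ticket", "query_kb", "escalate"],
]
baseline = build_baseline(history)

# A routine sequence scores low; an unexpected bulk export scores high.
print(anomaly_score(baseline, ["read_ticket", "query_kb", "draft_reply"]))  # 0.0
print(anomaly_score(baseline, ["read_ticket", "export_all_customers"]))     # 1.0
```

Real systems would use richer context (time, data volume, target systems), but the shift in question is visible even here: the unit of evaluation is the pattern, not the single event.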
Data Governance Is the Foundation, and It Is Often Missing
If there was one area where the conversation converged, it was data governance. Many organizations are still in the early stages of building mature governance programs, and AI is exposing those gaps quickly.
Effective data governance is notoriously difficult to implement because it spans multiple IT functions and requires broad organizational alignment.
Without clear data classification, organizations cannot distinguish between low-risk and high-risk information. Without defined ownership, there is no accountability for how data is used or protected. And without consistent enforcement, policies exist on paper but not in practice.
AI agents inherit these weaknesses: if sensitive data is broadly accessible, agents can access it too. The difference is that agents do so at scale and at machine speed. Several participants noted that their immediate focus is not on advanced AI-specific controls, but on strengthening foundational capabilities. This includes building formal governance structures, establishing clear ownership models, and implementing consistent classification and tagging strategies.
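The classification and tagging work described above can be sketched as a simple enforcement check. Everything here is an assumption for illustration (the labels, roles, and deny-by-default rule are hypothetical policy choices): the point is that without labels, there is nothing for such a check to evaluate.

```python
# Hypothetical sketch: gate an agent's dataset access on classification
# labels. Dataset names, labels, and the role policy are illustrative.

CLASSIFICATION = {
    "marketing_copy": "public",
    "customer_pii": "restricted",
    "financial_ledger": "restricted",
}

# Which classifications each agent role may read, per an assumed policy.
ROLE_POLICY = {
    "support_agent": {"public", "internal"},
    "finance_agent": {"public", "internal", "restricted"},
}

def may_access(role, dataset):
    label = CLASSIFICATION.get(dataset)
    if label is None:
        return False  # unclassified data is denied by default
    return label in ROLE_POLICY.get(role, set())

print(may_access("support_agent", "customer_pii"))      # False
print(may_access("finance_agent", "financial_ledger"))  # True
```

Note the deny-by-default branch: it is what turns incomplete classification from a silent gap into a visible operational problem, which is exactly the forcing-function effect described here.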
In many ways, AI is acting as a forcing function. It is making it impossible to ignore problems that have existed for years.
The Visibility Problem
Another major theme was visibility. In many organizations, there is no single view of where AI agents exist or what they are doing.
Agents are being created across teams, often within business units or specific functions, without centralized tracking. This leads to a fragmented environment where security teams lack even a basic inventory.
The challenge is not only technical, but organizational. Decentralized environments make it difficult to enforce consistent standards or even to gather accurate information.
For many teams, the first step is discovery. Before they can govern or secure AI agents, they need to identify them. This includes understanding:
- How many agents are deployed
- What systems they interact with
- What data they can access
- What identities they operate under, including how they authenticate and what permissions or roles they inherit
Without this visibility, every other control becomes less effective.
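A discovery effort ultimately needs somewhere to put its answers: how many agents exist, which systems they touch, what data they can reach, and what identity they run under. A minimal inventory sketch might look like the following (agent names, owners, and service accounts are invented for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a minimal agent inventory. All records here
# are illustrative examples, not a real environment.

@dataclass
class AgentRecord:
    name: str
    owner: str
    identity: str                        # service account the agent runs as
    systems: set = field(default_factory=set)
    data_scopes: set = field(default_factory=set)

registry = {}

def register(agent: AgentRecord):
    registry[agent.name] = agent

register(AgentRecord("ticket-triage", "support", "svc-triage",
                     systems={"helpdesk", "crm"}, data_scopes={"tickets"}))
register(AgentRecord("invoice-bot", "finance", "svc-invoice",
                     systems={"erp"}, data_scopes={"invoices", "vendors"}))

# Basic questions the inventory can now answer:
print(len(registry))                          # how many agents are deployed
print(sorted(a.name for a in registry.values()
             if "crm" in a.systems))          # which agents touch the CRM
```

Even a registry this simple gives later controls something to attach to; the harder organizational problem is getting decentralized teams to populate it.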
Sensitive Data Changes the Stakes
While AI introduces many new considerations, the core risk still centers on data. Specifically, sensitive data. Organizations represented in the discussion are dealing with highly valuable information, including financial records, healthcare data, and proprietary business assets. AI agents are increasingly interacting with this data as part of their workflows.
The challenge is not simply preventing access. In many cases, access is necessary for the agent to perform its function, but the question is whether that access is appropriate in context.
Security teams are beginning to think in terms of intent. Why is the agent accessing this data? Is it aligned with its role? Does the action fit within expected patterns? More fundamentally, can intent even be determined reliably? This is a more nuanced approach than traditional access control. It requires a deeper understanding of both the data and the processes that surround it.
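The difference between permission and context can be shown in a few lines. In this hypothetical sketch (roles, data categories, and the declared-purpose mechanism are all assumptions), the same agent is permitted to read the same data for one purpose but not another:

```python
# Hypothetical sketch: evaluate an access request in context rather than
# by permission alone. Roles, data categories, and purposes are invented.

EXPECTED = {
    # role -> data category -> purposes under which access makes sense
    "claims_agent": {
        "claims_history": {"adjudicate_claim"},
        "member_pii": {"adjudicate_claim"},
    },
}

def access_in_context(role, data, purpose):
    """Allow only if the declared purpose matches the role's expected use."""
    allowed = EXPECTED.get(role, {}).get(data, set())
    return purpose in allowed

print(access_in_context("claims_agent", "member_pii", "adjudicate_claim"))  # True
print(access_in_context("claims_agent", "member_pii", "train_model"))       # False
```

The hard part in practice is the one this sketch assumes away: getting a trustworthy declaration or inference of purpose in the first place, which is exactly the open question the participants raised.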
Human Oversight Is Still Essential
Despite the capabilities of modern AI systems, there is broad agreement that fully autonomous operation is not yet viable for most organizations. Human oversight remains a critical component of risk management. This is especially true for high-impact decisions or interactions involving sensitive data.
Many organizations are adopting a hybrid model in which AI handles routine tasks, but humans remain involved in validation and approval. This can take several forms, from reviewing outputs to approving specific actions before they are executed. This approach is not seen as a limitation, but as a necessary step while governance and monitoring capabilities continue to mature. The question remains: how long will “human-in-the-loop” remain viable if the ultimate objective is fully autonomous workflows that unlock the full potential of agentic AI?
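The hybrid model described above, routine tasks running automatically while high-impact actions wait for a human, can be sketched as an approval gate. The action names and the routine/high-impact split are illustrative assumptions:

```python
# Hypothetical sketch of a human-in-the-loop gate: routine actions run
# automatically; anything else is queued until a named human approves it.

ROUTINE = {"summarize", "categorize"}
pending_approval = []

def execute(action, payload, approver=None):
    if action in ROUTINE:
        return f"done:{action}"
    if approver is None:
        pending_approval.append((action, payload))
        return "queued"
    # A named human approved this specific action before execution.
    return f"done:{action}:approved_by:{approver}"

print(execute("summarize", "ticket-123"))                    # runs automatically
print(execute("refund_customer", "$500"))                    # queued for review
print(execute("refund_customer", "$500", approver="j.doe"))  # runs once approved
```

Recording the approver alongside the action also produces the audit trail that makes the eventual move toward more autonomy defensible, or shows why it is not yet.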
The Skills Gap and the Need for New Models
Another challenge that emerged is the lack of established frameworks for managing AI agents. Security teams are being asked to apply existing models, which are often designed for human users, to a fundamentally different type of entity.
At the same time, there is a shortage of expertise in how to manage these systems effectively. Teams need to understand not only how AI works, but how it behaves in real-world environments and how it interacts with their systems and data. Training is becoming a priority, both for security professionals and for the business teams deploying AI solutions. Without a shared understanding, it becomes difficult to implement consistent and effective controls.
There is also a growing recognition that new models are required. Treating AI agents as if they were human users does not fully capture the risks they introduce.
Rethinking Security for an AI-Driven World
What emerges from these discussions is not a single solution, but a broader shift in perspective. Organizations are beginning to rethink what security means in the context of AI. The traditional model, built around static roles and discrete actions, is giving way to something more dynamic. Behavior, context, and continuous evaluation are becoming central concepts.
At the same time, foundational elements like data governance and visibility are proving to be more important than ever. Without them, even the most advanced controls will fall short. The path forward is not about replacing existing practices, but about extending them. It is about adapting to a new reality in which systems are more autonomous, data is more accessible, monitoring becomes more complex, and the line between user and machine is increasingly blurred.
As one participant summarized it, “the challenge is not just securing a new technology, it is redefining the approach to security itself.”