The New Security Team: Humans in the Loop, AI at the Core

November 19, 2025

Five years from now, I don’t think our security operations centers will look anything like they do today. 

There will still be people, humans in the loop, because trust and accountability will always matter. But the day-to-day work will look very different. Instead of analysts manually grinding through alerts and dashboards, we’ll see security experts managing teams of AI agents that handle the repeatable, low-value work. 

The result isn’t fewer people, it’s more valuable people. 

The value of a human operator rises when they’re orchestrating intelligence instead of producing it by hand. It’s the same shift we saw in the Industrial Revolution, when people moved from farms to factories and both their productivity and their value increased dramatically.

From Analysts to Orchestrators

Someone once described the coming era of AI as “having a billion PhD-level interns ready to work for you for free.” I love that analogy. The real challenge isn’t whether those interns exist. It's how you learn to direct them effectively. 

When I think about introducing AI agents into my organization, I approach it in three stages: crawl, walk, run (see the sketch after this list).

  • Crawl: The agent observes what we’re doing and suggests actions. 
  • Walk: It offers to perform the steps it’s seen us take before. 
  • Run: It acts autonomously when a familiar scenario appears. 
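
To make that progression concrete, here is a minimal sketch of how an autonomy gate might look in code. Everything in it, from the AutonomyStage enum to the is_familiar check, is a hypothetical illustration of crawl/walk/run as policy, not any real product’s API:

    # Hypothetical sketch of crawl/walk/run as an autonomy policy. The names
    # are illustrative; the point is that the agent's permission to act is a
    # gate you widen deliberately as it earns trust.
    from enum import Enum

    class AutonomyStage(Enum):
        CRAWL = 1  # observe and suggest only
        WALK = 2   # propose concrete steps, wait for human approval
        RUN = 3    # act autonomously on familiar scenarios

    def handle_alert(alert: dict, stage: AutonomyStage, is_familiar: bool) -> str:
        """Decide what the agent is allowed to do with this alert."""
        if stage is AutonomyStage.CRAWL:
            return "suggest"              # human does the work, agent advises
        if stage is AutonomyStage.WALK:
            return "propose_and_wait"     # agent drafts steps, human approves
        if stage is AutonomyStage.RUN and is_familiar:
            return "act"                  # known playbook: agent executes
        return "escalate_to_human"        # novel scenario: always a person

    print(handle_alert({"type": "phishing"}, AutonomyStage.RUN, is_familiar=True))
    # -> "act"

Note the last branch: even at “run,” anything unfamiliar still routes to a person. That single rule is what keeps humans in the loop as autonomy grows.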

Different companies will reach that “run” phase at different speeds, largely based on change management. AI adoption isn’t held back by technology, it’s held back by fear. Fear of losing control, fear of losing jobs, fear of losing a human to blame when something goes wrong. That said, I don’t see AI as replacing people, I see it changing the way we work.

Start With Governance, Not Tools 

Every CISO I know is being asked about their AI strategy for 2026 and beyond. The right place to start isn’t which model or vendor to choose, it’s governance. 

There are three lenses I use when I think about AI governance: 

  • How the business leverages AI: How do we enable innovation while protecting data, intellectual property, and privacy? 
  • How cybersecurity leverages AI: How can we use AI to strengthen defenses, automate detection, and speed up response? 
  • How attackers leverage AI against us: What does generative or agentic AI mean for our threat landscape, and how do we adapt? 

Understanding these dimensions helps CISOs stay proactive rather than reactive, and keeps AI from being treated as just another “initiative.” 

The Risk of Standing Still 

If you don’t leverage AI successfully, you’ll be left behind when your competitors do. That’s not hype, it’s history. 

In every technological leap, from cloud to DevOps to automation, the organizations that waited too long spent years catching up. 

It’s the same in cybersecurity. Whether it’s optimizing SOC triage, cleaning up identity hygiene, or managing GRC workflows, there’s already clear value in automating low-value tasks. The companies that start now will be better positioned to reinvest that time in higher-value work such as threat hunting, intelligence, and proactive defense. 

Where to Begin 

Start by asking: Which parts of my cybersecurity organization are performing the most repetitive, low-value activities? That’s where AI belongs first. 

For most teams, that includes: 

  • SOC operations – automating alert triage and enrichment. 
  • Identity hygiene – maintaining clean entitlements and accounts.
  • GRC – streamlining evidence collection and compliance checks. 
  • Vulnerability management – identifying and prioritizing remediation. 
  • Third-party risk – handling assessments and renewals. 

These areas share a common trait: repeatable processes. Repeatable processes are fertile ground for agentic AI. 
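
Take SOC alert triage as an illustration. The sketch below is hypothetical, with lookup_reputation and the routing thresholds as stand-ins rather than any vendor’s API, but it shows the repeatable shape of work an agent can absorb: enrich, score, route, and pull a human in only when the answer is ambiguous:

    # Hypothetical triage-and-enrichment loop: the repeatable shape of SOC
    # work an agent can take over. Names and thresholds are illustrative.
    def lookup_reputation(indicator: str) -> float:
        """Stand-in for a threat-intel lookup; returns risk in [0, 1]."""
        known = {"198.51.100.7": 0.95, "203.0.113.9": 0.10}
        return known.get(indicator, 0.5)

    def triage(alert: dict) -> str:
        """Enrich an alert, score it, and route it."""
        score = lookup_reputation(alert["source_ip"])
        alert["risk_score"] = score       # the enrichment analysts did by hand
        if score >= 0.9:
            return "auto_contain"         # high confidence: agent acts
        if score <= 0.2:
            return "auto_close"           # clearly benign: close with a record
        return "human_review"             # ambiguous: a person decides

    for ip in ("198.51.100.7", "203.0.113.9", "192.0.2.44"):
        print(ip, "->", triage({"source_ip": ip}))

The middle branch is the point: automation handles the clear cases, and the ambiguous ones, the ones worth an expert’s time, go to the expert.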

Building Trust 

Trust is the billion-dollar question. In a traditional security team, when something goes wrong, you have a single throat to choke, a person responsible for a decision. With AI, that accountability gets abstracted, and that’s uncomfortable. 

That’s why startups building AI-driven solutions must focus on trust as their primary goal. Established cybersecurity vendors have spent years earning community confidence; newcomers must prove not only that their AI works, but that it behaves responsibly. 

At the same time, leaders must build internal trust. Teams need to understand that AI isn’t here to replace them, it’s here to remove the grind and help them do more valuable work. 

Leading Through Change 

Change management is as important as the technology itself. If you’ve ever read Who Moved My Cheese?, you know the story: some characters keep returning to where the cheese used to be, while others venture out and adapt. The future of security belongs to those who adapt. 

My advice to new professionals is simple: get comfortable using AI in everything you do. Engage it in every task: brainstorming, writing, analysis. It won’t always be right, but you’ll learn how to use it effectively. 

Adaptability will be the defining skill of the next generation of cybersecurity professionals. 

Becoming AI-Ready 

AI readiness isn’t about perfect data or mature tooling, it’s about process clarity and mindset. If your processes are a mess, automating them only accelerates the chaos. Define your workflows clearly. Build a culture that embraces experimentation. And most importantly, frame AI as a partner, not a threat.
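
A quick way to test whether a process has that clarity is to try writing it down as data. The fragment below is an assumed example rather than any standard schema; if you can’t fill in the steps, the owner, and the escalation path, the workflow isn’t ready to hand to an agent:

    # Hypothetical clarity test: can you state the workflow as data?
    # The field names are illustrative, not a standard schema.
    access_review = {
        "name": "quarterly_access_review",
        "trigger": "calendar: first Monday of each quarter",
        "steps": [
            "export entitlements for in-scope systems",
            "flag accounts inactive for 90+ days",
            "send flagged accounts to owning managers for attestation",
            "revoke access not attested within 10 business days",
        ],
        "owner": "identity team",
        "escalation": "CISO staff if a revocation breaks production",
    }

    # If any field is hard to fill in, automating the process would only
    # accelerate the chaos described above.
    assert all(access_review.values())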

The goal isn’t to eliminate people, it’s to elevate them. When done right, AI makes teams more capable, more creative, and more valuable.