Building Trust Into Automated Cybersecurity Decisions
As security teams push to reduce mean time to detect and mean time to respond, automation is taking a larger role in how incidents are triaged, investigated, and contained. In the Cyber Security Tribe Annual Report, 73% of respondents said their organizations are already using or developing agentic AI within cybersecurity. That level of adoption shows that autonomous and semi-autonomous systems are no longer a future consideration for security operations. They are already influencing how decisions are made, how quickly actions are taken, and how much responsibility is being delegated to systems rather than people.
This article is part of Cyber Security Tribe’s wider editorial series based on findings from the annual report and expert conversations held at RSAC 2026 in San Francisco. Across the series, senior cybersecurity leaders and practitioners were asked to respond to key themes raised by the report, including agentic AI, AI governance, identity-centric security, quantum readiness, employee concerns, and investment priorities. For this article, we focused on a question that goes to the center of operational trust: "As organizations pursue reduced MTTD and MTTR through automation, what governance mechanisms must change to preserve explainability and accountability?"
The expert perspectives that follow examine how governance must keep pace with machine-speed decision-making. They highlight the need for stronger decision traceability, clearer ownership, auditable reasoning paths, defined escalation thresholds, and boundaries around what automated systems are permitted to do without human involvement. They also explore why logging actions alone is no longer enough if organizations cannot explain the logic, confidence, and policy conditions behind those actions.
This article explores how experts are thinking about governance as an operational control built into autonomous workflows, rather than a policy layer applied after deployment. That distinction will shape whether automation strengthens trust in security operations or weakens it.
Thought leaders who contributed to the article include:
Anurag Gurtu, CEO, Airrived
Reducing MTTD and MTTR through automation requires governance to evolve from reactive oversight to embedded accountability. Traditional governance assumes humans make decisions and systems record them. In agentic environments, systems make decisions, so governance must move upstream into the architecture. This means:
- Built-in decision traceability (why an action was taken, not just what was done)
- Policy-aware execution boundaries
- Tiered autonomy levels based on risk classification
- Immutable audit logs tied to reasoning paths (this and the traceability point are sketched in code below)
- Human override and intervention protocols
Governance can no longer be an afterthought layered on top of automation. It must be co-designed with autonomy from day one. The organizations that succeed will not be those that automate the fastest, but those that architect explainability, reversibility, and accountability into every autonomous workflow.
In the agentic era, trust is not preserved by slowing automation. It is preserved by engineering transparency into it.
Stephanie Schneider, Cyber Threat Intelligence Analyst, LastPass
It’s both a real strategic shift and a response to regulatory pressure, but identity‑centric security reduces risk in ways boards can feel. From my perspective as a cyber threat intelligence analyst, most intrusions start with compromised access. Focusing on identity shrinks the attack surface by enforcing strong authentication (ideally phishing‑resistant), granting least-privilege access, and continuously verifying identity.
The net effect of all three investment priorities (Zero Trust, Risk & Compliance, and IAM) can mean fewer successful initial access events, faster containment when something does slip through, and clearer accountability that satisfies regulators and reassures the board.
Melissa Ruzzi, Director of AI at AppOmni
When switching to AI automation, it’s important to understand the automation workflow itself and how it may differ from the existing workflow. For example, if certain notes, reports, or evidence are currently required and used in the existing workflow but will not be available with AI automation, the workflow itself needs to be reviewed to evaluate whether the changes are acceptable. Being different doesn’t mean the new AI automation is unacceptable; it only means a proper review is needed.
Pay special attention to specific requirements in this review, for example those related to compliance. In general, AI is a very good tool to use for explainability and accountability, as its main functionality is dealing with text and language. But the quality of its output depends on the specific architecture and implementation. Guide the transition to AI automation with a well-rounded understanding, or documentation, of the minimum requirements for proper explainability and accountability. And expect the workflow to be different with automation. More often than not, these changes bring improvements to the existing processes.
Kevin Paige, Field CISO at ConductorOne
The pursuit of faster detection and response is pushing organizations toward a dangerous trade-off: speed at the expense of understanding. When a human analyst detects and responds to a threat, there's an inherent audit trail — they can explain what they saw, why they acted, and what the outcome was. When an AI agent does the same thing in seconds, that explainability often disappears.
Three governance mechanisms need to evolve. First, decision logging must become a first-class requirement for any autonomous system. Every action an AI agent takes — every alert it escalates, suppresses, or responds to — needs to be recorded with the reasoning chain that led to that decision. Not just what happened, but why.
Second, accountability must be assigned before autonomy is granted. Every AI agent needs a human owner who is responsible for its behavior. If an agent autonomously remediates an incident incorrectly, someone has to answer for it. Without that ownership, organizations end up in a governance vacuum where the agent acts, but nobody is accountable for the consequences.
Third, exception handling needs to be redesigned. Current governance models assume humans are in the loop for edge cases. As MTTD and MTTR shrink toward machine speed, there may not be time for human review before action is taken. Organizations need pre-defined escalation policies that the agent follows when it encounters situations outside its confidence threshold — rather than defaulting to action or defaulting to inaction, it should default to a defined protocol.
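A minimal sketch of what such a pre-defined protocol could look like in code; the thresholds, dispositions, and function names below are assumptions for illustration, not recommendations:

```python
from enum import Enum

class Disposition(Enum):
    ACT = "act_autonomously"
    ESCALATE = "escalate_to_human"
    SAFE_HOLD = "quarantine_and_wait"

def dispose(confidence: float, action_is_reversible: bool) -> Disposition:
    # High confidence on a reversible action: the agent may act alone.
    if confidence >= 0.95 and action_is_reversible:
        return Disposition.ACT
    # Moderate confidence: neither act nor ignore; hand the decision
    # to an on-call human along with the full reasoning record.
    if confidence >= 0.70:
        return Disposition.ESCALATE
    # Low confidence: default to a defined safe state, not to inaction.
    return Disposition.SAFE_HOLD
```

The essential property is that every confidence level maps to a defined outcome, so the agent never defaults to action or to inaction by accident.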
Chris Camacho, COO and Co-founder of Abstract
Automation in security operations is moving faster than the governance structures around it. Many policies were written for environments where humans reviewed alerts, gathered evidence, and made the final call. When systems begin handling investigation and response steps automatically, those older models no longer provide enough visibility or accountability.
The first change organizations need is decision traceability. Every automated action should leave a clear record showing what triggered it, what information was used, and why the system chose a specific response. Security teams need to be able to reconstruct the chain of events after the fact, not just see the final outcome.
Second, ownership must be defined before automation is deployed. Someone must be responsible for the logic, the permissions granted to automated processes, and the consequences if something goes wrong. Without clear ownership, automation can quietly introduce risk that no one is actively managing.
Finally, organizations should apply different levels of automation depending on the risk of the action. Tasks such as enrichment and investigation can run automatically with little downside. Actions that affect access, production systems, or customer data should require additional checks or human approval.
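One possible encoding of that risk tiering, with hypothetical action categories; the useful design choice is that unknown or high-impact categories fail closed to human approval:

```python
# Illustrative autonomy policy: action category -> automation level.
AUTONOMY_POLICY = {
    # Low-risk, read-mostly work runs fully automatically.
    "enrichment":        {"autonomous": True,  "approval": None},
    "investigation":     {"autonomous": True,  "approval": None},
    # State-changing but low-impact work gets a secondary automated check.
    "alert_suppression": {"autonomous": True,  "approval": "secondary_check"},
    # Anything touching access, production, or customer data waits for a human.
    "access_revocation": {"autonomous": False, "approval": "human"},
    "prod_change":       {"autonomous": False, "approval": "human"},
    "customer_data":     {"autonomous": False, "approval": "human"},
}

def requires_human(action_category: str) -> bool:
    policy = AUTONOMY_POLICY.get(action_category)
    # Unknown categories fail closed: treat them as human-approval-only.
    return policy is None or policy["approval"] == "human"

print(requires_human("enrichment"))        # False
print(requires_human("access_revocation")) # True
print(requires_human("something_new"))     # True (fails closed)
```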
Speed is important in modern security operations, but speed without transparency and accountability can create more problems than it solves. The organizations that succeed with automation will be the ones that treat governance as part of the system, not as a policy written after the fact.
Chip Witt, Security Evangelist at Radware
Governance has to evolve along with the speed of automation. Organizations need clear, plain language explanations for every action an agent takes, along with complete logs that show how the decision was made. High impact actions need human approval or a dual control model. Faster outcomes are valuable, but only if people can still understand what happened, why it happened, and who is ultimately responsible.
Finnbogi (Bimbi) Finnbogason, Chief Technology Officer (CTO), Co-founder, and Head of Varist Threat Labs
Speed without signal quality is just faster failure. The governance question isn't how to slow automation down, it's how to ensure that what's driving the automation is explainable enough to stand behind.
When a file is involved (a download, an attachment or an installer), that's your highest risk, most ambiguous alert type, and it's where detection confidence matters most. Hyper-fast, evidence-backed file analysis is the single biggest lever organizations have for making automated triage both faster and defensible. You close the loop on MTTD and MTTR simultaneously without sacrificing the audit trail that accountability requires.
Willie Tejada, GM & SVP, Aviatrix
We’ve gotten very good at measuring how fast we detect and respond. We’ve gotten very bad at asking whether we can see the threat in the first place. Governance must shift from static policy to continuous enforcement and that starts with visibility into east-west cloud network traffic, not just endpoint telemetry.
It is no longer sufficient to demonstrate that a policy exists. Organizations need to show that zero trust principles are actively enforced at runtime, particularly between workloads and across cloud environments.
Automation decisions must also be traceable. If a system isolates a workload or blocks traffic, there should be a clear audit trail explaining why. Accountability does not disappear because a machine made the decision.
Modern governance requires pervasive telemetry, runtime visibility, and architectural controls that can be demonstrated under regulatory or board scrutiny.
Shashi Kiran, Chief GTM Officer, Nile
Increasing forms of automation have been deployed over the past couple of decades to reduce MTTD and MTTR. Without automation, public cloud deployments would not be possible at scale, and many best practices that evolved from cloud-scale deployments have been adopted by enterprise-scale architectures.
This is a necessity given dynamic change at exponential scale. Automation is paving the way for autonomous mechanisms. Explainability is a derivative of trust, and organizations must ensure these mechanisms earn their way through the trust barrier. This is where accountability must evolve over time, with graded trust based on meeting established criteria: getting to root cause through autonomous deployments while focusing on the quality of detection as well as its speed.
Speed should not come at the expense of accuracy. Designated policy and governance guidelines will help establish accountability and audit control over time.
Justin Foster, Forescout CTO
Traditional controls are no longer enough when AI systems are influencing or executing actions inside security operations. If organizations want faster detection and response through automation, governance has to evolve to account for how data and decisions move through AI applications, agents, and workflows that teams may not be able to directly observe or validate.
To reduce MTTD and MTTR safely, governance must follow the AI workflow itself. CISOs need clearer visibility into how data moves through AI applications, stronger ownership over the systems and supply chains behind them, and better validation that autonomous actions stay within scope and can be trusted.
The real challenge is that AI is evolving faster than the controls built to manage it. That means preserving explainability and accountability now depends less on legacy governance models and more on disciplined oversight of data exposure, model behavior, and operational accountability.
Atif Ghauri, President & Chief Operating Officer, UltraViolet Cyber
Reducing MTTD and MTTR through automation is achievable, but governance must evolve from managing tools to governing decisions. Traditional security governance was built for humans—tickets, approvals, and periodic reviews. Agents operate at machine speed, so controls must operate at machine speed as well. Start with accountability: every agent needs an executive owner, a documented mission, and a clearly defined scope of authority. Treat agents as privileged identities: least privilege by default, separation of duties, and explicit approvals for new capabilities, new data access, and new action types.
Explainability has to be engineered. Every agent-driven outcome should produce an auditable record of what it observed, what it inferred, what actions it took, which tools it invoked, and which policy or playbook authorized those actions. That record should be tamper-resistant and easy to review, so you can defend decisions to regulators, customers, and your board. In parallel, define measurable “safe autonomy” criteria—error budgets, confidence thresholds, and maximum blast radius—so the organization is clear on when an agent can act, when it must ask, and when it must stop.
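As an illustration of those "safe autonomy" criteria, here is a hedged sketch of a gate combining an error budget, a confidence threshold, and a blast-radius cap. All the numbers and names are invented for this article:

```python
class SafeAutonomyGate:
    """Decides whether an agent may act, must ask, or must stop."""

    def __init__(self, error_budget: int, max_blast_radius: int):
        self.error_budget = error_budget          # tolerated bad actions per review window
        self.max_blast_radius = max_blast_radius  # max assets one action may touch
        self.errors_seen = 0

    def record_error(self):
        self.errors_seen += 1

    def may_act(self, assets_affected: int, confidence: float) -> str:
        if self.errors_seen >= self.error_budget:
            return "stop"  # budget exhausted: halt until re-certified
        if assets_affected > self.max_blast_radius or confidence < 0.9:
            return "ask"   # outside safe bounds: require human approval
        return "act"       # within defined safe autonomy

gate = SafeAutonomyGate(error_budget=3, max_blast_radius=25)
print(gate.may_act(assets_affected=5, confidence=0.97))    # "act"
print(gate.may_act(assets_affected=500, confidence=0.99))  # "ask"
```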
Governance also requires continuous validation: simulation against known scenarios, adversarial testing (including input manipulation), drift monitoring, and periodic re-certification as systems and threats evolve. Finally, formalize guardrails for uncertainty: escalation thresholds, mandatory human approval for high-impact actions, and a kill switch with safe-mode defaults. The goal is faster response without sacrificing responsibility.
Juan Pablo (JP) Perez-Etchegoyen, Chief Technology Officer, Onapsis
The shift toward Agentic AI represents a pivot from "tools that assist" to "entities that act." With 73% of organizations already engaging with this technology, the strategic advantage lies in proactive orchestration—using agents to neutralize threats at machine speed. However, recent history proves that "autonomy" without rigid guardrails introduces unacceptable systemic risk.
We saw this play out in the December 2025 AWS disruption, where the agentic tool Kiro reportedly decided to "delete and recreate the environment" to resolve an issue, leading to a 13-hour outage for certain services. While AWS attributed this to "user error" in access controls, the incident highlights a critical accountability gap: when an agent acts with operator-level permissions, a single autonomous misstep can have a massive blast radius. Further risks were exposed by the Amazon Q Developer near-miss, where a prompt injection could have triggered mass infrastructure destruction if not for a fortuitous syntax error in the attacker’s code.
To preserve accountability, governance must shift from oversight of code to oversight of intent. Organizations must implement "Execution-Gated" AI agents that require human validation for high-impact actions and immutable audit logs of an agent’s reasoning. We aren't just managing software anymore; we are managing digital personas. Ensuring these agents operate within defined ethical and operational "guardrails" is the only way to harness their speed without sacrificing systemic stability.
John Paul (JP) Cunningham, CISO at Silverfort
As organizations accelerate automation to reduce MTTD and MTTR, the governance model around AI‑driven security operations must evolve just as quickly. The priority must be to ensure that every automated action remains fully explainable, auditable, and attributable, and that there remains the ability for humans to audit and respond. The requirement must be for stronger decision‑logging, clearer human‑in‑the‑loop boundaries, and formalized oversight of how models are tested, deployed, and monitored for drift.
At the same time, AI systems must be governed with the same rigor applied to any high‑impact security control. That means treating AI agents as identities with enforceable privileges, maintaining separation of duties between builders and approvers, and giving boards visibility into how automation is influencing operational outcomes. The goal is simple: gain the speed benefits of automation without compromising accountability or trust.
I'll continue to emphasize that with AI, the principles of identity security, such as segregated, least-privileged, role-based access, and so on, have never been more important. Our goal must be to make small, single-purpose, task-driven minions capable of fast, autonomous action, but bounded by specific purpose and strong controls/guardrails. We almost have to reverse how we approached service accounts and NHI, and turn the paradigm upside down, because if we don't, the “Terminator movie” will become the “Terminator reality,” and we will lose control of our AI creations, leading to disastrous effects.
Albert Ziegler, Head of AI at XBOW
AI-powered systems scale in ways security teams haven't had to deal with before. That has the potential to reduce MTTD/MTTR, but only if the challenges that come with that scale are met accordingly. If you have a detective who'll only suspect an innocent person 10% of the time, that might be considered acceptable, maybe even good... but if you scale that detective across 1000 simultaneous investigations, you now have a hundred false positives. While that theoretically might qualify as "detected", it won't be useful.
For developers, it will just lead to tool fatigue. The crucial shift is from approval-based workflows to evidence-based workflows. In approval-based workflows, the machine turns up leads or suggestions, and the human checks or evaluates them. But evidence-based workflows are designed around the principle that the human's time is the most valuable resource -- the machine can only buy it with concrete, pre-validated evidence. Not weaknesses. Exploits. But that increased autonomy requires equally strong constraints: there needs to be a crisp definition of what the AI is allowed to do in its quest for evidence, strict validation of its actions according to these policies, and full auditability of how a finding was pursued.
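The arithmetic behind Ziegler's detective is worth making explicit; a few lines of Python show how a per-case rate that looks tolerable becomes a flood at scale:

```python
fp_rate = 0.10          # detective wrongly suspects an innocent 10% of the time
investigations = 1000   # simultaneous investigations at machine scale

expected_false_positives = fp_rate * investigations
print(expected_false_positives)  # 100.0 -- a hundred false leads to triage
```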
Melissa Bischoping, Sr Director, Security & Product Design Research, Tanium
Responsible and resilient AI deployment requires “humans in the loop”—but we must be intentional about where those humans sit and what they’re responsible for. It’s no longer enough to collect the logs and hope for the best. The logs must be consumed and analyzed so they tell a story about outcomes, not just actions taken. That’s how we build our AI muscle as practitioners: learning and refining autonomous workflows from how the AI behaves. Governance must be ongoing, agile, and rapidly iterative as these technologies advance—and that may be a real friction point for traditional GRC.
Culturally, analysts and hands-on practitioners will need new skills in communicating outcomes and risk. Over the next few years, I expect the people who used to be hands-on for every config and change to morph into agentic supervisors and orchestrators. They won’t review every command line in depth; they’ll need to coach AI toward organizational and industry best practices instead. That’s how practitioners scale their impact.
Curt Aubley, Chief Executive Officer/Co-Founder at Sevii
First, it’s important to clarify terminology. Automation is simply performing discrete, well-defined tasks, while autonomous action requires independent reasoning and decision making. Additionally, while the R in MTTR typically refers to “Remediate,” in practice standard metrics stop at a recommended remediation; the remaining steps (isolate, clean the system, remove isolation, and learn) vary widely based on the organization and the teams performing remediation.
Both automation and autonomous actions can be done without human intervention, but automation occurs within a much narrower scope, with very specific rules and governance guardrails, and the final action is at a minimum overseen by humans, if not subject to final approval. While this reduces some work for security teams, it also creates extra steps.
Autonomous action, when governed properly, literally has the ability to not just reduce tasks, but make substantive decisions and vastly reduce the entire workload of a security operations and engineering team. As one would expect, this requires more complex governance mechanisms. A governance framework for autonomous AI requires:
- An AI policy mechanism that enables customers to assign asset groupings/classifications, with associated risk scoring, and define what actions are permissible without intervention
- A real-time capability that allows security teams to intervene and adjust approved actions based on changing environmental, threat, or risk dynamics at any point in the agentic pipeline
- Individual API access controls that can be disabled in response to external or internal security concerns
- A “kill switch” to halt all agentic interactions as a last-resort fail safe (this and the API controls are sketched in code after this list)
- Transparency of agentic AI actions with corresponding auditable logs
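A minimal sketch of the third and fourth mechanisms on that list, a per-API disable control and a global kill switch with fail-safe defaults; the class, method, and API names are hypothetical:

```python
class AgenticControlPlane:
    def __init__(self, api_names):
        self.api_enabled = {name: True for name in api_names}
        self.killed = False

    def disable_api(self, name: str):
        # Individual API access control: revoke one capability only.
        self.api_enabled[name] = False

    def kill(self):
        # Last-resort fail safe: halt all agentic interactions at once.
        self.killed = True

    def authorize(self, api_name: str) -> bool:
        if self.killed:
            return False  # safe-mode default: deny everything
        return self.api_enabled.get(api_name, False)  # unknown APIs fail closed

plane = AgenticControlPlane(["isolate_host", "block_ip", "delete_file"])
plane.disable_api("delete_file")
print(plane.authorize("block_ip"))     # True
print(plane.authorize("delete_file"))  # False
plane.kill()
print(plane.authorize("block_ip"))     # False: everything halted
```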
Samuel Hassine, CEO/co-founder at Filigran
Governance can't stop at policy documents; it must operate at runtime and be enforced to be effective. Every automated decision should be observable, auditable, and owned, with humans explicitly accountable for what agents are allowed to do. If an organization can't explain why an agent acted, they shouldn't let it act at all.
Concretely, this means three things need to change. First, audit trails must become first-class infrastructure, not afterthoughts. Every agent action — every enrichment, every prioritization, every correlation — needs to be logged with full provenance: what data was used, which model reasoned, what confidence threshold triggered the action. This is what we enforce in OpenCTI with audit logging, version history, and full traceability across the data lifecycle, and it extends naturally to agentic workflows.
Second, scope and permissions must be as rigorous for AI agents as they are for human analysts. Role-based access control, bounded autonomy, explicit escalation paths. An agent that can enrich an indicator should not automatically be able to push a blocking rule. The governance model must define, per agent, what it can read, what it can write, and what requires human approval. That's the philosophy behind XTM One's agent architecture — customizable, autonomous within boundaries, but always operating under explicit human-defined guardrails.
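A toy illustration of that per-agent scoping; the agent names, scopes, and resources below are invented for this article and are not OpenCTI or XTM One APIs:

```python
# Per-agent governance model: what it can read, what it can write,
# and what always requires a human.
AGENT_SCOPES = {
    "enrichment-agent": {
        "read":  {"indicators", "reports"},
        "write": {"indicator_notes"},
        "needs_human": {"push_blocking_rule"},
    },
}

def authorize(agent: str, verb: str, resource: str) -> str:
    scope = AGENT_SCOPES.get(agent)
    if scope is None:
        return "deny"      # unknown agents fail closed
    if resource in scope.get("needs_human", set()):
        return "escalate"  # explicit escalation path, never silent action
    if resource in scope.get(verb, set()):
        return "allow"
    return "deny"

print(authorize("enrichment-agent", "read", "indicators"))           # allow
print(authorize("enrichment-agent", "write", "indicator_notes"))     # allow
print(authorize("enrichment-agent", "write", "push_blocking_rule"))  # escalate
```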
Third, the feedback loop must be continuous, not periodic. Traditional governance reviews policies quarterly. Runtime governance monitors agent behavior in real time, measures drift, flags anomalies, and surfaces decisions that fall outside expected patterns. The organizations that will succeed with agentic AI are the ones that treat explainability as an operational requirement — baked into the system design — not a compliance checkbox reviewed once a year.
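And a correspondingly simple sketch of runtime drift monitoring, comparing an agent's recent action mix against a baseline; the window, tolerance, and baseline rates are assumptions, and a production system would use a proper statistical test:

```python
from collections import Counter, deque

class DriftMonitor:
    def __init__(self, baseline: dict, window: int = 200, tolerance: float = 0.15):
        self.baseline = baseline      # expected action rates, e.g. {"enrich": 0.8, "block": 0.2}
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, action: str):
        self.recent.append(action)

    def drifted(self) -> bool:
        if not self.recent:
            return False
        counts = Counter(self.recent)
        total = len(self.recent)
        # Flag if any action's observed rate strays from baseline by
        # more than the tolerance (intentionally crude for illustration).
        return any(
            abs(counts.get(action, 0) / total - rate) > self.tolerance
            for action, rate in self.baseline.items()
        )

monitor = DriftMonitor({"enrich": 0.8, "block": 0.2})
for _ in range(150):
    monitor.observe("block")  # agent suddenly blocking far more than expected
print(monitor.drifted())      # True: surface for human review
```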
Erez Tadmor, Field CTO at Tufin
As organizations pursue lower MTTD and MTTR, governance has to evolve into a control loop built for machine-speed change. It is no longer sufficient to log actions after the fact or rely on static approval chains. CISOs need continuous posture evaluation, decision boundaries for what agents can recommend versus execute, validation against actual reachability, and a way to prove that segmentation and policy still hold as the environment changes. That is where explainability becomes operational, not theoretical: teams need to know what changed, what was reachable, what control logic was applied, and why the action was considered safe. In the agentic era, accountability depends on combining automation with continuous connectivity awareness, policy validation, and enforceable guardrails.
Darren Meyer, Security Research, Checkmarx
Automation must be verifiable. Governance needs systems that cross-check important actions, including (but not limited to) automated systems dedicated to verification and safety. Detection actions that can’t pass this independent verification should not be automatically acted on. Beyond these process changes, governance must also clearly assign accountability before automated systems are allowed to act, and provide guidance and processes so that accountable parties can meaningfully review the accuracy and performance of those systems.
Niall Browne, CEO and Co-founder, AIBound
As automation drives MTTD and MTTR down, governance must shift from human-reviewed decision logs to real-time, machine-auditable frameworks. Traditional change advisory boards and manual incident reviews cannot keep pace with AI-driven response actions executing in seconds.
Organizations need three things: first, audit trails that capture every automated decision -- what data the model consumed, what action it took, and why -- so that any response can be reconstructed and explained after the fact.
Second, tiered autonomy models where low-risk, high-confidence actions execute automatically while high-impact decisions trigger human-in-the-loop approval before execution. Third, continuous validation through automated red-teaming and drift detection to ensure AI-driven responses remain aligned with policy intent over time.
AI will drive mean time to detection and mean time to response from days, hours, and minutes down to milliseconds. Security teams need to embrace AI to deliver the same millisecond response -- or they will be left behind by adversaries who already operate at machine speed.
Igor Seletskiy, CEO and Founder of TuxCare
Legacy governance was built for human-paced decisions and check-the-box audits. That model fails once AI acts at machine speed. Oversight needs to be treated as a core architectural requirement.
Three shifts are non-negotiable. First, move from post-mortems to continuous observability. Every action should be logged in real time with inputs, logic, and confidence.
Second, accountability must be explicit. Every agent needs a clear owner, a defined policy, and a human responsible for outcomes.
Third, oversight has to scale. Humans can’t supervise thousands of agents, so AI must monitor AI, flagging issues while people set policy and handle exceptions.
The organizations that get this right won’t just automate faster. They’ll be the ones who can clearly explain what their systems did and why.
Ashley Rose, CEO, Living Security
Reducing MTTD and MTTR through automation is the right goal, but speed without accountability doesn't make organizations more secure; it makes failures harder to explain and harder to fix. The governance question isn't how to slow automation down. It's how to make sure that when an automated system acts, you can defend it.
Traditional governance was built for human-speed decisions. Approval chains and after-the-fact audit logs don't work when AI is correlating signals and triggering actions across multiple systems simultaneously. Governance has to operate at the same speed as the automation it oversees.
That requires three things. First, visibility into the decision, not just the outcome: not a timestamp, but documentation of the risk signals that drove the action, the confidence threshold applied, and whether a human was in the loop. Without that, accountability becomes a gap no one wants to own. Second, clear boundaries on what AI can do autonomously. Bounded, reversible actions are well-suited for full automation.
Decisions that affect access, policy enforcement, or cross-system remediation require human confirmation, because a wrong automated decision in that category isn't one mistake, it's hundreds executed before anyone notices. Third, defensibility in plain language — the ability to explain any automated action to a regulator, a board, or an affected employee without reverse-engineering the system after the fact.
Governance built this way doesn't constrain automation. It's what earns the organizational trust that allows automation to scale.
Neeraj Gupta, Chief Technology Officer, Pindrop
Governance must evolve with technological advances. Organizations need clear guardrails around where autonomy is appropriate, particularly for high-risk or irreversible actions. At the same time, explainability has to be embedded into the workflow. Security teams need visibility into how and why decisions are made, supported by auditable trails that capture the signals and logic behind each action.
This requires a multilayered approach to security, one that’s deeply integrated across communication channels like voice, video, and messaging. As threats become more sophisticated and AI-driven, point solutions are not sufficient; organizations need systems that can continuously validate identity, intent, and context across every touchpoint. Pindrop just launched a new agentic tool, Pindrop Protect Fraud Assist, to help contact center representatives with this. The platform accelerates investigations while keeping humans in the loop and decisions transparent.
Ultimately, the goal isn’t just faster responses, it’s more trustworthy responses. Without strong accountability, visibility, and layered defenses, automation can just as easily amplify risk as reduce it.