What Separates Real AI Governance From Policy Theater

11 min read
(March 31, 2026)

As organizations move from experimenting with AI to embedding it across workflows, policy is becoming one of the clearest indicators of governance maturity. In the Cyber Security Tribe Annual Report, 70% of respondents said their organizations already have policies related to AI in place, while a further 27% said those policies are still a work in progress. That level of activity suggests most organizations now recognize AI as a business and security issue, not simply a technology issue. The harder question is whether these policies are materially reducing risk or merely signaling that the organization has acknowledged it.

This article forms part of Cyber Security Tribe’s wider editorial series based on findings from the annual report and expert conversations held at RSAC 2026 in San Francisco. Across the series, senior cybersecurity leaders and practitioners were asked to respond to themes emerging from the report, including agentic AI, identity-centric security, quantum computing, employee concerns, and governance. For this article, we asked a central question for security and risk leaders: "What differentiates a policy that genuinely mitigates enterprise risk from one that exists primarily to demonstrate that the organization has acknowledged AI risk?"

The perspectives that follow explore the gap between written policy and operational control. They highlight the importance of enforceability, visibility, ownership, technical safeguards, and accountability. They also examine whether AI governance is embedded where work actually happens, across workflows, access controls, approved tools, data handling, monitoring, and user behavior, or whether it remains disconnected from day-to-day practice.

This article highlights how experts are distinguishing governance that shapes behavior, constrains risk, and supports safe adoption from governance that remains static, symbolic, and difficult to enforce in practice across the enterprise.

The following thought leaders contributed to this article:

 

 


Rock Lambros, Director of AI Security and Governance at Zenity

The first question I ask about any AI policy is simple: can you show me where it lives in your control framework and who was disciplined for violating it last quarter? Silence is the answer most of the time. That’s compliance theater.

A real AI policy passes three tests. First, it is enforceable. Standards and SOPs exist beneath it that explain exactly how people comply, not just that they must. Second, it maps to the AI the organization actually uses, not a hypothetical inventory drafted in a conference room. Third, it is audited on a regular cycle instead of written once and forgotten.

Many leaders miss that AI governance is not traditional cybersecurity governance with a new label. Conventional security governs infrastructure and code. AI governance must account for probabilistic systems whose behavior changes based on input, context, and model updates outside the organization’s control. The attack surface is dynamic. If policy does not reflect that reality, it becomes outdated almost immediately.

 


Kevin Paige, Field CISO at ConductorOne

The difference is enforceability. A policy that mitigates risk has three characteristics: it defines specific behaviors and boundaries, it has technical enforcement mechanisms behind it, and it has consequences for non-compliance. A policy that exists for optics has none of those — it's a document that says "use AI responsibly" without defining what responsible use means or how the organization would know if someone violated it.

I see this constantly. Organizations publish an AI acceptable use policy that reads like a mission statement. It talks about ethical use, responsible innovation, and alignment with company values. Then you ask them: how do you know which AI tools your employees are using? What data is flowing into those tools? What happens when someone violates the policy? The answers are usually "we don't," "we're not sure," and "nothing."

A policy that genuinely mitigates risk starts with an inventory of approved AI tools and a process for evaluating new ones. It defines data classification requirements — what data can and cannot be used with AI systems. It specifies access controls for AI agents — what they can access, under what conditions, and with what level of autonomy. And critically, it has monitoring behind it. If an employee connects an unapproved AI tool to company data, the security team knows about it.
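
To make that monitoring step concrete, here is a minimal sketch (an editorial illustration, not Paige's or ConductorOne's implementation) of checking outbound-proxy events against an allowlist of approved AI services. The domains, log fields, and tool names are assumptions for the example.

```python
# Illustrative sketch: flag egress to AI services that are not on the approved list.
# The approved domains and the log format are hypothetical examples.
from dataclasses import dataclass

APPROVED_AI_DOMAINS = {"api.openai.com", "bedrock.us-east-1.amazonaws.com"}  # example allowlist
KNOWN_AI_DOMAINS = APPROVED_AI_DOMAINS | {"api.anthropic.com", "generativelanguage.googleapis.com"}

@dataclass
class EgressEvent:
    user: str
    destination: str      # hostname taken from the outbound proxy log
    bytes_sent: int

def find_unapproved_ai_usage(events: list[EgressEvent]) -> list[EgressEvent]:
    """Return events where traffic went to a known AI service that is not approved."""
    return [e for e in events
            if e.destination in KNOWN_AI_DOMAINS and e.destination not in APPROVED_AI_DOMAINS]

if __name__ == "__main__":
    sample = [
        EgressEvent("alice", "api.openai.com", 2048),        # approved tool, no alert
        EgressEvent("bob", "api.anthropic.com", 512_000),     # known AI service, not approved
    ]
    for event in find_unapproved_ai_usage(sample):
        print(f"ALERT: {event.user} sent {event.bytes_sent} bytes to unapproved AI service {event.destination}")
```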

The 70% figure tells me most organizations have acknowledged the risk. The real question is how many of that 70% could actually detect and respond to an AI policy violation today. I'd guess the number is significantly lower.

 


Ankur Sheth, Senior Managing Director, FTI Consulting’s Cybersecurity practice

As the report highlights, organizations are rapidly formalizing their approach to artificial intelligence (AI). This pattern is common with fast-moving technologies: adoption accelerates first, and formal policies, processes, and controls follow. Yet not all AI policies are created with the same level of rigor or purpose.

Policies that truly mitigate enterprise risk are actionable, integrated into business processes, and reinforced with technical safeguards. They clearly define approved use cases, data‑handling requirements, and integration with other security tools. They also establish ownership and review cycles to keep pace with evolving AI capabilities and regulatory expectations.

In contrast, policies created merely to “show awareness” of AI risk tend to be high‑level statements without enforcement mechanisms, user education, or alignment to day‑to‑day operations. These policies may satisfy governance optics but do little to prevent actual data exposure or inappropriate model usage and do not reduce risk to the organization.

 


Chas Clawson, VP of Security Strategy, Sumo Logic

What we see is executives pressuring teams to deploy AI technology without providing the guardrails and governance it requires, partly because the technology is so new and breaks many of the deterministic, predictable ways IT systems have traditionally behaved.

So the hard truth is that many companies are deploying AI under competitive pressure before the guardrails are mature. The smartest path is not to wait for regulation to give you all the answers; waiting risks obsolescence. Apply proven security first principles now and stay flexible as standards catch up. The best thing teams can do is treat AI like any other high-risk system: least privilege, zero trust, full end-to-end log visibility, identity-based attribution for human and agent actions, and controls that can evolve quickly as standards mature.
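
The end-to-end visibility and identity-based attribution Clawson lists could look, in miniature, like the structured audit events below. This is a hedged sketch: the field names, identity formats, and trace-id scheme are assumptions, not a reference to any particular logging product.

```python
# Illustrative sketch: identity-based attribution for human and agent actions,
# correlated end-to-end with a shared trace id. Field names are hypothetical.
import json
import time
import uuid

def audit_event(actor_id: str, actor_type: str, action: str, resource: str, trace_id: str) -> str:
    """Emit one JSON log line attributing an action to a human or an AI agent."""
    return json.dumps({
        "ts": time.time(),
        "trace_id": trace_id,        # ties the human request to the agent actions it triggered
        "actor_id": actor_id,
        "actor_type": actor_type,    # "human" or "agent"
        "action": action,
        "resource": resource,
    })

trace = str(uuid.uuid4())
print(audit_event("alice@example.com", "human", "approve_workflow", "invoices/2026-03", trace))
print(audit_event("agent://invoice-summarizer", "agent", "read", "erp/invoices/2026-03", trace))
```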

In short, smart organizations are those that apply the same battle-tested security principles they’ve used for decades, stay agile, and pull ahead. Regardless, everyone should be prepared for a very rough ride as we learn from our mistakes and best practices take shape over the next 6 to 12 months.

 


Willie Tejada, GM & SVP, Aviatrix

A meaningful policy changes architecture and behavior. A performative policy changes documentation.

AI systems are not static tools. They interact with APIs, databases, SaaS platforms, and cloud workloads at machine speed. Here’s the test: if your AI agent can traverse your entire cloud network without hitting a single checkpoint, your policy is a press release, not a control.

Effective AI governance enforces workload identity, monitors east-west traffic across the cloud network, and controls egress pathways where data exfiltration actually occurs. If you cannot constrain blast radius in minutes, you have acknowledged risk, not mitigated it.
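
As a rough illustration of that checkpoint idea (not Aviatrix's implementation), the sketch below evaluates an agent workload's outbound call against a per-identity egress policy and denies anything the policy does not name. The workload identities, destinations, and policy table are hypothetical.

```python
# Illustrative sketch: an egress checkpoint keyed on workload identity.
# Identities, destinations, and the policy table are hypothetical examples.

EGRESS_POLICY = {
    # workload identity -> destinations it may reach
    "spiffe://corp/agents/invoice-summarizer": {"erp.internal.example.com", "api.openai.com"},
    "spiffe://corp/agents/support-triage": {"ticketing.internal.example.com"},
}

def egress_allowed(workload_id: str, destination: str) -> bool:
    """Deny by default: a workload may only reach destinations its policy names."""
    return destination in EGRESS_POLICY.get(workload_id, set())

def checkpoint(workload_id: str, destination: str) -> None:
    if egress_allowed(workload_id, destination):
        print(f"ALLOW {workload_id} -> {destination}")
    else:
        # A real control would block the flow and raise an alert for attribution.
        print(f"DENY  {workload_id} -> {destination}")

checkpoint("spiffe://corp/agents/support-triage", "api.openai.com")      # DENY: not in its policy
checkpoint("spiffe://corp/agents/invoice-summarizer", "api.openai.com")  # ALLOW
```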

 


Shashi Kiran, Chief GTM Officer, Nile

Risk is directly related to visibility, contextual knowledge, and action. Having a policy, and being able to apply it with intent in a timely manner, is essential to minimizing risk. As workflows become more autonomous, more systems are leveraging AI under the hood to gain efficiency and scale, as well as to respond to inbound incidents triggered by AI. Applying policies with AI and applying policies for AI are two different things, and a majority of organizations are still learning both.

Likewise, applying AI in greenfield environments versus brownfield environments are two different things. Most organizations are challenged to apply AI to complex infrastructure stacks or workflows, because the integrity of the underlying system may not be sound, and AI acts as a force multiplier that can magnify the good as well as the bad. So I expect this to be a work in progress for quite some time, as mid-to-large organizations with complex workflows take a conservative approach to defining and applying policies at scale.

 


Pieter Danhieux, Co-Founder & CEO, Secure Code Warrior

There is a vast difference between pulling together a set of general, intentionally vague policy directives and actually sitting down with subject-matter experts, security and engineering leaders, and scoping meaningful guardrails that positively impact the business.

Organizations that have downplayed or outright banned AI use are ignoring the shift in how people are now working, and this will not prevent clandestine use, nor the associated risk, within teams. Software engineers are most likely to be integrating AI tools into their existing workflows, from AI pair-programmers to full swarms of agentic AI coders, orchestrated by a human engineer. CISOs cannot afford to sit back and see what happens to software quality, security risks, and the overall expansion of their attack surface. They will need to evaluate this agentic coding tech stack (LLM, MCP, Agent) and the purposes for which it is being used.

Waiting for government legislation to mandate preventive security policies is a critical strategic error, and half-hearted documentation with vague rules like “don’t share sensitive information” or “don’t connect sensitive repositories” isn’t going far enough. Be intentional, with directives like:

  • Work with IT to determine a list of approved tools, all of which should be subject to the same scrutiny as any other vendor;
  • Utilize security guardrails and observability tech for every LLM in use, especially for engineering tasks (a minimal sketch of this follows the list);
  • Train every employee in AI security basics, demonstrating what sensitive data means in your organization, which applications may share data, and why role-based security awareness is so vital.
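
To make the second directive concrete, the following is a minimal, hypothetical sketch of a guardrail placed in front of an LLM call: it screens prompts for obviously sensitive patterns and logs every decision for observability. The patterns, the send_to_llm stub, and the log format are assumptions for illustration only.

```python
# Illustrative sketch: a pre-flight guardrail in front of an LLM call.
# The regexes, the send_to_llm() stub, and the log format are hypothetical examples.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-guardrail")

SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def send_to_llm(prompt: str) -> str:
    """Stub standing in for whichever approved model endpoint the organization uses."""
    return f"[model response to {len(prompt)} chars]"

def guarded_completion(user: str, prompt: str) -> str:
    """Block prompts that match sensitive-data patterns; log every call for observability."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        log.warning("blocked prompt from %s: matched %s", user, ", ".join(hits))
        raise ValueError(f"Prompt blocked by guardrail: {', '.join(hits)}")
    log.info("forwarded prompt from %s (%d chars)", user, len(prompt))
    return send_to_llm(prompt)

print(guarded_completion("alice", "Summarize this sprint's retro notes"))
```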

 


Brian Fox, Chief Technology Officer and co-founder, Sonatype

A real AI policy changes behavior. A performative one changes documentation.

If a policy lives in a slide deck or an acceptable use memo but never shows up in the build pipeline, access controls, or deployment workflow, it is not reducing risk. It is signaling awareness. That may satisfy an internal governance requirement, but it does not materially change exposure.

What actually mitigates risk is when policy becomes enforceable in the systems where work happens. That means controlling what data AI tools can access, what models are approved for which use cases, what code or dependencies AI systems are allowed to introduce, and what validation has to happen before any AI-generated output reaches production. In practice, this is less about writing better principles and more about embedding better controls.
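
One way to picture "controlling what code or dependencies AI systems are allowed to introduce" is a pipeline gate that fails the build when a change declares packages outside an approved set. The sketch below is an editorial illustration under simple assumptions (a plain requirements.txt in the working directory and a hypothetical allowlist), not Sonatype's tooling.

```python
# Illustrative sketch: a CI gate that rejects dependencies outside an approved set.
# The allowlist and file path are hypothetical; a real pipeline would also pin
# versions and check advisories, which is out of scope here.
import re
import sys
from pathlib import Path

APPROVED_PACKAGES = {"requests", "pydantic", "sqlalchemy"}  # example allowlist

def declared_packages(requirements_file: str) -> set[str]:
    """Parse package names out of a simple requirements.txt (extras/URLs not handled)."""
    names = set()
    for raw in Path(requirements_file).read_text().splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line:
            continue
        # keep only the name portion before any version specifier or extras
        name = re.split(r"[<>=!~\[; ]", line, maxsplit=1)[0].lower()
        if name:
            names.add(name)
    return names

def main() -> int:
    unapproved = declared_packages("requirements.txt") - APPROVED_PACKAGES
    if unapproved:
        print(f"FAIL: unapproved dependencies introduced: {sorted(unapproved)}")
        return 1
    print("PASS: all declared dependencies are on the approved list")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```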

The mistake many organizations make is treating AI governance as separate from software governance. It is not separate. If AI is generating code, suggesting packages, or influencing production decisions, then it is operating inside the software supply chain and should be governed accordingly.

The difference is straightforward: a performative policy says, “Be careful.” A real policy makes unsafe behavior harder, detectable, and accountable by default.

 


Kory Daniels, CISO at LevelBlue

A policy that mitigates risk is operational, not aspirational. The creation of a policy has value for compliance, audit, and staff, but the real value comes from shaping human and technological behavior. An AI policy goes beyond high-level principles and clearly defines how AI can be used, where sensitive data is restricted, and who is accountable for oversight. The policy is embedded into workflows and integrated with identity controls, data classification, and monitoring, so enforcement happens automatically, not manually. This is the only way cyber programs can measure policy effectiveness through reporting, which provides important feedback on whether the policy is achieving the intended business results.
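
A compressed, hypothetical illustration of that kind of embedded enforcement: the decision to let a dataset reach an AI tool is computed from its classification label and the tool's approval tier, and every decision is counted so effectiveness can be reported. The labels, tiers, and policy matrix below are assumptions for the example.

```python
# Illustrative sketch: classification-aware enforcement with built-in reporting.
# Classification labels, tool tiers, and the policy matrix are hypothetical examples.
from collections import Counter

# Highest classification each tool tier is cleared to receive
POLICY = {"public_saas_llm": "public", "enterprise_llm": "internal", "private_hosted_llm": "confidential"}
ORDER = ["public", "internal", "confidential", "restricted"]

decisions = Counter()

def allowed(tool_tier: str, data_label: str) -> bool:
    cleared_up_to = POLICY.get(tool_tier, "public")
    verdict = ORDER.index(data_label) <= ORDER.index(cleared_up_to)
    decisions["allow" if verdict else "deny"] += 1   # feeds the effectiveness report
    return verdict

print(allowed("public_saas_llm", "confidential"))    # False: blocked automatically
print(allowed("private_hosted_llm", "confidential")) # True
print(dict(decisions))                               # e.g. {'deny': 1, 'allow': 1}
```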

In contrast, performative policies tend to be static documents. They signal awareness of AI risk but lack enforcement mechanisms, technical controls, or alignment with how employees actually use AI tools day-to-day.

The real differentiator is whether the policy drives behavior. If it’s paired with enablement, like approved tools, employee training, and visibility into usage, then it reduces risk. If it’s disconnected from the reality of how AI is being adopted across the business, it simply creates a false sense of security.

 


Erez Tadmor, Field CTO at Tufin

AI governance becomes meaningful when it helps the organization operate safely in an environment that is changing faster and with less direct human oversight. A credible policy gives teams clear rules for how AI can be used, what data can be exposed to it, where human review is required, and how accountability is maintained as AI becomes embedded across workflows and operations. In practice, the difference comes down to whether the policy can actually shape behavior at scale. If it is disconnected from technical controls, approval paths, and day-to-day operating decisions, it may acknowledge the risk without materially reducing it.

 


Darren Meyer, Security Research, Checkmarx

When you see AI policies that are making an attempt to balance control with the realities of rapid adoption, that’s generally a good sign that an organization is headed in the right direction. Good policy tends to be enforceable by technical controls, backed by organizational investment in those controls, and focused on incentivizing low-risk behaviors.

By contrast, poor AI policies tend to take an extreme position: either introducing little control beyond procedural niceties, or seeking to ban all but very narrow and tightly controlled uses of AI systems.

 


Niall Browne, CEO and Co-Founder, AIBound

When any new technology challenge emerges, organizations follow a predictable maturity curve: policy first, then training, followed by technical controls, and finally audit and compliance programs to validate those controls are working. The fact that 70% of organizations have an AI policy in place, with another 27% in progress, tells us exactly where we are on this journey: step one. Having a policy is the first acknowledgment of the long road ahead.

Over the next one to two years, companies will naturally evolve through each of these stages, building training programs to educate their workforce, deploying technical controls to secure AI usage and data flows, and ultimately establishing audit and compliance frameworks to measure effectiveness.

A policy that genuinely mitigates risk will be one that evolves alongside this maturity curve: not a static document, but a living framework that drives actionable controls, defines clear ownership, and is continuously tested against real-world AI use cases. This journey will require a huge amount of organizational disruption as companies incorporate secure AI as part of their broader transformation.

 


Guru Sethupathy, Head of AI Governance at Optro

For decades, organizations have been managing technology risk through structured policies and processes. This model has worked well because most enterprise technologies behaved predictably and transparently.

AI, however, adds additional complexity. AI can be a complex technology system or a simple predictive model. It can be the entire technology or a small feature in a product. The same AI tool can be used in a multitude of ways that vary from the mundane to the very risky. And every single person in an organization will touch and use AI.

Traditional technology governance policies typically sit in documents that employees rarely revisit. Training often happens months before someone encounters a real-world scenario involving AI. By the time an employee is deciding whether to paste sensitive information into a generative model or rely on an automated analysis, those guardrails are far removed from the moment the decision is made. Static point-in-time controls, training, and enforcement will be deeply insufficient. This is why traditional governance approaches will fall short with AI.

When technology becomes embedded directly in the moment decisions occur, governance also needs to be embedded in every step. AI policies and processes need to govern data, the behaviors of AI systems, and the behaviors of humans interacting with these systems. Additionally, AI policies and processes need to recognize that risk doesn’t stop at your organization’s boundary. There’s an increasingly complex AI supply chain, so organizations need to understand and manage the risks across that supply chain.

 


Matt Kunkel, CEO and Co-Founder of LogicGate

The most effective policies balance a full understanding of how an organization will use AI with clearly defined guardrails for data access and security. Enterprises want to unlock the full ROI of their AI tools, but the breakneck pace of AI innovation means many are effectively building the plane as it’s flying. Even the goalposts for what good AI governance programs look like are constantly moving as new use cases (and new risks) emerge on a seemingly daily basis. Many business leaders still don’t know where to insert the appropriate guardrails for safe AI adoption, and some see governance as a hindrance to progress. As such, they view AI policies as a check-the-box exercise without seriously considering how exposure to AI-related threats can lead to cybersecurity vulnerabilities, compliance violations, regulatory fines, and reputational damage with partners and customers.

Enterprises need to view governance as a top-priority pillar in conjunction with AI investment. Genuine governance policies assess where an organization stands in its AI journey, how it plans to use AI across operations, and what adoption means for its overall risk profile. They also require clear, transparent boundaries for enterprise and customer data usage and which datasets AI tools can access, ensuring sensitive, confidential data is properly protected. When an AI governance policy is implemented properly, enterprises not only mitigate exposure to threats and damages, but also make their customers feel comfortable knowing they’re deploying AI responsibly. Strong AI governance isn’t a roadblock—by establishing clear guidelines and eliminating uncertainty, it serves as an effective business enabler that empowers employees to leverage AI more confidently.