Put Security Where Work Actually Happens: The Presentation Layer

November 5, 2025

Picture a “bulletproof” enterprise tech stack: phishing-resistant MFA, zero trust network access, DLP, the works. Then an industrious employee installs a free AI tool and uploads a draft earnings release to polish the wording.

The text is processed off-site, absorbed into the large language model (LLM), and fragments of it later resurface in replies to other users’ queries about the company and its industry. The stock tanks. No malware, no exploit, just an interaction the security stack never saw.

Scenarios like these are keeping CISOs up at night. Three years into the AI era, these technologies are now inextricably woven into how we work. Still, a mere 17% of companies have safeguards capable of blocking or scanning uploads to public AI tools, leaving the other 83% reliant on training, warnings, or no policies at all. The controls on which security leaders have long relied were never designed to govern how work today actually unfolds.

In light of this, most CISOs believe they have three choices:

  • Try to block AI wholesale,
  • Poke narrow holes for “approved” tools, or
  • Throw their hands up because the space is moving too fast.

None of these are durable strategies.

Instead, CISOs should move governance to the place where users, data, and applications actually meet: the presentation layer. This is the UI surface where prompts are typed, files are dragged, clicks create actions, and results are rendered. CISOs looking for real visibility and enforceable policy for modern work and AI usage won’t find a more effective control point.

Traditional controls to secure work are losing ground

Ever-growing security stacks have primarily enforced policy at the network and app tiers: secure web gateways, proxies, CASB/SSE, and inline inspection. That model assumed that security teams could observe and block traffic in transit. But today, more of that traffic is obscured by end-to-end encryption, HTTP/3/QUIC, or increasingly privacy-preserving transports. What’s more, an ever-growing number of SaaS apps now embed AI assistants that look indistinguishable from the rest of the UI.

Meanwhile, shadow AI has taken the place of yesterday’s shadow IT. Employees install GenAI extensions in their browsers and adopt risky productivity add-ons. Even well-meaning users can exfiltrate sensitive data with a copy-paste or a drag-and-drop, long before any upstream DLP sees a byte on the wire.

Clearly, risk is no longer confined to “AI sites.” It shows up in a growing number of scenarios:

  • Direct uploads of sensitive content, such as a financial analyst dragging a Q4 forecast spreadsheet into a public chatbot to “summarize for the board”;
  • Embedded assistants inside SaaS, such as an AI email composer tool built into a CRM platform that can scrape sensitive data;
  • Extensions with broad permissions, such as a “meeting notes” extension that has access to emails and schedules;
  • Screen-scraping and screenshot analyzers, such as productivity add-ons that capture periodic screenshots to summarize your day, inadvertently storing proprietary information;
  • Cross-domain oversharing via internal LLMs, such as an AI assistant surfacing draft earnings commentary to an engineer;
  • Behavioral and intent signal leakage, such as a senior leader’s AI queries telegraphing a pending merger or acquisition; and
  • Model training risks, where AI tools retain uploaded content for model improvement, and that content may resurface later in outputs seen by other users.

Trying to mitigate all these risks from the back end is a losing proposition. If an organization’s control point can’t see the employee’s prompt, paste, upload, or rendered result, it can’t govern it.

Work’s new control point

By moving controls to where interactions happen, CISOs’ options change dramatically. At the presentation layer, they can apply policy based on real user intent and context—saying “yes” to AI, safely.

We’re past fighting shadow AI one site at a time; with presentation layer governance, CISOs can create a sanctioned onramp for AI with the necessary guardrails to protect sensitive business data. A helpful start is publishing an approved set of tools and models, scoping permissions within those tools, and routing users there automatically.
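
As a rough illustration, a routing policy at this layer might look something like the TypeScript sketch below, as it could run in an enterprise browser or managed extension. The tool names, domains, and scopes are hypothetical placeholders, not any vendor’s actual schema.

```typescript
// Hypothetical policy schema for a sanctioned AI onramp; tool names,
// domains, and scopes are illustrative, not a real product's config.
interface ApprovedTool {
  name: string;
  domain: string;          // where users should be routed
  allowedScopes: string[]; // permissions granted within the tool
}

const approvedTools: ApprovedTool[] = [
  { name: "Enterprise Copilot", domain: "copilot.example.com", allowedScopes: ["chat", "summarize"] },
  { name: "Internal LLM", domain: "llm.internal.example.com", allowedScopes: ["chat", "code-review"] },
];

// Resolve where a user should land: the tool itself if sanctioned,
// otherwise the default onramp portal.
function resolveDestination(host: string): string {
  const match = approvedTools.find((tool) => tool.domain === host);
  return match ? match.domain : "ai-portal.example.com";
}

console.log(resolveDestination("chat.unknown-ai.example")); // -> "ai-portal.example.com"
```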

They can also direct users to different tools by role and domain. For example, the finance team might be allowed to use a specific LLM approved for sensitive financial information, while legal could be granted access to a tailored AI assistant. Organizations can allow only vetted extensions and constrain their permissions to least privilege, as well as set triggers or warnings in response to unauthorized access attempts.
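
A minimal sketch of that role- and domain-based model follows; the roles, tool IDs, and extension permissions here are assumptions for illustration.

```typescript
// Illustrative role-to-tool mapping and extension allowlist; the roles,
// tool IDs, and permissions are assumptions, not a real schema.
type Role = "finance" | "legal" | "engineering";

const toolsByRole: Record<Role, string[]> = {
  finance: ["finance-llm"],     // LLM approved for sensitive financials
  legal: ["legal-assistant"],   // tailored assistant for legal
  engineering: ["internal-llm", "code-assistant"],
};

// Vetted extensions constrained to least-privilege permissions.
const extensionAllowlist: Record<string, string[]> = {
  "meeting-notes-ext": ["calendar.read"], // calendar only, no email access
};

function canUseTool(role: Role, toolId: string): boolean {
  return toolsByRole[role].includes(toolId);
}

// An unauthorized attempt can warn (or trigger review) the moment it happens.
if (!canUseTool("engineering", "finance-llm")) {
  console.warn("Blocked: finance-llm is not approved for this role.");
}
```

The point is less the data structure than the placement: because the check runs at the UI, the warning fires on the attempt itself rather than after traffic has already left the device.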

In terms of inputs, CISOs can use presentation layer governance to evaluate keystrokes, clipboard actions, and file uploads before they leave the device. CISOs can enforce rules around PII, source code, or customer data. They can require approvals for certain actions and trigger step-up authentication when risk increases. On the output side, CISOs can set rules to redact or block specific types of content before rendering.
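
To ground this, here is a simplified sketch of input inspection, assuming it runs as a browser-extension content script; the regexes are deliberately crude stand-ins for a real classification engine.

```typescript
// Simplified input inspection, assuming a browser-extension content
// script; these patterns are crude stand-ins for a real classifier.
const sensitivePatterns: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/,   // US SSN-like pattern
  /\b(?:\d[ -]?){13,16}\b/,  // rough payment-card pattern
];

function looksSensitive(text: string): boolean {
  return sensitivePatterns.some((pattern) => pattern.test(text));
}

// Evaluate clipboard pastes before the content ever reaches the prompt box.
document.addEventListener("paste", (event: ClipboardEvent) => {
  const pasted = event.clipboardData?.getData("text") ?? "";
  if (looksSensitive(pasted)) {
    event.preventDefault(); // block before the data leaves the device
    alert("Paste blocked: content matches a sensitive-data policy.");
    // A real deployment might route this to an approval flow or
    // require step-up authentication instead of a hard block.
  }
});
```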

CISOs can also ensure they are capturing interactions – logging prompts, responses, and other UI interactions, with appropriate privacy controls – to strengthen compliance reporting and incident investigations with fine-grained context.
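
One way to picture such logging, with redaction applied before a record is ever stored, is sketched below; the field names and audit endpoint are assumptions for illustration.

```typescript
// Sketch of interaction logging with privacy controls applied before
// storage; field names and the audit endpoint are assumptions.
interface InteractionEvent {
  user: string;
  tool: string;
  action: "prompt" | "upload" | "response";
  content: string;
  timestamp: string;
}

// Redact matched sensitive spans so audit records stay useful for
// investigations without retaining raw PII.
function redact(text: string): string {
  return text.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED]");
}

async function logInteraction(event: InteractionEvent): Promise<void> {
  const record = { ...event, content: redact(event.content) };
  // Forward the sanitized record to a hypothetical audit pipeline.
  await fetch("https://audit.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
}
```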

Across the board, CISOs should publish clear, plain-language user policies. For example, contractors on personal devices may have narrower permissions than employees on managed endpoints, or legal teams might be barred from pasting draft agreements into external LLMs without first redacting sensitive data. Users should clearly understand the “why” (role, risk, or regulatory basis) and the “how” (approved tools, required steps, and support contacts) to boost both compliance and productivity.

Control where it matters most: the presentation layer

It’s not enough to embrace the promise of AI to transform work and redefine the role of the browser. Without a deliberate strategy for governing the presentation layer, that vision is shortsighted at best and dangerous at worst.

Moving AI controls to the presentation layer gives CISOs what upstream controls cannot: direct visibility into intent and the ability to intervene before data leaves the screen. From this vantage point, CISOs can enforce HIPAA, PCI DSS, GDPR, and other strict regulations without having to re-architect every app; generate defensible records for audits and investigations; and stop leaks in their tracks, before they become a flood downstream.

In short, they can govern AI use within the context of how people actually work today. When most email is webmail, most apps are SaaS, and most AI is invoked in the browser, the presentation layer is the work surface.

For too long we’ve assumed the UI was a passive display. In reality, the presentation layer’s proximity to the user gives CISOs the richest context and the earliest, least disruptive point of enforcement. When CISOs turn their attention toward the presentation layer, they can embrace all the promises of AI – intelligence, speed, productivity, and more – while upholding their own promise to protect organizational security and compliance.