Governing AI where work actually happens [Q&A]

Enterprises are rushing to embrace AI copilots and browser-based assistants, but most struggle with governing how employees actually use them. Sensitive data gets uploaded, prompts leak strategy, and risky extensions run unchecked, all outside the reach of traditional network or app-layer controls.

We spoke to Michael Leland, field CTO at Island, to discuss why the UI surface is becoming the most strategic security layer as SaaS and AI copilots flood enterprise workflows.

BN: Why is the presentation layer becoming the most critical control point for enterprise security in the AI era?

ML: Because it’s where people, data, and applications actually meet. Most of the risky AI activity that keeps security leaders awake at night happens in the user interface: pasting sensitive text into a chatbot, dragging a spreadsheet into an AI assistant, accepting code suggestions, or installing a ‘helpful’ extension. If your controls don’t see the prompt, the paste, the upload, or the rendered output, you’re governing yesterday’s risks, not today’s.

Traditionally, enterprises relied on controls at the network and application tiers: secure web gateways, proxies, API controls, CASB, and SSE platforms. That approach worked when traffic was more transparent and applications were discrete destinations. But today, much of that traffic is encrypted end-to-end, often over protocols like HTTP/3 running over QUIC. At the same time, AI isn’t a single destination. It’s embedded directly into SaaS user interfaces and browser workflows.

The presentation layer is the only place that combines context (who is acting, what data is involved, where they are) with intent (what the user is actually trying to do). It provides the earliest and least disruptive point of intervention before data leaves the screen.

BN: How do today’s AI-driven workflows introduce risks that traditional security stacks were never designed to govern?

ML: Two changes have broken the old model.

First, AI is now embedded, not isolated. AI assistants live inside CRMs, email platforms, office suites, ticketing systems, and developer tools. To the network, they look like any other page element. A filter can’t distinguish between an ordinary save action and an AI assistant ingesting sensitive financial data. That means exfiltration can happen invisibly within tools the enterprise already trusts.

Second, a lot of risk now occurs on screen, not in transit. Copying source code into a chatbot, dropping protected health information into a summarizer, or granting an extension blanket access to browser tabs are all user-level actions. By the time traffic hits the network, the data may already be compromised or wrapped in opaque encryption.

This gets even more complicated when you add the explosion of non-human identities: bots, service accounts, API keys, and autonomous AI agents making their own requests. These create a sprawl of permissions and actions that perimeter defenses and rule-driven IAM systems were never designed to reason about in real time.

BN: CIOs often feel their only options are to block AI, tightly restrict it, or accept the risk. What’s the better path forward?

ML: The better path is to give employees a sanctioned on-ramp to AI -- guardrails that are visible to users and enforceable by policy. Instead of saying ‘yes to everything’ or ‘no to everything,’ enterprises can create proportionate controls based on role, context, device posture, and data sensitivity.

That starts with defining access. A sales team might be approved to use a specific model with CRM data. Legal could use an AI assistant with stricter redaction policies. Contractors may only have access to non-sensitive prompts. If someone tries an unapproved tool, they can be automatically redirected to the sanctioned option with a clear explanation of why.
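
To make that concrete, here is a minimal, purely illustrative sketch of the kind of role-to-tool mapping ML describes. The role names, tool names, sensitivity labels, and fallback below are hypothetical, not Island’s actual policy model:

```python
# Hypothetical role-based AI access policy: which assistants each role may use,
# the most sensitive data class each is approved for, and where to redirect
# requests for unapproved tools. Purely illustrative.
AI_ACCESS_POLICY = {
    "sales":      {"tools": {"approved-crm-copilot"}, "max_sensitivity": "customer"},
    "legal":      {"tools": {"internal-assistant"},   "max_sensitivity": "confidential"},
    "contractor": {"tools": {"public-chatbot"},       "max_sensitivity": "public"},
}
SANCTIONED_FALLBACK = "internal-assistant"

def resolve_tool(role: str, requested_tool: str) -> tuple[str, str]:
    """Return the tool the user actually gets, plus an explanation to show them."""
    policy = AI_ACCESS_POLICY.get(role, AI_ACCESS_POLICY["contractor"])
    if requested_tool in policy["tools"]:
        return requested_tool, "Approved for your role."
    return SANCTIONED_FALLBACK, (
        f"{requested_tool} isn't sanctioned for {role}; "
        f"use {SANCTIONED_FALLBACK} instead."
    )

print(resolve_tool("sales", "public-chatbot"))
```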

Controls also need to adapt in real time. If an engineer is pasting source code into an AI model, the system might allow it with justification, require extra approval, or block it outright depending on policy. Importantly, those decisions should be explainable, because users need to see why a control was applied. That transparency builds trust and reduces friction.
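
Those adaptive, explainable decisions could be modeled as a small policy function. The sketch below is an illustration with invented data classes, device types, and actions, not Island’s decision logic:

```python
from dataclasses import dataclass

@dataclass
class PasteContext:
    role: str         # e.g. "engineering", "legal"  (hypothetical labels)
    device: str       # "corporate" or "byod"
    data_class: str   # e.g. "source_code", "phi", "public"

def evaluate_paste(ctx: PasteContext) -> tuple[str, str]:
    """Decide what happens when content is pasted into an AI tool.

    Returns (action, reason) so the decision can be explained to the user.
    """
    if ctx.data_class == "phi":
        return "block", "Protected health information can't be sent to AI tools."
    if ctx.data_class == "source_code" and ctx.role == "engineering":
        if ctx.device == "byod":
            return "require_approval", "Source code from an unmanaged device needs sign-off."
        return "allow_with_justification", "Enter a reason; this paste will be logged."
    return "allow", "No sensitive content detected."

print(evaluate_paste(PasteContext("engineering", "corporate", "source_code")))
```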

This approach avoids the ‘all or nothing’ trap. Employees get a safe, predictable way to use AI, and enterprises avoid a patchwork battle against shadow AI tools.

BN: What does it look like in practice to enforce policy at the presentation layer?

ML: It means applying policy right where interactions happen: inside the browser. Think of three categories: inputs, outputs, and add-ons.

  • Inputs: Before data leaves the device, the system can intercept keystrokes, clipboard actions, or file uploads. Sensitive content like source code, customer data, or financials can be classified locally. Policy then determines what happens: legal may be blocked from pasting draft contracts into public chatbots, while engineering might need to tie a code submission to a ticket ID before sending.
  • Outputs: Before results render, responses can be inspected and controlled. Patterns like Social Security numbers can be redacted (see the sketch after this list). Warnings can flag when summaries reference unreleased financials. And if the output violates policy, it can be blocked from appearing entirely.
  • Add-ons: Extensions and embedded assistants are another risk surface. Organizations can maintain an allowlist of vetted extensions, constrain their permissions, and detect risky behaviors like screen scraping.
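
To ground the inputs and outputs categories, here is a deliberately simplified sketch of local classification and redaction. Real presentation-layer controls rely on far richer detection than these two illustrative regular expressions:

```python
import re

# Illustrative patterns for classifying content locally, before it leaves the device.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
SECRET_PATTERN = re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b")

def classify_input(text: str) -> list[str]:
    """Label sensitive content found in a prompt, paste, or upload."""
    labels = []
    if SSN_PATTERN.search(text):
        labels.append("pii:ssn")
    if SECRET_PATTERN.search(text):
        labels.append("secret:api_key")
    return labels

def redact_output(text: str) -> str:
    """Mask sensitive patterns in a model response before it renders."""
    return SSN_PATTERN.sub("[REDACTED SSN]", text)

print(classify_input("customer SSN is 123-45-6789"))    # ['pii:ssn']
print(redact_output("Record 123-45-6789 was updated"))  # Record [REDACTED SSN] was updated
```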

These decisions can also be tied to identity and context. Employees may get broader permissions than contractors. A BYOD device may face tighter rules than a corporate laptop. Every action and decision can be logged with appropriate privacy controls, creating a fine-grained record for audits and investigations.
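
The fine-grained audit trail could be as simple as one structured record per decision. The fields below are an assumed shape for such a record, not a documented log format:

```python
import json, time

def audit_record(user: str, role: str, device: str, action: str,
                 decision: str, reason: str, data_labels: list[str]) -> str:
    """Build a structured audit entry; only labels are stored, never raw content."""
    return json.dumps({
        "ts": time.time(),
        "user": user, "role": role, "device": device,
        "action": action,            # e.g. "paste", "upload", "extension_install"
        "decision": decision,        # e.g. "allow", "block", "require_approval"
        "reason": reason,
        "data_labels": data_labels,  # e.g. ["pii:ssn"] -- never the content itself
    })

print(audit_record("jdoe", "engineering", "byod", "paste",
                   "require_approval", "Source code from an unmanaged device", ["source_code"]))
```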

In short, policy enforcement at the presentation layer governs exactly what the user sees and does, before sensitive data moves across the network.

BN: How does shifting controls to the browser help enterprises balance productivity, compliance, and innovation with AI?

ML: It reframes the problem. Enterprises no longer have to choose between productivity and protection; they can achieve both.

On the productivity side, employees get clarity. They can use AI where it adds value -- summarizing documents, drafting communications, analyzing data -- without stumbling into invisible tripwires. Policies are visible, predictable, and tailored to their role.

On the compliance side, controls at the presentation layer offer precision. Organizations can enforce HIPAA, PCI, GDPR, or internal rules based on the actual content and action, not just the traffic destination. Explainable decisions and detailed logs provide defensible evidence for audits and incident response.

On the innovation side, leadership gains confidence to say ‘yes.’ With the proper guardrails in place, teams can experiment with new AI assistants, pilot vetted extensions, and adopt emerging workflows faster. Risk is governed where it’s created, so enterprises don’t have to wait for every SaaS vendor or model provider to solve the problem themselves.

A helpful way to think about it: network and app-tier defenses are still necessary, but they’re no longer sufficient. The browser has become the most strategic endpoint -- where intent can be observed, policies applied, and employees empowered to innovate responsibly. With risk governed at the presentation layer, enterprises can embrace the promise of AI while maintaining security, compliance, and trust.

Image credit: Michael Borgers/Dreamstime.com
