
Your security operations centre (SOC) analyst has just seen it: an employee typing “I can’t cope; I haven’t slept properly in weeks” into a workplace AI tool.
What now? Escalate to HR? Notify Legal? Pretend they didn’t see it? That uneasy pause between visibility and responsibility is where the real AI risk lives. Not in the model, but in the data: what enters these systems, where it travels, who sees it and what they’re obligated to do next.
As AI rolls into every function, one truth is becoming undeniable: AI itself isn’t the risk. Data is. “Without data, AI is just an expensive search engine,” says James Tucker, head of CISO EMEA at Zscaler. “The technology is secondary; the real problem is how information is accessed, shared and governed.”
From inadvertent data disclosures in generative AI prompts to the rapid spread of shadow AI, risks to privacy, IP and compliance are multiplying faster than most control frameworks can adapt.
The CISO governance dilemma
Leaders face three possible paths – and none offers a sustainable answer.
First, allow everything: productivity spikes – until your source code, pricing models or M&A decks appear in someone else’s training set.
The second choice is to block everything: the data doesn’t leave… officially. But in reality, staff will photograph screens, forward files to personal accounts or paste content into unsanctioned tools to get work done.
Or third, outsource judgment to a vendor: turn on whatever is bundled with your productivity suite and hope their defaults match your risk appetite. None of these are strategies – they’re compromises.
“The underlying issue is incentives,” says Tucker. “Employees and their managers both want faster output, broader capability and better results. If using an unapproved AI tool helps them leave the office in time for the school run, many will do it.”
Zero trust: from security project to governance philosophy
This is why zero trust is fast becoming the cornerstone of AI governance. At its core, zero trust is a policy engine for connections: every user, device and session is evaluated continuously based on identity, context and risk – not on whether it happens to sit on the “right” network.
Applied to AI, it provides three critical capabilities:
Visibility
See which AI apps are being used, by whom, from where, and for what tasks. This isn’t surveillance; it’s understanding data flows so you can enable the good and contain the risky.
Enforcement
Move beyond the allow/deny binary. Permit approved tools for specific roles, use step-up authentication, and guide behaviour with coaching pages that explain permitted use.
Continuous classification
As new tools appear daily, static rules can’t keep up. Destinations and content must be classified dynamically, as the sketch below illustrates.
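Taken together, the three capabilities amount to a per-request decision loop. The following is a minimal sketch, in Python, of how such a policy engine might reason about an AI-bound request; every tool name, role and category in it is invented purely for illustration and does not describe any particular product.

from dataclasses import dataclass

# Hypothetical tool catalogue; a real deployment would pull this from a
# continuously updated classification service, not a hard-coded dictionary.
AI_TOOL_STATUS = {
    "approved-assistant": "sanctioned",
    "unknown-chatbot": "unsanctioned",
}

@dataclass
class Request:
    user_role: str          # identity context, e.g. "engineer" or "finance"
    destination: str        # which AI tool the traffic is heading to
    data_sensitivity: str   # content classification: "public", "internal" or "restricted"

def decide(req: Request) -> str:
    """Return an action: allow, coach, isolate or block."""
    status = AI_TOOL_STATUS.get(req.destination, "unsanctioned")

    # Restricted data never leaves, whatever the destination.
    if req.data_sensitivity == "restricted":
        return "block"

    # Sanctioned tools are allowed, but internal data triggers a coaching page
    # reminding the user of acceptable-use policy.
    if status == "sanctioned":
        return "coach" if req.data_sensitivity == "internal" else "allow"

    # Unsanctioned tools are isolated rather than hard-blocked, preserving
    # visibility without driving users to work around the control.
    return "isolate"

# Example: an engineer pasting internal content into an unknown chatbot
print(decide(Request("engineer", "unknown-chatbot", "internal")))   # isolate

The point of the sketch is the shape of the decision, not the specific rules: every request is evaluated on identity, destination and content, and the answer is richer than a simple allow or deny.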
It’s critical to treat zero trust as a philosophy, not a project. Projects end; governance doesn’t. The goal is not to slow the business, but to install the brakes on a race car so you can steer confidently at speed.
Uncomfortable new liabilities
AI also exposes duty-of-care questions that boardrooms can no longer ignore. Tucker points to real cases where prompt logs surfaced signals of distress, self-harm or harassment.
“If your controls capture those prompts, what’s your responsibility? Who is authorised to view them? How is personal data obfuscated? When must HR or Legal be involved? And what happens if they aren’t?” he asks.
These aren’t anomalies – they’re predictable outcomes of putting conversational AI into the workplace and routing it through enterprise security stacks.
Visibility is therefore imperative; zero trust demands it. Given the choice between line-of-sight on attacks and institutional blindness, the answer is obvious. But visibility means we will see more than we expect, including sensitive human signals, so enterprises need clear frameworks for what happens next.
That includes role-based access to prompt logs, formal escalation paths and a cross-functional team – part Security, part HR, and grounded in ethics – empowered to act when wellbeing intersects with security telemetry.
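What that framework might look like in miniature is sketched below. The roles, the masking rule and the escalation keywords are all assumptions made for the sake of illustration; a real deployment would define them with Security, HR, Legal and ethics input.

import re

# Roles permitted to read prompt logs at all; everyone else sees nothing.
AUTHORISED_ROLES = {"soc_analyst", "hr_case_manager"}

# Crude, illustrative masking: strip email addresses before a viewer sees the text.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

# Phrases that route an entry to the cross-functional wellbeing process
# rather than the standard security queue (keywords invented for the example).
ESCALATION_TERMS = ("can't cope", "self-harm", "harassment")

def read_prompt_log(entry: str, viewer_role: str) -> dict:
    """Return a masked view of a prompt plus a routing decision."""
    if viewer_role not in AUTHORISED_ROLES:
        raise PermissionError("role is not authorised to view prompt logs")

    masked = EMAIL_PATTERN.sub("[redacted]", entry)
    needs_escalation = any(term in entry.lower() for term in ESCALATION_TERMS)
    return {
        "text": masked,
        "route": "wellbeing_escalation" if needs_escalation else "security_queue",
    }

print(read_prompt_log("I can't cope; contact me at jo@example.com", "soc_analyst"))

The design choice that matters here is that access control, masking and routing happen before a human ever reads the prompt, so the decision about who sees what is made by policy rather than by whoever happens to be on shift.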
A practical playbook for the C-Suite
Treat AI use as data movement, not magic: Define what can enter AI systems, in what contexts and at what sensitivity. Draw clear red lines – regulated data, trade secrets, confidential financials – and explain the “why,” not just the “no.” A short illustrative sketch of such red lines follows this playbook.
Instrument for visibility before enforcing anything: “You can’t govern what you can’t see,” says Tucker. Make your secure access layer the default route to the internet to reveal which tools are in use and why.
Replace blanket blocks with role- and risk-based controls: Blocking is a reaction; governance is a strategy. Use coaching pages and light isolation to guide behaviour without crushing productivity.
Classify continuously: Maintain a live register of AI tools ranked by business value and risk. Update quarterly – or more frequently for regulated sectors.
Build an escalation framework for sensitive prompts: If you log prompts, decide who can view them, how user data is masked, and when HR or Legal must step in. Train analysts not only on when to look, but on when not to.
Communicate like a product team: Governance fails in silence. Publish FAQs, highlight positive use cases, and make it easy to request new tools. “Shadow AI isn’t rebellion,” says Tucker. “It’s your users telling you what they need.”
Measure value, not just compliance: Track productivity gains alongside risk reduction. Governance that only shows up as friction will be ignored or bypassed. Governance that accelerates safe use will be adopted.
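To ground the first and fourth steps, here is a toy illustration of a red-line check and a live tool register. The patterns, the register entry and the review dates are invented for the example, not recommendations.

import re
from datetime import date

# Illustrative "red line" patterns for content that must never enter an AI prompt.
RED_LINES = {
    "payment_card": r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b",
    "trade_secret": r"\bproject\s+aurora\b",   # invented internal codename
}

# A live register entry: business value and risk drive the review cadence.
AI_TOOL_REGISTER = [
    {"name": "approved-assistant", "value": "high", "risk": "low",
     "next_review": date(2025, 9, 30)},
]

def red_line_hits(prompt: str) -> list:
    """Return the names of any red-line categories found in a prompt."""
    return [name for name, pattern in RED_LINES.items()
            if re.search(pattern, prompt, re.IGNORECASE)]

def reviews_overdue(today: date) -> list:
    """List register entries whose scheduled review has lapsed."""
    return [t["name"] for t in AI_TOOL_REGISTER if t["next_review"] < today]

print(red_line_hits("Summarise the Project Aurora pricing deck"))   # ['trade_secret']
print(reviews_overdue(date(2025, 10, 15)))                          # ['approved-assistant']

However the rules are expressed in practice, the principle is the same: red lines are written down and testable, and the register carries a review date so classification stays continuous rather than a one-off exercise.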
Governance as a strategic enabler
Handled well, AI governance becomes a competitive advantage: it lets organisations say yes, safely and quickly. The payoff isn’t just fewer leaks or regulatory headaches; it’s a more adaptive, transparent data-security model aligned with where the business is headed.
It’s not simply a case of allowing or banning AI. Instead, treat it as a continual pattern of data exchange that deserves the same rigour you apply to finance or the supply chain. Zero trust supplies the operating rhythm: visibility, enforcement and continuous classification. And when new ethical and legal questions appear – as they will – you’ll respond with policy, not panic.
To find out more, please visit www.zscaler.com/