
As generative AI (GenAI) continues to reshape the enterprise landscape, businesses face a complex but critical balancing act: how to drive innovation and efficiency without opening the floodgates to risk. AI governance is not simply a ‘nice-to-have’; it’s the foundation of responsible, scalable and safe adoption.
That means the narrative around AI governance must evolve beyond compliance and control, says Justin Brooks, Vice President, United Kingdom and Ireland at Zscaler.
“AI governance operates as the foundation for responsible innovation. It is not about holding back progress or creating barriers. It is about empowering organisations to innovate safely and with confidence. When implemented effectively, AI governance ensures enterprises can realise transformative business outcomes while managing risks thoughtfully.”
From restriction to enablement
Business leaders are placing increasing pressure on CIOs and CISOs to leverage AI internally and externally. From product development to operational automation, the promise of AI lies in increased efficiency, cost reduction and speed to market. Yet, this must be tempered with protection.
“CEOs are saying, ‘We need to use AI to be more efficient.’ But they also want to avoid the kind of breaches or regulatory surprises that damage reputation and shareholder value,” says Brooks.
This dual mandate makes AI governance not a barrier but an enabler: a structured way to implement AI with less exposure to risk. “Responsible AI begins with visibility,” says Brooks. “Without knowing where models are used and how data flows through them, there can’t be reasonable, responsible innovation.”
The shadow AI threat
One of the biggest risks today is shadow AI: employees using unapproved AI tools outside official governance frameworks. This introduces not only the risk of data leakage and IP loss but also uncontrolled exposure to third-party tools that may not meet enterprise security standards.
“When proprietary data or code is uploaded to external tools, organisations lose visibility and control. It is not just tools like ChatGPT that need attention. Other AI models may introduce risks such as foreign surveillance or regulatory exposure that extend beyond initial assumptions. If businesses do not provide secure, governed frameworks for using AI, employees will turn to alternatives that leave the company vulnerable to significant risks.”
Put simply, if IT leaders don’t provide secure, governed ways for employees to use AI, those same employees will find their own, less safe workarounds.
Building a practical AI governance framework
For many organisations, especially those early in their journey, building AI governance can feel overwhelming. The key is starting small but strategically, with a focus on people, process and technology.
Zscaler recommends these practical steps:
- Inventory AI use and third-party integrations: Map all internal and external AI tools currently in use. This includes sanctioned platforms as well as shadow AI that may have crept into daily workflows.
- Define data classification and handling policies: Determine what kinds of data (IP, customer information, source code) can be used in AI workflows – and under what conditions (a minimal policy-check sketch follows this list).
- Apply zero trust principles: Limit access to AI tools, APIs and datasets only to authorised users and systems. This aligns AI governance with broader cybersecurity best practices.
- Continuously monitor AI activity: Ongoing inventory and detection of AI model usage are crucial to identify rogue tools and misuse.
- Develop incident response plans specific to AI: Prepare for model misuse, toxic data combinations, or adversarial attacks with AI-specific mitigation protocols.
- Invest in employee education: Governance must be understood, not just enforced. Training should explain why certain practices are in place, not just what the rules are.
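To make the classification and zero trust steps concrete, here is a minimal sketch of the kind of pre-flight check an enterprise gateway might run before a prompt leaves the network. The tool names, roles and patterns are illustrative assumptions for this example, not a description of Zscaler’s products.

```python
import re

# Hypothetical policy gate: a tool allowlist plus crude data classification.
# All names and patterns are placeholders; a real deployment would use a
# dedicated DLP engine and an identity provider rather than hard-coded rules.
APPROVED_TOOLS = {"chatgpt-enterprise", "internal-copilot"}  # sanctioned AI tools

SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"\b(def|class|import)\b|#include"),
    "customer_pii": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # e-mail addresses
}

def check_prompt(tool: str, user_role: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an outbound AI request."""
    if tool not in APPROVED_TOOLS:
        return False, f"unapproved tool: {tool}"  # shadow AI blocked at source
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt) and user_role != "engineering":
            return False, f"{label} not permitted for role: {user_role}"
    return True, "ok"

# Example: a sales user pasting a customer e-mail address is blocked.
print(check_prompt("chatgpt-enterprise", "sales", "Summarise q3@corp.com's complaint"))
# -> (False, 'customer_pii not permitted for role: sales')
```

In practice, the same gate would also log every decision, feeding the continuous monitoring and incident response steps above.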
Cross-functional ownership is key
Effective AI governance isn’t just the domain of the CIO or CISO. It requires collaboration across legal, compliance, data and business units.
“We need alignment across all parts of the business,” says Brooks. “Everyone has a role to play in governing AI, because its impact spans every layer of the enterprise.”
This cross-functional approach ensures that AI is not only safe but also aligned with corporate goals and ethical standards.
Security as a business value enabler
A common boardroom disconnect is that security is seen as a cost centre: something that delivers nothing except peace of mind. But that view is increasingly outdated, particularly when it comes to AI. “A humorous way to look at it is that the sign that a security company is doing its job is that nothing happens,” Brooks jokes.
“But it’s more than that. Companies aren’t just avoiding risk, they’re accelerating transformation, moving to the cloud, and opening up new ways of working, with security built in from the start.”
When communicating with boards, IT leaders must reframe governance as a business enabler by focusing on three outcomes:
- Revenue growth: Can governance support expansion into new markets or faster product releases?
- Cost reduction: Can AI be deployed securely to reduce operational expenses or improve efficiency?
- Risk mitigation: Will the business avoid fines, data breaches, or reputational damage by embedding AI responsibly?
This framing aligns governance with shareholder interests, positioning it as a strategic lever, not an overhead.
From AI security to AI resilience
The key, says Brooks, is shifting from a purely defensive posture – ramping up AI security – to building AI resilience.
“Building AI resilience means going beyond reactive security. Real-time monitoring, adaptive controls and smart governance across platforms like AWS and Azure enable confident, risk-aware innovation.”
This is especially important as organisations adopt agentic AI — autonomous systems capable of interacting independently within applications and workflows. As these models open files, send data and trigger actions, new governance layers are needed to manage them effectively within the enterprise.
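As a purely illustrative sketch of what one of those governance layers could look like, the snippet below gates every action an agent requests through an allowlist and a human-approval step. The action names and policy are assumptions made for this example, not any vendor’s implementation.

```python
# Hypothetical governance layer for agentic AI: every action an agent requests
# passes through a policy gate before it executes. All names are illustrative.
ALLOWED_ACTIONS = {"read_file", "search_docs"}             # low-risk, auto-approved
NEEDS_APPROVAL = {"send_email", "write_file", "call_api"}  # human-in-the-loop

def gate_action(action: str, target: str, approver=None) -> bool:
    """Allow, escalate or deny an agent's requested action."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in NEEDS_APPROVAL:
        # Escalate to a human reviewer; deny by default if none is wired in.
        return bool(approver and approver(action, target))
    return False  # unrecognised actions are denied (and, in practice, logged)

# Example: an agent asking to e-mail a report is held for human sign-off.
print(gate_action("send_email", "board-report.pdf"))                     # False: no approver
print(gate_action("send_email", "board-report.pdf", lambda a, t: True))  # True: approved
```

The design choice here is deny-by-default: anything the policy does not recognise is blocked, which mirrors the zero trust principles discussed earlier.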
Preparing for the future
With quantum computing on the horizon and GenAI and agentic AI rapidly advancing, today’s governance models must be built to evolve. “We’ve never seen innovation at this speed,” says Brooks. “Companies need to prepare not just for today’s risks but for what’s coming in five years.”
For business leaders, the message is clear: AI governance doesn’t apply the brakes to innovation – it provides the guardrails that make high-speed transformation possible.
“The era of intelligent systems is here,” says Brooks. “And the companies that will succeed are those that can innovate quickly and responsibly. Governance gives you the confidence to move fast without losing control or needing to slow down the adoption of tech.”
To harness GenAI’s full potential, businesses must treat governance not as a constraint but as a catalyst. By embedding visibility, cross-functional collaboration and zero trust principles, organisations can balance innovation with control – enabling AI to drive meaningful outcomes, safely and at scale, in an increasingly fast-moving landscape.
For more information, please visit zscaler.com
