
Across industries, organisations are racing to embed AI into their operations in pursuit of speed, efficiency and competitive advantage. But as AI adoption accelerates, a familiar pattern is re-emerging, where speed begins to outpace governance.
New research from Zscaler, The Ripple Effect: A Hallmark of Resilient Cybersecurity, suggests organisations are once again running ahead of their own control frameworks. While 42% of organisations are testing agentic AI and 34% have already deployed it, half have done so without governance guardrails in place.
That rush to adopt can result in what the research calls the ‘watermelon effect’: slow-moving security frameworks, coupled with reactive investment, make everything look ‘green’ on the surface. Scratch below, however, and things turn red where emerging risks have been overlooked.
The result is a widening gap between business ambition and operational resilience – one that creates new, fast-moving security risks.
A familiar pattern, repeating faster
Many business leaders remember the early days of cloud adoption, when teams moved quickly to capture value while security scrambled to catch up. In many cases, organisations that hesitated or delayed cloud adoption saw business units bypass central IT altogether, creating waves of shadow IT and forcing leadership to react once competitive opportunities were already being missed.
That experience has left what some describe as “cloud PTSD” (post-traumatic stress disorder) – not simply fear of risk, but a determination not to fall behind again. The result can be a strong push from leadership to move quickly on the next technological wave, sometimes before there is a clear business case or governance framework in place.
That urgency is now playing out in AI adoption, where the same dynamic can create new forms of shadow usage – often referred to as “shadow AI” – as teams experiment with tools outside formal oversight.
“There’s a real pull from boards to adopt AI quickly,” says Martyn Ditchburn, CTO in residence at Zscaler. “But AI is not a single thing. Predictive, generative and agentic AI all behave very differently. If you don’t understand which wave you’re in, you can easily believe you’re in control when you’re not.”
Before organisations can govern AI effectively, they first need to understand what types of AI are already in use across the business and which stage of adoption they are actually in.
Understanding how AI risk evolves across these waves is essential to setting the right controls.
Predictive AI, the earliest wave, focuses on analysing large datasets to identify patterns and generate forecasts. From a security perspective, this is largely an extension of existing data protection challenges: controlling access to sensitive information and ensuring it is not misused.
Generative AI raises the stakes further. The rapid commercialisation of public Gen AI tools means employees can access powerful models instantly, often without fully understanding the security implications. As these models ingest and generate vast volumes of content, the risk of sensitive data leakage increases – particularly when proprietary information is entered into public systems. In this context, data loss prevention (DLP) becomes critical, helping ensure regulated or confidential data does not leave the organisation or become embedded in external models.
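Conceptually, a DLP gate sits between employees and public generative AI tools, scanning outbound prompts for regulated patterns before they leave the organisation. A minimal sketch of the idea, in Python – the pattern names and detection rules here are illustrative assumptions, not any vendor's actual engine, which would use far richer techniques such as exact-match dictionaries, document fingerprinting and ML classifiers:

```python
import re

# Illustrative patterns for regulated or secret data (assumptions for
# this sketch, not a production rule set).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

def allow_outbound(prompt: str) -> bool:
    """Block the prompt if it contains anything matching a DLP pattern."""
    return not scan_prompt(prompt)
```

In practice the same check would run inline at the network edge, so that a prompt containing a card number or an API key is stopped before it is ever embedded in an external model.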
Agentic AI, however, changes the game. These systems don’t just analyse or generate data; they act autonomously, connecting to business systems and executing tasks at machine speed. This is where AI delivers real competitive advantage – but also where unmanaged risk escalates quickly.
“Agentic AI is where automation turns into action,” says Ditchburn. “That’s powerful, but it means agents can influence systems that were never designed to interact before. Without guardrails, the blast radius grows very fast.”
From reactive security to proactive resilience
The good news is that securing agentic AI does not require radical new thinking, but it does require earlier, deliberate action.
The first priority is visibility – organisations cannot govern AI they cannot see. The Ripple Effect research shows that visibility remains a major challenge: 60% of IT leaders struggle to trace how and why data moves through their networks, creating blind spots that attackers can exploit. Blocking AI outright only drives shadow usage; enabling sanctioned experimentation creates the transparency needed to understand real risk.
The second is segmentation. Agentic systems should never have unrestricted access to the enterprise. By deliberately defining which agents can interact with which applications, organisations can limit exposure if something goes wrong.
“Many teams jump straight to micro-segmentation and get stuck,” says Ditchburn. “Macro-segmentation alone can dramatically reduce risk. Don’t let perfection be the enemy of progress.”
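The macro-segmentation Ditchburn describes can be thought of as a deny-by-default allow-list: each agent is explicitly mapped to the applications it may reach, and everything else is refused. A minimal sketch in Python – the agent and application names are hypothetical:

```python
# Deny-by-default segmentation policy: an agent may reach only the
# applications explicitly listed for it. Names are illustrative.
SEGMENT_POLICY = {
    "invoice-agent": {"erp", "payments"},
    "support-agent": {"crm", "ticketing"},
}

def is_allowed(agent: str, application: str) -> bool:
    """Permit a connection only if the policy explicitly grants it."""
    return application in SEGMENT_POLICY.get(agent, set())
```

Even a coarse policy like this limits the blast radius: an unknown or compromised agent gets access to nothing, and a known agent can only touch the handful of systems it was designed for.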
The third control is behavioural analytics. If an AI agent suddenly behaves in an unexpected way – accessing new systems, escalating privileges, or moving data differently, for example – that deviation is a warning sign. Security teams already use anomaly detection for users; the same principles apply to non-human actors.
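The same anomaly-detection logic used for human users can be applied to agents: build a baseline of the systems an agent normally touches, then flag any access outside it. A minimal sketch, assuming access events arrive as (agent, system) pairs – real behavioural analytics would also weigh frequency, timing and privilege changes:

```python
def build_baseline(events: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Record which systems each agent has historically accessed."""
    baseline: dict[str, set[str]] = {}
    for agent, system in events:
        baseline.setdefault(agent, set()).add(system)
    return baseline

def flag_anomalies(baseline: dict[str, set[str]],
                   new_events: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Flag any agent touching a system outside its historical baseline."""
    return [(a, s) for a, s in new_events if s not in baseline.get(a, set())]
```

An agent that has only ever touched the ERP and payments systems suddenly reaching an HR database would surface immediately as a deviation worth investigating.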
In essence, the guardrails that have long secured human users can be extended to AI agents. This is an evolution, not a revolution – but leaders must recognise that the scale and speed of agentic systems will demand security controls that operate at hyperscale.
Together, these measures move security from reactive defence to proactive resilience.
Supply chain multiplies risk
Of course, AI risk does not stop at the enterprise boundary. As organisations automate workflows across suppliers, manufacturers and logistics partners, AI-driven processes increasingly span multiple entities.
The Ripple Effect report highlights a critical vulnerability: 68% of organisations now rely more heavily on third parties, yet adoption of third-party risk controls remains below 50%. In an AI-enabled ecosystem, this creates ideal conditions for shadow AI to flourish beyond the organisation’s visibility.
Consider a manufacturer using agentic AI to coordinate on-demand production and delivery. Design data flows from the brand to a factory, then to a logistics provider. The organisation remains accountable for that data – even when it sits outside its walls.
“The business is still the custodian,” says Ditchburn. “Security has to extend across the supply chain, not stop at the perimeter.”
What leaders should do now
For executives, the answer is not to slow AI adoption, but to shape it.
Start by creating safe, sanctioned environments for AI use. Funnel teams toward approved tools to reduce shadow activity and improve visibility. Use real data on AI usage to inform board discussions, rather than relying on assumptions.
Next, agree on clear, principle-based guardrails: which AI tools are acceptable, where data can flow, and which systems agents can access. These decisions are far easier to make before adoption scales.
Finally, treat AI as part of a longer technology curve. Quantum computing is already emerging on the horizon, and business leaders face a familiar choice: apply the lessons of cloud and AI early, or risk being caught off guard once again as adoption accelerates. Organisations that invest early in visibility, segmentation and behavioural insight will be best positioned to adapt as the technology matures.
“AI is an opportunity, not a threat,” maintains Ditchburn. “But resilience doesn’t happen by accident. It has to be designed from the start.”
For more information, please visit www.zscaler.com