
New research has revealed the extent to which the zero trust framework, developed to reduce risk across enterprises, is under pressure as AI adoption outpaces security governance. In 65% of organisations, zero trust controls cannot secure non-human identities (NHIs), including new agentic AI systems.
AI agents offer clear advantages, from generating content and retrieving information, to triggering downstream actions. However, they too often run unsupervised and without guardrails, increasing the risk of data leaks, credential compromise and wider operational disruption. The fact that they are operating with fewer checks than their human colleagues should ring alarm bells. According to Netskope’s AI Risk and Readiness Report 2026, which surveyed 1,253 cybersecurity professionals, 56% of enterprises acknowledge exposure to agentic AI risk. This is largely because AI tools operate autonomously in shadow mode, with organisations often only discovering what an agent has done after the action is complete.
Shadow AI risk grows
The scale of adoption is already significant: some 24% said agents were in limited production within their organisation, 9% had ungoverned agents operating at scale handling core business logic, and 23% suspected there to be shadow agentic AI deployments in operation, unknown to internal IT. In fact, 32% admitted that they have no visibility into agent actions at all.
“Organisations need a better understanding of the underlying technology and greater visibility into what they are giving up when they use agentic AI,” says Netskope’s CISO, James Robinson. “Too many enterprises are relying on legacy security models to secure this new technology.”
The principles of zero trust must evolve to account for non-human identities
AI agents often have broad access across enterprise systems, and almost none of their activity can be meaningfully intercepted. They can also be prompted to perform unintended actions – for example, reacting to malicious prompt injections embedded in open-source software or external data sources. In such cases, an agent may follow instructions that appear legitimate but have been manipulated to trigger harmful behaviour.
Once an agent initiates a harmful action, only 9% of organisations can intervene before it completes, according to Netskope. Of the remainder, 24% can block some actions but not all, 35% only identify them in logs after completion, and 32% have no visibility at all.
This lack of control highlights a fundamental shift. So what is the future of the zero trust framework in a world where 91% of organisations cannot stop an agent before it acts?
Rethinking zero trust
Zero trust was built around a human user – with a device, location, behavioural pattern and risk score. By contrast, an AI agent has a credential, a defined scope and a task. It does not behave like a person, nor does it follow predictable patterns of intent. The principles of zero trust must therefore evolve to account for non-human identities and machine-to-machine interactions, where actions are faster, more opaque and harder to audit.
The Netskope study shows these risks are already materialising. Some 37% of respondents report AI agent-related operational issues in the past year, with 8% resulting in outages or data corruption. These are not theoretical concerns but active failures already affecting production environments.
Simply banning the use of agentic AI is not the answer. It risks driving adoption underground, making it harder to govern and contain when problems arise. As with previous waves of technology entering the workplace, shadow usage is often more dangerous than controlled deployment.
Some 62% of organisations already apply zero trust principles to AI security in some form, largely based on CISA’s five pillars: Identity, Devices, Networks, Applications and Workloads, and Data. While these remain relevant, the Identity pillar is under particular strain as legacy architectures are applied to non-human identities that do not fit traditional models.
The challenge becomes more complex when individuals deploy multiple agents. Should each agent have its own identity, permissions and audit trail, or inherit that of the human directing it – for example, when answering emails, accessing documents or responding to queries? The wrong approach risks either over-privileging agents or losing accountability entirely.
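One way to reason about the first option – giving each agent its own identity – is to sketch what such an identity would carry: a distinct credential, an explicit least-privilege scope, and its own audit trail tied back to an accountable human owner. The following Python sketch is illustrative only; the class, field names and scope strings are assumptions for the example, not any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """A hypothetical non-human identity: its own scope and audit trail,
    linked to the human accountable for the agent's actions."""
    agent_id: str
    owner: str                 # the accountable human, for audit purposes
    scopes: frozenset          # explicit, least-privilege permissions
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Allow only actions inside the agent's scope; log every decision."""
        allowed = action in self.scopes
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.agent_id,
            "owner": self.owner,
            "action": action,
            "allowed": allowed,
        })
        return allowed

# Each agent gets its own identity rather than inheriting its owner's rights:
# the agent can draft mail but cannot send it, even if the owner can.
mail_agent = AgentIdentity("mail-assistant-01", "j.smith",
                           frozenset({"mail:read", "mail:draft"}))
assert mail_agent.authorize("mail:draft") is True
assert mail_agent.authorize("mail:send") is False  # out of scope: blocked and logged
```

Under this model, over-privileging is avoided because the agent's scope is narrower than its owner's, and accountability is preserved because every decision is logged against both the agent and the owning human.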
Defining accountability
Ultimately, says Robinson, enterprises must define how much risk they are willing to tolerate.
This requires employees to understand their responsibilities, and organisations to clearly define acceptable use. It also raises a critical question: how much visibility should an organisation have into its agents in case one goes rogue and causes operational damage?
Robinson says enterprises must strengthen how they monitor AI agents and, as tools mature, develop the ability to intercept actions at the request layer. This means moving beyond traditional perimeter-based controls and towards real-time inspection of what an agent is being asked to do and how it responds. Zero trust access controls must extend to securing non-human identities with the same rigour applied to human users.
Rather than viewing failures as isolated events, organisations should use them to refine controls and improve visibility
“You need to define what anomalous looks like for agent behaviour in your environment, build detection rules for those patterns and require human-in-the-loop approval for high-risk actions such as account creation, permission changes and external data transfers,” he says.
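The pattern Robinson describes – detection rules plus human-in-the-loop approval for high-risk actions – can be sketched as a minimal request-layer gate. The action names, risk list and approval mechanism below are illustrative assumptions for the example, not a description of any specific product.

```python
# High-risk operations that must never run without explicit human sign-off
# (the categories Robinson names: account creation, permission changes,
# external data transfers). The action strings are hypothetical.
HIGH_RISK_ACTIONS = {"account.create", "permission.change", "data.export.external"}

def evaluate_request(action: str, approvals: set) -> str:
    """Decide at the request layer whether an agent action may proceed.

    Returns "allow" or "pending_approval". In a real deployment the
    pending state would page a human approver; here it simply blocks.
    """
    if action in HIGH_RISK_ACTIONS:
        # Human-in-the-loop: proceed only with explicit prior approval.
        return "allow" if action in approvals else "pending_approval"
    # Low-risk actions pass through, but would still be logged for audit.
    return "allow"

# Routine work flows freely; high-risk work stops until a human approves.
assert evaluate_request("mail.draft", set()) == "allow"
assert evaluate_request("account.create", set()) == "pending_approval"
assert evaluate_request("account.create", {"account.create"}) == "allow"
```

The design choice here mirrors the article's point about the request layer: the check happens before the action executes, not in the logs afterwards.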
He also urges security professionals to better understand the evolving threat landscape and to treat incidents as opportunities to strengthen systems. Rather than viewing failures as isolated events, organisations should use them to refine controls, improve visibility and reduce the likelihood of recurrence. As AI agents become embedded across productivity platforms, organisations must not only fix issues but ensure systems improve as a result.
It is also vital to have rigorous incident response protocols in place so that forensic investigations can be carried out effectively.
“Organisations should adopt new security architectures and see security not as a barrier but as a way to enable safe innovation within defined guardrails,” says Robinson. He also recommends running internal “AI prompt-athons” where teams can safely explore how agents behave, test edge cases and better understand potential risks in real-world scenarios.
There is no doubt enterprises are increasing investment in both AI and the systems to secure it – 90% have increased their AI security spending in the past 12 months. But confidence in the technology and understanding of its risks remain relatively low. This gap between adoption and preparedness is where many of the current vulnerabilities are emerging.
Robinson sees a significant need for organisations to bring together technology, people and processes to close the visibility gap and implement effective agentic AI frameworks, systems and guardrails. This includes extending activity-level monitoring, distinguishing between personal and corporate AI accounts and ensuring that machine-to-machine interactions are subject to the same scrutiny as human activity. Every AI deployment carries risk, whether organisations recognise it or not.
To find out how Netskope helps secure AI please visit netskope.com/ai