
If the German Autobahn is the fastest road system in Europe, then AI is the technological equivalent, and we’re all in the fast lane. Experimental pilots are rapidly evolving into production-level deployments, delivering productivity gains and decision-making improvements. But when agents operate across regulated workflows, the benefits of AI go hand-in-hand with new compliance requirements.
Data from Netskope’s AI Risk and Readiness report shows that many firms are behind the curve when it comes to compliance. For example, while 73% of firms are actively deploying AI, only 7% have governance that enforces security and policy in real time.
This 66-point governance deficit suggests that many organisations are building their AI strategies on risky foundations, racing through pilots and attempting to retrofit governance only at the point of wider deployment. Given the pressure to keep pace with the competition, this ‘get ahead first, apply governance later’ attitude is understandable. But it is not sensible.
Even in highly regulated sectors such as finance, healthcare and the public sector, there is a strong desire to move quickly on AI adoption. According to Netskope’s AI index, adoption by both the healthcare and the financial services sectors sits at 83.5%, while the public sector has reached 76% adoption. Organisations operating in these sectors are, however, taking steps to mitigate many of the risks of AI.
“They are not holding back,” says Rich Beckett, product and solutions marketing director at Netskope. “We are seeing highly regulated organisations using a combination of enterprise tools and custom LLMs, hosted internally or in the cloud: models they can train themselves and put guardrails around, so they are trained on data formats that are specific to them, and also give a sense of greater security.”
Evolving regulation
These sectors face increased scrutiny under the EU AI Act’s rules. Designed to ensure the safe and ethical development of AI, the Act adopts a risk-based approach that bans systems with what it deems “unacceptable risks” while enforcing strict obligations regarding data governance, transparency, and human oversight on “high-risk” AI.
Although the European regulatory landscape is still evolving, the proposed (though currently stalled) Digital Omnibus package signals there will soon be greater integration across existing regulatory frameworks. Indeed, the AI Act already forms a layered governance model with the GDPR. Beckett likens it to a three-part Venn diagram: the AI Act on risk classification, GDPR very much on the data side, with robust security controls – including zero trust – sitting in the middle.
Once fully enforced, the financial penalties under the EU AI Act will exceed the formidable fines available under GDPR, reaching up to €35m or 7% of global turnover at the highest level. That should act as a warning to the 94% of organisations that lack full visibility into their AI activity – just 6% can see the full scope of their AI pipeline.
This lack of visibility is in part due to the rapid expansion of AI capabilities, and their deeper and more widespread integration with core systems and workflows. Organisations not only need to keep tabs on dedicated tools such as Copilot, ChatGPT or Claude, but increasingly on AI agents, automations and integrations spanning multiple SaaS tools and functions.
“Essentially every SaaS app is now an AI app,” Beckett explains. “You’ve got business teams vibe coding AI agents, and MCP (Model Context Protocol) connecting them to organisational systems and data. So it’s not just the applications themselves, it’s these new forms of data traffic that you need to be able to see and secure.”
Static controls are not suitable for this new environment. As AI systems evolve, integrate with new data sources, and operate at machine speed, governance needs to shift to continuous monitoring and risk-based enforcement that responds dynamically to new threats and behaviours. In other words, organisations need real-time oversight of the entire AI pipeline, coupled with automated controls that allow for action the moment risks are spotted.
Beckett identifies several issues that pose a particular security threat, including authority drift, where agent permissions expand gradually beyond their original approval. There’s also the problem of misconfigured authority, whereby over-permissioned ‘experimental’ agent settings quietly become the default in production – a setup that can expose sensitive information when agents stray beyond their intended boundaries.
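As a rough illustration of how authority drift might be caught, the sketch below diffs an agent’s current permissions against the scope it was originally approved with. The agent names, permission strings and inventory structure are hypothetical, not a description of any particular product:

```python
# Hypothetical sketch: flag "authority drift" by diffing an agent's current
# permissions against its originally approved scope. All names are illustrative.

APPROVED_SCOPES = {
    "invoice-summariser": {"read:invoices"},
    "support-triage-bot": {"read:tickets", "write:ticket-labels"},
}

def detect_drift(agent: str, current_permissions: set[str]) -> set[str]:
    """Return any permissions the agent holds beyond its approved scope."""
    approved = APPROVED_SCOPES.get(agent, set())
    return current_permissions - approved

# Example: an 'experimental' permission quietly left enabled in production.
drift = detect_drift("invoice-summariser", {"read:invoices", "read:payroll"})
if drift:
    print(f"ALERT: unapproved permissions detected: {sorted(drift)}")
```

Run continuously against a live inventory of agents, a check like this turns drift from a post-incident discovery into a real-time policy event.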
“Agents can reason, they can adapt, they can chain together tools and data in ways that we just can’t do ourselves, and which go well beyond what automation could do before,” Beckett explains.
Netskope’s research shows that 91% of organisations only discover what an agent did after it has already executed the action. In addition, 88% of organisations are unable to fully distinguish between personal and corporate AI usage. “Most organisations’ existing security stack can’t tell the difference between an enterprise instance, like a GPT Enterprise plan, versus ChatGPT Free,” Beckett says. “They’re forced into this situation where they can either allow it all, or block everything, and in a lot of cases people block everything.”
Ensuring compliance
To safely operationalise AI in regulated workflows, organisations need continuous oversight across three layers: visibility into every application, MCP server, and agent in the environment; a governance layer that risk-scores AI tools and stress-tests custom models against jailbreaking and prompt injection; and runtime controls that prevent data leakage in real time.
This last piece demands a semantic-aware DLP solution, i.e. one that understands the meaning of data rather than just pattern-matching against it. “An AI or agent might summarise a sensitive document or project and anonymise it, so it could slip through traditional regex filters,” Beckett notes. Today, however, only 8% of organisations have controls in place that evaluate content semantically, regardless of how it has been rewritten.
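To make the contrast concrete, the sketch below compares a pattern-based check with an embedding-based one, using the open source sentence-transformers library. The model choice, sample strings and similarity threshold are assumptions for illustration only, not a description of Netskope’s implementation:

```python
# Illustrative contrast between pattern-based and semantic DLP checks.
# Model, sample text and threshold are assumptions, not a vendor's product.
import re
from sentence_transformers import SentenceTransformer, util

SENSITIVE_REFERENCE = "Q3 acquisition target: Acme Corp, offer price 12 EUR/share"
outbound = "Summary: we plan to buy a mid-cap firm next quarter at a premium."

# 1) Regex DLP looks for literal markers, so the paraphrased summary
#    produced by an agent slips straight through.
pattern = re.compile(r"Acme Corp|\d+\s*EUR/share")
print("regex match:", bool(pattern.search(outbound)))  # False

# 2) A semantic check compares meaning via embeddings rather than strings.
model = SentenceTransformer("all-MiniLM-L6-v2")
ref_vec, out_vec = model.encode([SENSITIVE_REFERENCE, outbound])
similarity = util.cos_sim(ref_vec, out_vec).item()
print("semantic similarity:", round(similarity, 2))
if similarity > 0.5:  # threshold chosen purely for illustration
    print("ALERT: content semantically resembles sensitive material")
```

The point is not the specific model or threshold, but the principle: a control that reasons about meaning can flag a rewritten or anonymised leak that string matching never sees.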
Strong governance also requires clear accountability for AI. While the CISO/CIO partnership remains the most common day-to-day dynamic, dedicated Chief AI Officer roles are emerging in larger enterprises. These typically span responsibilities such as compliance, vendor risk, and cross-functional oversight.
AI leaders are also moving away from static policy documents and toward contextual coaching – automated messages delivered at the moment a user triggers a policy violation. “If you just say no to someone, they’ll probably find a way to do it anyway,” says Beckett. “Right now everyone is still in learning mode with AI, so if you can say no and give them a coaching message – some education around why this isn’t the right thing to do and the right alternative – and it lives in the moment rather than being a PDF that lands every six months, then you’ll see those good behaviours being built from the outset.”
Ultimately, proactive governance is not a barrier to innovation. By moving beyond reactive controls and embedding visibility and risk-based enforcement into the fabric of the AI pipeline, European organisations can say “yes” to AI usage while also being confident that their data, their reputation, and their regulatory compliance won’t be compromised.
To find out how Netskope helps secure AI, please visit netskope.com/ai