
There’s a problem at the heart of enterprise AI strategies, and it has nothing to do with models. For many organisations, the limiting factor is no longer the sophistication of the technology, but the quality and accessibility of the information it depends on.
“GPT-4, Claude, Gemini are all extraordinarily capable,” says Samantha Wessels, Director of EMEA at Box, an intelligent content management platform. “The bottleneck is content. Specifically, the inability of AI agents to reliably access, understand and act on the unstructured business content that defines how an organisation operates – its contracts, policies, customer records and institutional knowledge, which represents 90% of corporate data.”
Box’s State of AI in the Enterprise report shows that this bottleneck is undermining enterprise AI strategies. The vast majority of organisations – some 94% – now use AI in some form. Leadership belief in the technology is also high: 90% of companies plan to increase AI budgets over the next year, while 60% expect a total AI-driven transformation of their business within just two years. But this surface optimism is often accompanied by stalled pilots and a lack of trust in AI outputs. “A 94% adoption rate with minimal scale-up isn’t an AI problem,” Wessels argues. “It’s a content infrastructure problem.”
Some executives may not be aware of the extent of the problem. Indeed, AI operating without context still seems impressive at first glance. But, like a speech that plays well in the room but falls apart when the key arguments are unpicked later, it lacks the depth needed for operational deployment. “A general-purpose model knows a lot about the world,” says Wessels. “It knows almost nothing about your business – your pricing exceptions, your supplier relationships, your regulatory obligations, the nuanced way your legal team interprets a clause. That institutional knowledge lives in your content. If your AI can’t access it, you’re not getting enterprise-grade intelligence. You’re getting a very expensive search engine.”
In other words, AI’s apparent fluency can mask a more fundamental weakness: without reliable access to company-specific information, it is reasoning from an incomplete picture.
“Compound that with the reality of most content estates: data siloed across SharePoint, email, legacy ECM systems, and a dozen SaaS tools. Duplicate documents with conflicting version histories. Governance policies that exist on paper but aren’t enforced. Feed that chaos into an AI and you don’t get smart answers – you get confidently wrong ones. In a regulated industry, that’s not just embarrassing. It’s a liability multiplier,” she adds.
Addressing the legacy ECM issue
Legacy ECM systems are a particularly acute issue for many organisations. Wessels says that retrofitting them with AI is like “putting a jet engine on a horse-drawn carriage” and emphasises that they only cover a “fraction” of the demands of AI-powered operations. “In the AI era, content needs to be active – continuously accessible, contextually enriched, and ready to serve intelligent workflows,” she explains. “Real-time retrieval at scale, dynamic orchestration, fine-grained permissions enforceable at the agent level – none of this was in the design brief for systems built in the 2000s.”
Wessels believes the wave of bolt-on ‘AI features’ being rushed to market by legacy ECM providers is “the sound of an industry scrambling to stay relevant” and that “the honest question leaders need to ask is: are we going to keep investing in infrastructure designed for a world that no longer exists?”
When AI lacks access to the contextual content it needs to perform well, the chance of ‘confidently wrong’ results increases. “There’s a principle worth stating plainly: AI doesn’t scale where trust doesn’t scale,” says Wessels. “It manifests in different ways, such as discoverability, where AI can’t find the right content because it’s buried in an ungoverned file structure; permissions, where AI either can’t access what it needs or, more dangerously, can access things it shouldn’t; and trust, where employees stop believing the outputs because they’ve seen it hallucinate too many times.”
Achieving an AI-ready content layer
An AI-ready content layer enables both people and AI systems to access, interpret and act on information at speed – and, crucially, with trust. It should feature unified access (the ability to reach content wherever it lives without manual consolidation); enriched metadata, so that AI has the context and structure needed to reason intelligently (something many organisations lack); and clear provenance, so that users can trace outputs back to their source. Also essential are fine-grained permissions, so that AI accesses only content authorised for a particular user, and robust governance, to ensure content is accurate, current and compliant.
Once this AI-ready content layer is in place, organisations have the secure, well-governed single source of truth needed to safely deploy AI at scale. Box includes a range of features designed to help organisations reach this state, including tools like Box Extract, which can pull structured data from unstructured content so that AI can reason from it, drive decisions and trigger workflows. Hubs, meanwhile, act as a context container for AI agents. “The difference between giving AI a filing cabinet and giving it a well-organised, permission-aware briefing pack is the difference between an AI that retrieves and one that reasons,” Wessels explains.
These features empower true workflow automation, with AI routing a contract for approval, flagging a compliance issue, or triggering a downstream process. “That’s the shift from experimentation to operational deployment, and it’s the shift that delivers the ROI executives are looking for,” says Wessels.
Automated workflows that deliver results
Box’s research shows that productivity gains of up to 37% occur when AI moves from simply advising to automatically triggering workflows. Contract lifecycle management provides one of the clearest examples of how this works in practice. Reviewing a complex supplier contract traditionally involves a lawyer spending hours reading and cross-referencing terms.
“With AI operating on a well-governed content layer, that review happens in minutes,” says Wessels. “Extract pulls the structured data. Hubs provide the relevant precedent and policy context. Workflow Automation routes flagged issues for human judgement. The lawyer’s role shifts from extraction to judgement – that’s a structural change in how legal work gets done.”
In financial services, where resolving a complex customer query typically involves pulling account history, checking product terms, and verifying compliance requirements across multiple systems, AI agents with the right content can automate the retrieval, synthesis and drafting steps, compressing resolution times dramatically. The compounding effect of all these improvements is huge. “That’s where the 37% comes from,” Wessels explains, “not one big win, but hundreds of smaller ones built on a solid content foundation.”
Ultimately, AI success in 2026 isn’t about chasing different models, adopting more tools or announcing more investment, but rather about having truly usable institutional knowledge. Wessels therefore advises leaders to stop buying more AI tools and start fixing their content. “The executives who will look back in three years and say ‘we got this right’ are the ones who resisted the pressure to keep layering AI tools on top of a broken content foundation, and instead made the harder, less glamorous investment in getting that foundation right.”
Without that groundwork, enterprise AI risks remaining a costly proof of concept: impressive in demos, unreliable in the flow of work. With it, organisations can give AI the context, controls and confidence it needs to move from isolated use cases to meaningful business transformation.