![[ed] Sr Illo P12](https://assets.raconteur.net/uploads/2025/10/ED_SR_ILLO_p12-900x506.jpg)
Only a small fraction of organisations are capturing value from AI; most are not. Many of us will have seen the now-famous MIT report exposing the “gap” between AI success and failure, which claims that 95% of corporate AI initiatives show zero return. Calling it a gap might be too kind; it’s starting to look more like a cavern.
And the MIT findings aren’t an outlier – the evidence is stacking up. A report by ServiceNow, a software company, revealed the average enterprise AI maturity score dropped nine points from 44 to 35 over the past 12 months, suggesting many are struggling to keep up with the speed of innovation.
Enterprises have been promised a lot from AI and its revolutionary, transformative and game-changing potential. And AI can be all of those things – there’s a reason why tens of billions have been poured into enterprise generative AI investment.
But that potential will only ever be reached if organisations can move beyond the cycle of pilots and into strategic enterprise AI implementation programmes with measurable value. And getting there starts with a sharp focus on AI maturity.
As we move into 2026, several defining factors will separate the AI leaders – the few poised to capture and benefit from real value – from the laggards – the majority who are pumping in significant investment with little to show in return.
Approaching AI in five distinct ‘lanes’
Too often, businesses approach AI as if it’s a single, all-in-one solution that can serve the entire enterprise. In reality, AI arrives in five distinct ‘lanes’, each demanding different operating models and tooling:
- Productivity tools
- Enterprise functions, such as finance and HR
- IT operations
- Engineering productivity
- AI-driven innovation
The leaders in AI not only understand these distinctions, but know that the only way to scale across these five lanes is to take a holistic view through proper enterprise architecture – governing, integrating and funding them coherently, rather than treating them as a monolith.
Using data as the fuel for scale
Enterprise-wide impact often stalls because the data required to power and scale AI simply isn’t available in the right quality, timeliness or accessibility. Those taking advantage of what AI has to offer are treating data as a critical fuel to their operation – standardised, governed, contract-backed and available through reliable pipelines.
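One lightweight way to make data “contract-backed” in practice is to validate every record against an explicit schema before it enters an AI pipeline. The sketch below illustrates the idea; the field names, types and 24-hour freshness rule are purely illustrative assumptions, not part of any standard.

```python
# Minimal sketch of a data-contract check for records feeding an AI pipeline.
# Field names, types and the staleness window are illustrative assumptions.
from datetime import datetime, timezone, timedelta

CONTRACT = {
    "customer_id": str,
    "updated_at": str,   # ISO 8601 timestamp
    "text": str,
}
MAX_STALENESS = timedelta(hours=24)

def validate(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the record passes."""
    errors = [f"missing or wrong type: {field}"
              for field, typ in CONTRACT.items()
              if not isinstance(record.get(field), typ)]
    if not errors:
        age = datetime.now(timezone.utc) - datetime.fromisoformat(record["updated_at"])
        if age > MAX_STALENESS:
            errors.append("stale: updated_at older than 24h")
    return errors
```

Rejecting (or quarantining) records that fail such a check is what turns “governed, contract-backed data” from a slogan into an enforceable pipeline property.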
Building governance in at every level
The EU AI Act is no longer a distant concept – it’s very real, very enforceable and, now, time-bound. General-purpose model duties started in 2025, with most requirements coming into force by August 2026. The AI leaders aren’t waiting to implement; they’ve already built governance, documentation, and model control to meet these deadlines.
They’re also anchoring their governance frameworks to ISO/IEC 42001, the world’s first international standard for an Artificial Intelligence Management System (AIMS). It provides an auditable framework for responsible AI operations and saves organisations from reinventing the wheel.
Meanwhile, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) – alongside its GenAI profile – offers practical guidance on risk controls, testing and red-teaming approaches that forward-thinking CISOs already know well.
The message here is a simple one: laggards don’t just risk falling short on impact, they risk falling short of the law.
Integrating security and privacy as default
Leaders bake security into every stage of the AI lifecycle. They understand that it’s a non-negotiable. Laggards, on the other hand, treat it as an afterthought. This is a dangerous move given the financial, reputational and legal fallout a single breach can cause.
The good news? Most executives say they’re prioritising security in their GenAI budgets. 67% of business leaders surveyed by KPMG said they plan to invest in cybersecurity and data protections for their AI models. I’d urge the 33% who aren’t to rethink that risky bet.
Basing decisions on evidence, not intuition
Like it or not, the unit economics of AI have to add up. The leaders know this – they base their decisions on evidence, not intuition – and treat every AI use case like a product, not a project, with a clear profit and loss (P&L) focus built around three primary levers:
- Cost-to-serve – tokens, inference, retrieval
- Quality – success and containment rates
- Latency – service level objectives (SLOs)
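The cost-to-serve lever can be made concrete with a back-of-envelope calculation: token spend divided by the number of conversations the AI actually resolves (the containment rate). The function and all the numbers below are illustrative assumptions, not real vendor pricing.

```python
# Back-of-envelope unit economics: cost per contained (fully automated) conversation.
# All prices, volumes and rates below are illustrative assumptions, not vendor pricing.
def cost_per_contained(conversations: int,
                       avg_input_tokens: float,
                       avg_output_tokens: float,
                       price_in_per_1k: float,
                       price_out_per_1k: float,
                       containment_rate: float) -> float:
    """Total token spend divided by the number of conversations AI fully resolves."""
    spend = conversations * (avg_input_tokens / 1000 * price_in_per_1k
                             + avg_output_tokens / 1000 * price_out_per_1k)
    return spend / (conversations * containment_rate)

# Example: 10k conversations, 1.5k input / 0.5k output tokens each, 60% containment.
unit_cost = cost_per_contained(10_000, 1500, 500, 0.005, 0.015, 0.60)  # 0.025 per contained chat
```

Tracking this figure per use case, alongside quality and latency targets, is what “treating every AI use case like a product” looks like in a spreadsheet.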
“Demo delight” won’t translate into results. To move the P&L, smart engineering choices have to be made, from prompt and embedding caching to using smaller models for bounded tasks and enforcing strict timeouts with solid fallback strategies.
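As one concrete illustration of “strict timeouts with solid fallback strategies”, the sketch below races a primary model call against a deadline and falls back to a cheaper model when it expires. The `primary` and `fallback` callables are hypothetical placeholders, not a real model API.

```python
# Sketch: strict timeout on a primary model call, falling back to a smaller model.
# `primary` and `fallback` stand in for hypothetical model-call functions.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def answer_with_fallback(prompt, primary, fallback, timeout_s=2.0):
    """Try `primary` within timeout_s seconds; on timeout, answer with `fallback`."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(primary, prompt)
    try:
        return future.result(timeout=timeout_s), "primary"
    except FutureTimeout:
        return fallback(prompt), "fallback"
    finally:
        pool.shutdown(wait=False)  # don't block on the abandoned slow call
```

Returning which route answered also gives the operations team the containment and fallback-rate metrics the P&L levers above depend on.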
Distinguishing between the flashy and the fundamental
Many companies chase the big, flashy AI projects while overlooking smaller, high-impact opportunities. AI leaders can distinguish the two: Small Language Models (SLMs) are often more than enough for bounded tasks and agents – they’re cheaper, faster, easier to deploy privately and are simpler to govern – whereas frontier models should be reserved for complex, open-ended reasoning.
Using the right approach, where it fits best
One of the clearest markers of AI maturity will be how well organisations choose and manage their model strategy. Many businesses are still debating whether to use Retrieval-Augmented Generation (RAG) or fine-tuning – I’m here to remind you that guesswork won’t get you there.
The right choice depends on how quickly your knowledge changes and how much risk you can tolerate:
- Use RAG when information changes frequently or when occasional inaccuracies are acceptable
- Use fine-tuning when the task is stable, accuracy is critical or you need the model to adopt a consistent tone or reasoning style
The leaders in 2026 won’t be choosing sides. They’ll use both approaches – applying each where it fits best and sometimes combining them as part of a single strategy: fine-tuning for domain-specific conversations and using RAG for information retrieval.
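The rule of thumb above can even be written down as a tiny decision helper. The inputs and labels below are illustrative assumptions that mirror the heuristics in the text, not an industry framework.

```python
# Sketch of the RAG vs fine-tuning rule of thumb described above.
# Inputs and thresholds are illustrative assumptions, not an industry framework.
def choose_approach(knowledge_changes_often: bool,
                    accuracy_critical: bool,
                    needs_consistent_tone: bool) -> str:
    """Return 'rag', 'fine-tune' or 'both' per the heuristics in the text."""
    wants_rag = knowledge_changes_often
    wants_ft = accuracy_critical or needs_consistent_tone
    if wants_rag and wants_ft:
        return "both"        # e.g. fine-tune for tone, RAG for fresh facts
    if wants_ft:
        return "fine-tune"
    return "rag"             # default to the cheaper, more flexible option
```

The point is less the code than the discipline: making the trade-off explicit and revisiting it per use case, rather than picking a side once for the whole enterprise.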
At the end of the day, the real differentiator isn’t which model you choose, but whether your data pipelines and governance can support that flexibility.
Integrating people into the process
AI-native organisations – where AI is seamlessly integrated at every level – make people a core part of the process, turning user feedback into weekly product improvements. Architecturally, they are vendor-agnostic and composable, enabling the best model – proprietary, open, small or retrieval-heavy – to fit each task.
Leaders know that domain squads embed AI into the workflow itself, not alongside it. They establish an AI product council to own the backlog of unit metrics for each lane and an AI operations team to manage evaluations, rollouts, guardrails, drift and budgets.
Laggards, by contrast, lack this type of structured ownership and coordination.
Training employees, at scale
One of the clearest distinctions between those running pilot theatre and those driving real impact is how much they’re willing to invest in training. Those getting on the front foot with AI are the ones upskilling at scale – we’re talking more than half of employees with role-specific AI skills – rather than relying on a handful of “AI champions” to carry the load.
Are you an AI leader or a laggard? 5 questions to test your maturity
Organisations wanting to test their AI maturity ahead of the new year should ask themselves five simple questions:
1. Do you have cost, quality and latency targets per use case and are you meeting them consistently?
2. Can you roll back any model, prompt or retrieval change in one click?
3. Are at least 30% of employees using role-specific AI workflows you can measure?
4. Can you switch models without reconfiguring pipelines?
5. Are you mapping controls to ISO 42001/NIST and tracking EU AI Act deadlines?
If your answer isn’t “yes” to at least four of these questions, you’re falling short of being truly “AI-native.”
In 2026, AI success will not be determined by hype or scale. Instead, it will be defined by evidence, governance, unit economics and organisational design. After all, AI without consistent principles and implementation is just experimentation – and experimentation alone will leave you with nothing more than a pile of pilots.
Hesitation comes at a high price – don’t risk being left in the past.
Daniel Stangu is the senior vice-president and head of the digital solutions office, Intellias, a software engineering and digital consulting company.