From AI curiosity to AI accountability: why scale is an operating model problem
Despite heavy investment in AI pilots, most projects fail to scale. This is forcing organisations to rethink their operating models, break down silos, and embed governance, integration and measurable outcomes to turn early experimentation into lasting business value.
Organisations are heavily investing in AI pilot projects. Scaling those projects is proving more difficult.
An MIT report last year found that 95% of generative AI pilots fail to deliver measurable returns. To prevent projects from stalling at the starting line, organisations need to sharpen their AI operating model. This means ensuring projects have measurable outcomes, governed decisions and repeatable delivery.
One of the challenges businesses often face is that experimentation efforts typically focus on discrete use cases trialled by a few individuals or by a specific business department. This frequently results in a sprawl of siloed tools that makes it difficult for organisations to scale effectively.
“As organisations mature, they should be thinking much more cross-functionally about what AI can do,” says Linh Lam, CIO at Jamf, a device security platform. “If you continue to think in a siloed manner, that’s where you start to see the adoption drop off or you start to see the value really diminish over time. And that’s why it often gets stopped in its tracks during the pilot and experimentation phase.”
Another reason organisations struggle to move beyond pilot projects is that those pilots often involve systems that are not heavily integrated, and are therefore easier to manage.
“You probably get quite good results in a pilot, but when you want to scale, you actually then have to go and work on some of those legacy or older systems, and that brings with it a whole set of different challenges,” says Rich Davies, UK managing partner at Netcompany, an IT consultancy.
Integrating the technology more broadly often reveals weaknesses in legacy systems and raises issues that were not apparent during the pilot phase.
“What we see is that a lot of people can claim victory during a pilot, because you’re working on systems that aren’t right at the core,” says Davies. “If you want to scale, you have to get stuck into those systems that are older, and in many cases, are built on different tech. A lot of pilots stall because they’re trying to connect with systems that were never designed for it.”
To stop AI projects failing at the pilot stage and to advance to an environment of consistent platforms, governed automation and sovereign-ready choice, businesses need to adopt a framework that can support AI rollout at scale.
The first step in this process is to determine what the use cases are and ensure AI is appropriate for those tasks, says Dr Adnan Masood, chief AI architect at UST, a digital transformation services business.
This means ensuring there is a legitimate business case for adopting AI and a demonstrable return on investment.
Having the right governance strategy is also a key element of any plan to scale AI safely.
“This is not about heavy-handed governance that then stalls innovation, but governance that really sets up guardrails for the organisation to move forward,” says Lam. “We review all of the technology that comes in from a security standpoint, from an IT standpoint, and from a data privacy standpoint, just to make sure that it’s safe.”
Lam adds that AI governance is not just about policing; it is also about effective AI enablement across the business.
“Employees may come to us and say, I saw this amazing tool that I’m interested in, so our governance team is also made aware, asking what problems are you trying to solve?” says Lam. “We may actually already have something in our AI toolkit today that can do the same thing.”
As organisations advance their AI programmes, they must identify the relevant KPIs for their business to measure outcomes and ensure their rollout efforts are a success.
“Some of this just comes down to good project management,” says Jon France, chief information security officer at ISC2, a cybersecurity association. “That means to set the parameters and set the goals and to understand what success looks like. So the tighter you can scope a use case, the better you’re going to be at evaluating it.”
Finally, to ensure AI platforms can be rolled out consistently, organisations need to make scale-out processes repeatable so they don’t have to treat each project as an entirely new endeavour.
“It comes down to breaking it into chunks of functionality that you want to deliver, and then you can measure, manage and create a good feedback loop,” says France. “You need to make sure you’ve got the right skill-sets and knowledge in the room. That’s not just security or technology, that’s business as well, and stakeholder management. Good governance is the key thing.”
Scaling AI with governance and control
Moving AI into production is no longer a technical challenge but an operational one. Organisations therefore need a unified platform to build, run and govern AI at scale across hybrid environments, with automation playing a critical role in turning insight into action.
For many organisations, the challenge with AI is no longer building models, it’s running them. The gap between experimentation and production has become the defining problem of enterprise AI. Not because the technology falls short, but because operating AI systems at scale introduces new demands around control, governance and accountability.
In most organisations, AI development has evolved in silos. Different teams use different tools, infrastructure and processes, which leads to fragmented environments and inconsistent controls once systems move into production. The result is duplicated effort, inefficient use of compute and limited visibility into how AI behaves in live environments.
At the same time, the nature of AI is changing, with a clear shift underway from models to systems. Generative AI and agentic approaches are turning AI into something that interacts with data, applications and workflows and, increasingly, takes autonomous actions.
That shift raises practical questions for IT leaders: where models run, how they are managed, how access is controlled and how behaviour is governed over time. These are not model questions; they are operational ones.
Addressing this requires more than adding tools. It requires a consistent operating model for AI that brings together how systems are built, deployed and governed across on-premises, cloud and edge environments.
Red Hat positions its AI portfolio around this need, combining Red Hat AI with Ansible Automation Platform to support how models are deployed, managed and controlled across hybrid environments.
In practice, this means Red Hat acts as a common layer for standardising how models are served, how GPU resources are allocated and utilised, and how access and policies are enforced. It also means managing the full lifecycle of AI systems, from development through to deployment and ongoing operation, within a consistent framework.
“One of the biggest challenges we see is fragmented AI infrastructure, where teams deploy models and consume GPUs in isolation,” says Martin Isaksson, an expert within Red Hat’s AI business unit. “What organisations need instead is a shared, governed platform for AI, where compute, models and access are centrally managed. That’s how you achieve both efficiency and control at scale.”
Fragmentation is one of the main constraints on scaling AI. Without a shared approach, teams optimise locally rather than across the organisation, leading to poor utilisation and inconsistent governance. A unified platform helps address this by creating common standards for how AI workloads are run and controlled.
As AI systems become more capable, governance becomes the factor that determines whether they can scale. Without clear controls over how models are deployed, accessed and behave in production, organisations struggle to move beyond isolated use cases.
The rise of agentic AI makes this more urgent. Systems are no longer just generating outputs; they are taking actions. In production environments, that introduces new risks around unintended behaviour, compliance exposure and ungoverned change.
Isaksson says guardrails are not just about the data a model sees, but about how AI behaves in production. As systems become more agentic, organisations need runtime controls to prevent issues like prompt injection, data leakage or unintended actions. That requires enforceable policies, not just guidelines.
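Such runtime controls can be expressed as enforceable code rather than written guidance. The Python sketch below is purely illustrative, not a Red Hat implementation: it assumes a hypothetical agent that proposes actions, and shows how an allowlist plus simple content checks might gate those actions before they execute.

```python
import re
from dataclasses import dataclass

# Hypothetical allowlist of tools an agent may invoke without review.
ALLOWED_TOOLS = {"search_docs", "create_ticket"}

# Crude illustrative patterns for prompt injection and data leakage.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"system prompt"]
PII_PATTERNS = [r"\b\d{16}\b"]  # e.g. a bare 16-digit card number

@dataclass
class AgentAction:
    tool: str
    payload: str

def enforce_guardrails(action: AgentAction) -> str:
    """Return 'allow', 'block' or 'escalate' for a proposed agent action."""
    if action.tool not in ALLOWED_TOOLS:
        return "escalate"  # route to a human, per the governance policy
    text = action.payload.lower()
    if any(re.search(p, text) for p in INJECTION_PATTERNS):
        return "block"  # likely prompt injection
    if any(re.search(p, action.payload) for p in PII_PATTERNS):
        return "block"  # potential data leakage
    return "allow"

print(enforce_guardrails(AgentAction("create_ticket", "Summarise the outage")))  # allow
print(enforce_guardrails(AgentAction("delete_records", "purge the table")))      # escalate
```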
These challenges are particularly acute in regulated sectors and in environments handling sensitive data. As interest in sovereign AI and sovereign cloud grows, organisations must ensure that data, models and AI-driven processes remain secure, compliant and under full control.
Governance therefore spans multiple layers: model lifecycle management, access control, policy enforcement, monitoring and auditability. Without this, scaling AI becomes difficult to manage and harder to trust.
AI can generate insight or trigger decisions, but those decisions still need to be executed across systems. This is where automation plays a role by providing a controlled way to apply changes consistently across environments.
Observability and AIOps platforms have become increasingly sophisticated at detecting anomalies and predicting failures, but detection without action delivers diminishing returns.
The persistent challenge for most organisations isn’t visibility. It’s closing the gap between identifying a problem and resolving it at speed and scale.
When observability is coupled with a governed automation platform, organisations can translate AI-driven insights directly into automated remediation, reducing mean time to resolution, enforcing consistency across environments, and lowering operational risk without adding headcount.
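As a minimal sketch of that pattern, the hypothetical Python below maps an observability alert to a pre-approved remediation workflow and escalates anything outside the approved catalogue; the alert fields and runbook names are illustrative assumptions, not a specific product's API.

```python
# Illustrative detection-to-remediation loop: an observability alert is
# matched to a pre-approved, governed runbook, or escalated to a human.
APPROVED_RUNBOOKS = {
    "disk_pressure": "restart-log-rotation",
    "memory_leak": "rolling-restart-service",
}

def run_runbook(name: str, host: str) -> None:
    # Placeholder: in practice this would call an automation platform's API.
    print(f"executing approved runbook '{name}' on {host}")

def remediate(alert: dict) -> str:
    """Apply an approved runbook if one exists, otherwise escalate."""
    runbook = APPROVED_RUNBOOKS.get(alert["kind"])
    if runbook is None:
        return f"escalated: no approved runbook for '{alert['kind']}'"
    run_runbook(runbook, alert["host"])
    return f"remediated '{alert['kind']}' on {alert['host']} via {runbook}"

print(remediate({"kind": "disk_pressure", "host": "web-01"}))
print(remediate({"kind": "cert_expiry", "host": "web-02"}))
```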
Ansible Automation Platform provides a shared execution layer. Teams can reuse approved workflows, apply controls consistently and maintain visibility over automated actions across environments.
“The main challenges around AI adoption are less about the technology itself and more about execution, particularly how to move from pilots to scalable, production-ready operations,” says Belkacem Moussouni, head of automation within technology sales at Red Hat. “Many organisations struggle with fragmented processes, unclear ownership and a lack of standardised operating models. Consistency comes when you introduce a coordinated automation approach that connects teams, enforces governance and enables AI to scale reliably across the enterprise.”
Automation on its own does not resolve the broader challenges of AI in production. It works best when it is part of a wider approach that standardises how systems are built, governed and integrated.
Enterprises also need clear policies embedded into their platforms. With Ansible Automation Platform, organisations can define role-based access, enforce governance rules and integrate with existing security and compliance systems, ensuring automation is both scalable and auditable.
Hand in hand with this must go a strong focus on enforceable governance, especially as AI models can be compromised by prompt injection and other security risks.
“Without enforceable governance, automation can quickly become fragmented,” says Moussouni. “Different teams implement their own approaches, creating inconsistencies, increasing operational risk and preventing organisations from scaling automation and AI effectively.”
To support this shift, many organisations are formalising how AI and automation are adopted, often through centres of excellence. These structures help define standards, coordinate teams and align initiatives with business priorities.
“Red Hat helps organisations to address the skills gap thanks to a unified automation platform approach,” says Moussouni. “Different teams can consume various services through the platform, including AI capabilities, without requiring deep expertise in every domain.”
The challenge is no longer whether organisations can build AI. It is whether they can run it — consistently, securely and at scale.
“There’s a growing gap between experimenting with AI and running it safely in production, especially with agentic systems. The challenge is not just building models, but governing how they operate, access data and take actions over time,” Isaksson says.
Bridging that gap requires treating AI as an operational system, not a set of experiments, and putting the structures in place to control how it behaves over time.
The AI accountability dashboard
Red Hat data suggests AI ambitions are outpacing execution, as leaders confront barriers and shift toward hybrid environments, cost control and stronger governance.
Sovereign-ready by design: the governance patterns that make AI scalable in 2026
As AI adoption accelerates, organisations are prioritising sovereignty and governance to maintain control over data, meet regulatory demands and manage geopolitical risk, ensuring AI can be deployed safely and at scale without compromising security or compliance.
With AI increasingly embedded into how businesses operate, the concept of sovereignty by design is becoming an ever more important element of AI model choice, particularly for organisations in regulated industries or those with a global footprint.
“This is all about how much control you should have versus how much control the vendors have, and what they do with your data as well,” says Linh Lam, CIO at Jamf, a device security platform. “We’re starting to see more organisations take a look at how they run AI locally so they can have full control over the models that they put in place.”
Tighter regulations around data residency in certain jurisdictions are also forcing organisations to think more carefully about AI sovereignty and how they manage their data.
“If you’re working in Europe, the EU AI Act gets applied,” says Adnan Masood, chief AI architect at UST, a digital transformation services business. “That means GDPR also gets applied. And when GDPR gets applied, that means that your provenance and your geographical location for that has to be located in Europe. You can’t just transfer the customer data over to the United States or other countries. It has to comply with local laws by design.”
Organisations also need to think about the origin of the AI model and whether there are any potential geopolitical sensitivities involved in adopting a tool that was developed in a different country.
“That’s a decision businesses are going to have to make on their own, but they should be aware of the provenance of where that model and that supply chain comes from,” says Jon France, chief information security officer at ISC2, a cybersecurity association. “Good third-party risk management abounds here.”
To that end, there are several practical governance steps organisations need to take when scaling out sovereign-ready AI. The first is to create a use-case registry and assign appropriate risk tiers to those tasks. Organisations also need to put in place policies and guardrails that outline allowable use and escalation routes for policy violations. And there also need to be clear guidelines around monitoring, evaluation and auditing to track decision-making.
“These are all the model governance elements businesses need to think about,” says Masood.
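A use-case registry with risk tiers can start out very simply. The Python sketch below is a hypothetical illustration of the idea; the fields, tier thresholds and example entries are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    owner: str
    handles_pii: bool
    autonomy: str  # "assistive", "supervised" or "autonomous"

def risk_tier(uc: AIUseCase) -> str:
    """Assign an illustrative risk tier that drives the level of oversight."""
    if uc.handles_pii or uc.autonomy == "autonomous":
        return "high"    # audit trail and human escalation route required
    if uc.autonomy == "supervised":
        return "medium"  # periodic evaluation and monitoring
    return "low"         # standard review at intake

registry = [
    AIUseCase("marketing copy drafts", "CMO office", False, "assistive"),
    AIUseCase("claims triage agent", "operations", True, "autonomous"),
]

for uc in registry:
    print(f"{uc.name}: tier={risk_tier(uc)}, owner={uc.owner}")
```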
Data security is also a critical component of scaling AI safely, particularly if any personally identifiable information is being used in those AI systems.
“There are some basic security standards and integration standards that we always have in place, but after that, it depends on what type of data the AI tool is going to access and what it will actually integrate with, and that will drive the next level of scrutiny and control that we take a look at,” says Lam.
While AI brings new opportunities, the principles of AI governance are not dissimilar to those for any other technology product.
“It’s less about reinventing the wheel and more about using the wheels that probably exist in a governance way already,” says France.
Organisations should also consider applying a “RACI” matrix when scaling AI projects, outlining who is responsible, accountable, consulted and informed so there is a clear delineation of ownership and accountability, adds France.
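Captured as data, a RACI matrix for an AI rollout might look like the hypothetical Python sketch below; the activities and role names are illustrative assumptions.

```python
# Hypothetical RACI matrix for an AI rollout:
# R = responsible, A = accountable, C = consulted, I = informed.
raci = {
    "model selection":    {"R": "AI lead",   "A": "CIO", "C": "security", "I": "legal"},
    "data governance":    {"R": "data team", "A": "CDO", "C": "legal",    "I": "business units"},
    "production rollout": {"R": "platform",  "A": "CIO", "C": "AI lead",  "I": "all staff"},
}

def accountable_for(activity: str) -> str:
    """Each activity has exactly one accountable owner under RACI."""
    return raci[activity]["A"]

print(accountable_for("data governance"))  # CDO
```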
Good governance also means ensuring that scaling AI across a business is not a one-and-done exercise.
“Our governance councils aren’t just the gatekeepers for entry and then they don’t care what you do after that,” says Lam. “It has to be ongoing, but it also has to recognise that every group is on a different trajectory right now in terms of their AI maturity and adoption.”
Given the potential commercial implications of any roll-out missteps, choosing AI tools that are sovereign by design and underpinned by robust governance standards is key if organisations want to make AI adoption safe at scale, every time.
“We are only as good as our security, and it only takes one thing to just completely unravel all of the hard work or great reputation we may have with our customers and partners out there, and we do not want it to be because of something that we could have controlled, or we should have reviewed,” says Lam.
All of this takes on extra importance amid the current geopolitical climate.
“It’s incredibly important, and probably never more so than in the world we live in today,” says Rich Davies, UK managing partner at Netcompany, an IT consultancy. “If you take the starting point that AI systems have a huge impact, not just on us as citizens but they have impacts on us as a nation, then those security and sovereignty concerns need to be well understood at the highest levels.”
Organisations are also at risk if they don’t fully understand the AI model they are using or have no control over where their data may end up.
“That might be fine in a proof of concept where if it goes badly wrong, it doesn’t really matter, but that’s not the same as doing it with your core systems,” says Davies.
By building sovereignty and governance into their AI programmes, organisations can make AI model choice both safe and scalable.