
The artificial intelligence revolution, once just a futuristic promise, has rapidly become a reality. According to McKinsey & Company, 88% of organisations now use AI in at least one business function, up from 78% a year earlier.
But despite businesses’ desire to embrace AI, the gap between ambition and enterprise readiness is stark. Legacy systems, insufficient governance and, crucially, messy or unreliable data can all hinder success, frequently leading to project failures.
For AI to be successful, businesses need to focus on building an intelligent data foundation, which will see data treated as a critical asset rather than an afterthought.
Establish upstream governance for ‘measured acceleration’
For AI systems to be dependable, organisations must incorporate governance and accountability measures into their data management processes from the outset.
By integrating governance early in the data lifecycle and systematically embedding it within the processes of data design, approval and evaluation, organisations can proactively manage risks, reducing potential issues in the future.
Dan Broadhurst, commercial director at Finova, says: “Too many banks are investing millions into AI models without first securing their most important asset: their data. If an AI model can only draw from cluttered folders and unstructured data, the result – to be blunt – is a total mess.
“When data is clean, standardised and clearly owned, teams can deploy models without constantly second-guessing the inputs or revalidating the outputs.”
Rather than hamper innovation, upstream governance supports responsible and measured AI acceleration, say experts.
Toby Thomas, director of research and corporate intelligence at S-RM, says: “Upstream governance is what stops AI becoming a risk multiplier rather than a productivity tool. Clear governance upfront sets boundaries on where AI can be used, what level of reliance is appropriate and who is accountable.
“That clarity actually accelerates adoption because teams know what is permitted, rather than experimenting in silos or hesitating due to uncertainty.”
Choose platforms that enable deep integration
Many large organisations depend on multiple applications that don’t talk to each other, forcing employees to manually transfer data.
For effective AI acceleration, business leaders should select platforms where governance, analytics and AI integrate smoothly to enable seamless data exchange across systems. Furthermore, as businesses expand, platforms with strong integration features will allow them to add new applications without needing to change their infrastructure.
“Platform choice shouldn’t be driven by novelty or individual teams experimenting in parallel,” says Thomas. “From a practical perspective, firms should prioritise platforms that integrate cleanly with existing systems and data flows, rather than creating standalone AI tools that don’t talk to each other.”
As a first step, Chris Palethorpe, client partner at 4most, recommends organisations define their business requirements in a clear and prescriptive way, enabling IT teams to select the right solution.
“This often involves balancing competing priorities across business units. Understanding where requirements are fixed and where there is flexibility is essential to making effective decisions.”
Implement semantic layers to align definitions across systems
The concept of semantic layers is not new, but it is fast evolving from a ‘nice-to-have’ to a critical component of data infrastructure.
As AI becomes more embedded in business workflows, it brings with it a new kind of risk: lack of context and consistency. At best this can lead to inaccuracies; at worst, costly outcomes.
Joseph Forooghian, UK head of data strategy and advisory at Capco, explains: “The biggest blocker to AI value isn’t the tools or the models, it’s the lack of context. Without context, data is easily misused or misunderstood.”
While humans possess critical knowledge and can interpret or fill gaps, AI models lack such institutional memory. That is where semantic layers become crucial: they clearly define what data means, how entities relate, and how terms are used in ways machines can comprehend.
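To make the idea concrete, a semantic layer can be thought of as a shared dictionary that maps each business term to the field every source system uses for it, so any consumer, human or machine, resolves the term the same way. The sketch below is purely illustrative; all system and field names are assumptions, not a reference to any particular product.

```python
# Minimal sketch of a semantic layer: a shared mapping from business terms
# to the fields each source system uses, plus an agreed definition. All
# system and field names here are illustrative assumptions.

SEMANTIC_LAYER = {
    "customer_id": {
        "definition": "Unique identifier for a customer across the business",
        "mappings": {
            "crm": "cust_no",
            "billing": "account_ref",
            "warehouse": "customer_key",
        },
    },
    "annual_revenue": {
        "definition": "Total invoiced revenue per customer, GBP, trailing 12 months",
        "mappings": {
            "billing": "rev_12m_gbp",
            "warehouse": "ltm_revenue",
        },
    },
}

def resolve(term: str, system: str) -> str:
    """Translate a business term into the field name a given system uses."""
    return SEMANTIC_LAYER[term]["mappings"][system]

print(resolve("customer_id", "billing"))  # account_ref
```

Because the definitions live in one governed place rather than in each team’s head, an AI model querying the billing system and an analyst querying the warehouse are guaranteed to mean the same thing by “customer”.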
Define reusable data products to reduce revalidation
Traditional data pipelines often create bottlenecks, making data products increasingly valuable. A data product goes beyond a simple data set; it is structured for use across multiple use cases and applications without re-engineering.
“The key is treating data less as a one-off input and more as a governed asset,” explains Thomas. “Reusable data products need consistent definitions, clear ownership and documented controls around how the data was created, validated and updated to be successful.
“When that groundwork is in place, teams don’t need to repeatedly re-check the same data every time it’s used. Instead, they can rely on agreed standards and lineage.”
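In practice, this means the data travels with its definition, owner and validation record, so a consuming team can check the lineage rather than revalidate from scratch. A minimal sketch of that idea, with all field names and thresholds chosen for illustration only:

```python
# Illustrative sketch of a reusable data product: data packaged with its
# agreed definition, accountable owner, lineage and last validation date.
# Names and the 30-day freshness threshold are assumptions for this example.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataProduct:
    name: str
    owner: str                 # accountable team or individual
    definition: str            # agreed business meaning
    last_validated: date       # when quality checks last passed
    lineage: list[str] = field(default_factory=list)  # upstream sources

    def is_trusted(self, max_age_days: int = 30) -> bool:
        """Consumer-side check: validated recently enough to reuse as-is."""
        return (date.today() - self.last_validated).days <= max_age_days

customers = DataProduct(
    name="customer_master",
    owner="data-office",
    definition="One record per active customer, deduplicated across CRM and billing",
    last_validated=date.today(),
    lineage=["crm.cust", "billing.accounts"],
)
print(customers.is_trusted())  # True
```

The point of the pattern is that the trust check is cheap and standardised: each new use case calls `is_trusted()` against the agreed threshold instead of commissioning its own data quality review.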
Organisations that invest in well-defined data products will find it easier to scale AI, note experts.
Brad Novak, chief technology officer at Rathbones, says: “The highest return in AI for Rathbones isn’t the newest model, it’s the foundations. Reusable data products cut revalidation and maximise the value extracted from our AI models.”
For Palethorpe, effective data products should encompass both today’s critical processes and the data requirements needed to support future business growth.
“If data products are too narrow, business units will source data elsewhere, creating duplication and reducing control. If they are too broad, data quality and trust can become harder to maintain.
“Firms also need an operating model that allows products to evolve in line with business needs, ensuring they remain reusable and effective over time.”
Rationalise vendors, embed trust and track outcomes
Fewer platforms mean fewer opportunities for governance and accountability to break down. Going forward, organisations will need to prioritise deep integration over managing multiple vendors.
Trustworthy data is also vital; unreliable data can undermine AI projects and the risks are heightened with agentic AI, where poor data can lead to harmful outcomes.
Trust should be integrated throughout each stage of the data lifecycle, from initial collection to governance, security and decision-making. By promoting accountability, organisations help ensure employees, stakeholders and customers have confidence that their AI systems rely on reliable and accurate data.
Forooghian says: “Trust does not stop at the data. Stakeholders also need to understand how and why a model arrived at its output. When AI is embedded into operational processes, explainability becomes essential. Without clear traceability of how and why decisions are made by AI models, trust cannot scale, and without this, neither can outcomes.”
However, Thomas emphasises that embedding trust in AI takes time and intention.
“Firms should expect a short-term trust gap and plan for it, rather than assuming AI will deliver instant returns,” he explains. “Tracking outcomes also requires more than headline productivity metrics. Many benefits show up in small, previously unmeasured tasks, so organisations need to be deliberate about how they assess value and avoid incentives that encourage over-use.”
AI success will not be determined by the sophistication of models alone, but by the strength of the data foundations that support them. Organisations that prioritise governance, integration, context and trust will be best placed to turn AI ambition into measurable, sustainable value.
For more information please visit www.quest.com