AI leadership is less about adopting the latest tools and more about building the organisational foundations that allow innovation to scale safely. The firms pulling ahead are those that treat trusted data, clear accountability and executive alignment as strategic capabilities rather than technical afterthoughts. In an era of tightening regulation and rising expectations around AI governance, these qualities increasingly separate experimentation from real transformation.
Commercial Feature
Scaling intelligence: the role of trusted data platforms in financial services
As AI moves from experimentation to production in FS&I, the primary constraint is no longer technology – it’s trusted data
AI works, but does it scale? Despite significant investment in pilot projects, many FS&I firms are still struggling to shift from pilots to production mode. In fact, a 2025 MIT report found that 95% of corporate generative AI pilots fail to deliver measurable ROI or move beyond the testing stage. As a result, true transformation of operations and processes never materialises.
One of the biggest bottlenecks between experimentation and scale is a lack of trust in the data being fed into AI. “Data trust is the dominant constraint when it comes to experimenting and running AI pilots, but also in terms of leveraging and operationalising it,” says Glenda O’Keefe, field CTO at Quest Software.
In many respects, this lack of trust is understandable. Data issues can cause AI to produce false or biased results, potentially creating regulatory risks and reputational damage. Problems don’t just stem from data quality either: a lack of insight into freshness, fitness for purpose, explainability and lineage can also have serious repercussions for the business.
Lineage goes hand in hand with data quality because it provides the transparency needed to assess and maintain quality. Without understanding the full journey of the data (where it originated and how it was transformed), it is impossible to judge whether it is fit for purpose; firms need visibility into the full life cycle, from source to decision.
“Regulatory compliance is also raising the bar, especially with the EU AI Act. Organisations now need clear visibility into what data they hold, where it resides, and who is accessing or using it. The level of internal accountability required today is significantly higher than it has ever been,” says O’Keefe.
Today, however, it’s often unclear who ultimately owns certain data sets and is responsible for their quality and lineage. Without this knowledge, business users struggle to determine whether the data is safe to use, while outdated or unclear governance models only add to the problem.
The risks of mistakenly feeding AI models ‘bad’ data have further increased with the introduction of the EU AI Act, which requires firms deploying high-risk AI systems to comply with strict obligations around transparency, documentation, risk mitigation, and human oversight. Non-compliance can result in penalties of up to 7% of global annual turnover.
Tackling fragmentation
Meeting these requirements is particularly challenging for firms operating in fragmented data environments built on multiple point solutions. “It creates issues not just in the back end, but also at the executive level. Leaders are trying to make decisions based on analytics derived from that data,” O’Keefe says.
“Teams spend huge amounts of time repeatedly gathering and cleaning the same data for different use cases, slowing delivery and creating inconsistent results. At the executive level, this fragmentation undermines confidence in analytics and makes decision‑making significantly harder,” she adds.
Bolt-on governance can’t account for all these issues. The solution is integrated governance that covers the entire lifecycle of data products, pipelines and models, enabling measured acceleration by balancing speed with risk. Rather than viewing governance as a compliance burden or a policing function, leading FS&I firms position it at the heart of their AI strategy and foster collaboration across engineering, back-office, technical and business teams so that governance keeps pace with evolving business strategies.
Quest’s Trusted Data Management Platform helps organisations achieve this kind of governance by connecting data directly to desired business outcomes. It provides the tools needed to bring visibility, quality and context to data, and to embed the trust needed to scale AI. It bridges the gap between technical IT processes and business needs by making data products and AI models easy to discover, evaluate and use, and it provides full insight into what data exists across the organisation, what it means in a business context, and whether data and models have drifted beyond quality thresholds.
The data marketplace
Crucially, the platform’s data marketplace function enables business users to find, compare, and request access to the data products they need to develop high-quality, trusted solutions. This marketplace for data is built around a consumer-like experience, offering quick insight into how fresh data is, user comments, star ratings, automated trust scores and usage frequency.
“We’ve brought the e-commerce concept into our marketplace, which makes it easy to ensure that everyone has access to good, governed data products,” O’Keefe explains.
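The exact scoring behind the marketplace’s automated trust scores isn’t public. Purely as an illustration of the idea, the signals described above (freshness, star ratings, quality checks, usage frequency) could be blended into a single 0–100 score along these lines; the weights, names and thresholds below are entirely hypothetical:

```python
def trust_score(freshness_days: float, star_rating: float,
                quality_pass_rate: float, monthly_uses: int) -> float:
    """Blend marketplace signals into a single 0-100 trust score.

    All weights and cut-offs are illustrative assumptions, not Quest's
    actual scoring model.
    """
    freshness = max(0.0, 1 - freshness_days / 30)   # fully stale after 30 days
    rating = star_rating / 5                         # normalise 0-5 stars to 0-1
    usage = min(monthly_uses / 100, 1.0)             # saturates at 100 uses/month
    score = (0.3 * freshness + 0.2 * rating
             + 0.4 * quality_pass_rate + 0.1 * usage)
    return round(100 * score, 1)

# A fresh, well-rated, heavily used product scores high...
fresh = trust_score(freshness_days=2, star_rating=4.5,
                    quality_pass_rate=0.95, monthly_uses=250)
# ...while the same product six weeks stale scores noticeably lower.
stale = trust_score(freshness_days=45, star_rating=4.5,
                    quality_pass_rate=0.95, monthly_uses=250)
assert fresh > stale
```

A weighted blend like this is only one design choice; real marketplaces might also decay ratings over time or weight signals per data domain.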
The platform also links technical metadata to a business glossary so that everyone is using the same definitions for key business terms. As trust depends on knowing where data came from, it automates data lineage too, showing the journey of data from its source, through every transformation, to its final destination. AI/ML-powered anomaly detection also monitors data in real time. If a data pipeline delivers data that is ‘off’, stakeholders are immediately alerted so that the problem can be addressed before it does critical harm.
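Quest’s detection is AI/ML-powered; as a deliberately simplified sketch of the underlying principle only, a statistical check on a pipeline metric such as daily row count might flag data that is ‘off’ like this (all names and thresholds are illustrative assumptions):

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float,
                 threshold: float = 3.0) -> bool:
    """Flag a pipeline metric (e.g. daily row count) that drifts beyond
    `threshold` standard deviations of its recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu          # no variation seen: any change is suspect
    return abs(latest - mu) / sigma > threshold

# Example: a feed that normally delivers ~10,000 rows per day.
history = [10_120, 9_980, 10_050, 10_210, 9_890, 10_000]
assert is_anomalous(history, 10_100) is False   # within the normal range
assert is_anomalous(history, 2_300) is True     # alert: volume has collapsed
```

In practice the alert would feed a notification channel so the owning team can intervene before downstream models consume the bad batch.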
Data that is properly certified and easy to discover and evaluate can be used many times across different models and use cases. These reusable data assets can help to accelerate projects and scale them faster, ultimately enabling FS&I firms to innovate at speed without introducing unnecessary risk or duplicated work. In fact, Quest claims that a unified platform approach can deliver trusted data products up to 54% faster than traditional approaches.
The company’s recently released Automated Data Product Factory promises to further enhance the way data is handled. Quest describes it as the industry’s only AI-driven capability designed to transform how organisations define, design and operationalise data products. “One of the biggest barriers to scaling data products is complexity,” says O’Keefe. “Traditionally, if a business user wants a new data product, they have to understand modelling tools, write detailed technical specifications, and work through multiple handoffs with engineering. That slows everything down.”
“With the Automated Data Product Factory, we remove that friction. Business users don’t need to learn modelling tools or draft technical documents. Both business and technical teams can effectively have a conversation with the data in natural language by framing the use case they’re trying to solve with AI,” O’Keefe says. “You just type in your use case or business challenge, and the AI searches across your existing curated data assets and delivers a production-ready data product in a fraction of the time.”
The platform’s AI interprets that intent and automatically constructs the logical data product model. Rather than starting from scratch each time, it prioritises existing enterprise and industry-standard models before generating anything new. This prevents the creation of redundant, one-off data products and reinforces consistency across the organisation.
“The result is not just faster delivery, it’s standardisation at scale, reusability, reduced duplication and a data product ecosystem that stays aligned with enterprise architecture,” she adds.
As AI adoption accelerates, a trusted data foundation will play an ever bigger role in how well FS&I firms can scale the technology. Organisations that consolidate their AI efforts on a single platform and pursue reusable data products are likely to save countless hours, enabling teams to handle more projects with the same staff and scale output without significant cost increases. In short, trusted data is now the key to AI success in 2026.
What defines an AI leader?
Executive-level support
Organisations that successfully scale AI have several common traits, including executive alignment and engagement with AI strategy, and cross-functional collaboration between technology and business teams. “The C-level must be actively involved when you’re talking about scaling and operationalising AI,” says O’Keefe. “Without executive sponsorship, initiatives stall before they deliver real impact.”
Well-defined outcomes
AI pilots that scale successfully also have strongly defined business outcomes. “You need to understand what the final outcome is and then work backwards from that,” O’Keefe advises. In practice, this means that rather than beginning with technology choices, AI leaders first identify the business problem they want to solve, define what success looks like, and build a roadmap that will get them there.
Identifiable data champions
Identifying and empowering data champions is equally important for scaling AI. These individuals can help to bridge technical and business perspectives, making them invaluable for driving AI adoption and building trust. “You need to know who your data champions are, as they play a vital role in change management and adoption,” O’Keefe notes.
Trusted data
AI leaders also excel at making data products discoverable and easy to use, while maintaining proactive monitoring of data and models. They store products in a central catalog with clear descriptions, ownership, lineage, and quality metrics. They also embed governance into every stage of the AI lifecycle, from design to deployment.
How data management can close the AI efficiency gap
FS&I organisations are juggling multiple data management solutions, creating inefficiencies, duplicated effort and slower AI adoption. Here, we highlight the scale of tool sprawl, the operational cost of low data trust, and how converged data platforms are helping firms improve efficiency, governance and AI readiness.
Lessons in mastering data complexity for scalable AI
With most generative AI projects failing, financial services leaders argue that trusted, well-governed and reusable data foundations, not advanced models alone, determine success
In an increasingly technology-driven world, business leaders face growing pressure to deliver on the promise of artificial intelligence. Yet, many initiatives stall, with businesses struggling to move from complex and siloed data environments to trusted, scalable AI systems that deliver real impact.
We know that a staggering 95% of corporate generative AI projects fail to deliver, and for many the problem isn’t technology, but data. MIT points out that only a tiny fraction of enterprise data is AI-ready. Not only is this slowing businesses’ ability to keep pace with technological developments, but it spells trouble for their balance sheet too, with Deloitte estimating that 80% of companies suffer income loss due to poor data quality.
What lessons can businesses learn when it comes to mastering data complexity? Here, three experts from the financial services and insurance sector offer their insight and experience.
Robin Gordon
Chief data officer at Hippo
As chief data officer, one of the most important lessons I’ve learned is that AI scale-up doesn’t start with models, it starts with data readiness.
To be truly AI-enabled, companies must ensure their data is reliable and well-governed. Governance is shifting from manual, document-heavy processes to automated, embedded controls, effectively treating governance as code. As innovation accelerates, governance, compliance and model monitoring are increasingly built into the technology itself rather than managed separately.
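Gordon’s notion of “governance as code” can be sketched as policy expressed in data and enforced automatically rather than documented in a manual. The minimal example below is hypothetical; the field names and rules are assumptions, not any particular firm’s policy:

```python
# Hypothetical governance policy, expressed as data rather than a document.
POLICY = {
    "required_fields": {"owner", "classification", "last_validated"},
    "allowed_classifications": {"public", "internal", "restricted"},
}

def check_dataset(metadata: dict) -> list[str]:
    """Return the list of governance violations for one dataset's metadata.

    An empty list means the dataset passes the embedded controls.
    """
    violations = [f"missing field: {field}"
                  for field in POLICY["required_fields"] - metadata.keys()]
    cls = metadata.get("classification")
    if cls is not None and cls not in POLICY["allowed_classifications"]:
        violations.append(f"unknown classification: {cls}")
    return violations

ok = {"owner": "risk-team", "classification": "internal",
      "last_validated": "2025-11-01"}
assert check_dataset(ok) == []                                   # compliant
assert "missing field: owner" in check_dataset({"classification": "internal"})
```

A check like this would typically run in CI or at pipeline deployment time, so non-compliant datasets are blocked before they reach consumers.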
Simplifying complex data is also key as companies look to make AI scalable. Simplification begins with strong classification and shared definitions. Clearly identifying what data exists, where it resides, and how it should be used eliminates ambiguity. Structuring information into governed data products reduces reliance on competing “sources of truth,” creating a trusted foundation for both people and AI systems.
Similarly, trusted, well-documented data products allow teams to confidently leverage data for analytics or AI initiatives without repeating foundational work. Reusable data products modularise and organise data to enable greater self-service across the organisation. When properly contextualised, with clear definitions, common language and quality standards, they expand how data can be applied.
Ultimately, when deployed effectively, data products support faster testing and learning, enabling new AI capabilities to move into production with less incremental engineering effort.
With a large enterprise transformation I helped lead, we invested in building a robust library of AI agents and LLM-powered tools to support consultants with research, scheduling, and other workflows. The tools were technically strong and thoughtfully designed. However, adoption initially lagged. When we stepped back to diagnose the issue, we realised the constraint wasn’t the AI itself – it was the underlying data. It wasn’t structured or contextualised in a way the tools could reliably act on. That insight shifted our focus. We prioritised reformatting, organising, and enriching the data layer so the AI systems could operate with clarity and consistency. Once we strengthened that foundation, adoption and impact followed.
The broader takeaway for organisations beginning their AI journey is this: scalable AI requires usable, trusted, well-structured data from the outset. Advanced models can amplify value, but only if the data foundation is ready to support them.
Kanika Chaganty
Chief data officer at Brit Insurance
While an AI system is only as strong as the data it learns from, user trust and ethical use are essential.
We embed transparency and trust in our AI rollout through a data-driven, human-centric approach. We focus on diverse and representative datasets to improve accuracy and, for both internal and purchased solutions, we define risks, outcomes and intended use, ensuring due diligence.
Ensuring data is sufficient and quality checked is fundamental for training an AI model. There is an opportunity to leverage previously cleansed and labelled data through reusable data products to speed up AI deployment and make it more cost effective and uniform across initiatives.
Our transformation initiatives are helping us develop platforms to manage the operationalisation of AI solutions and systematically improve data granularity. By managing data through operationalised systems, we are able to implement data security, data ownership, quality checks, governance and controls systematically and consistently.
But AI adoption, at its core, comes down to people and adaptability. To support this, we have focussed on building AI skills within the organisation by offering regular training, encouraging hands-on use, and forming cross-functional teams to bring diverse perspectives to solution development. This is coupled with a focus on experimentation, enablement and operationalisation to help accelerate scaling.
Lastly, in a world where the AI landscape is rapidly evolving, data remains the key differentiator. Curating accurate, accessible datasets is foundational for scaling AI and driving real productivity gains and business outcomes.
Jonathan Lister Parsons
Chief technology officer at PensionBee
To move beyond fragmented data environments, you first need clarity of ownership and a system architecture that permits assembling data from multiple sources. We’ve focused on creating a unified, well-governed data platform rather than allowing siloed tools to proliferate.
Standardised data models ensure data is consumable by AI, and reliable infrastructure ensures the platform is secure and scalable. AI systems are only as strong as the data foundations beneath them.
Trust and governance must be built in from the outset, not retrofitted later. That means strong access controls, transparent audit trails and continuous monitoring of data quality. Clear accountability for datasets and models, alongside rigorous testing and human oversight is vital. In a regulated industry like pensions, explainability and security are non-negotiable – they’re central to maintaining customer confidence.
Reusable data products are critical to accelerating AI. By treating datasets, pipelines and models as reusable assets rather than one-off projects, we reduce duplication and increase speed. A use-case specific dataset makes it easier for AI to draw relevant inferences, and well-defined data products with clear documentation and ownership allow teams to innovate safely and efficiently, building new AI use cases on trusted foundations.
Platform convergence also plays a significant role. Consolidating tooling reduces operational complexity, lowers cost and improves interoperability. It allows engineering and data teams to collaborate more effectively, focusing on delivering customer outcomes.
Our key lesson is to prioritise fundamentals. Invest in data quality, governance and culture before chasing advanced AI use cases. Culture especially: AI is a fast-moving, disruptive technology, so you need everyone to share a similar appetite and vision. Start with clear, practical problems where AI can deliver measurable value, and scale from there. Above all, keep the focus on customer benefit: technology should enhance trust, transparency and simplicity, not add complexity.
Five steps to a trusted, intelligent data foundation
As AI adoption accelerates, these five steps outline how organisations can strengthen governance, integration and data foundations to bridge the gap between ambition and enterprise-ready execution
The artificial intelligence revolution, once just a futuristic promise, has rapidly evolved to become a reality. According to McKinsey & Company, 88% of organisations now use AI in at least one business function, up from 78% a year earlier.
But despite businesses’ desire to embrace AI, the gap between ambition and enterprise readiness is stark. Legacy systems, insufficient governance and crucially, messy or unreliable data can all hinder success, frequently leading to project failures.
For AI to be successful, businesses need to focus on building an intelligent data foundation, which will see data treated as a critical asset rather than an afterthought.
01
Establish upstream governance for ‘measured acceleration’
For AI systems to be dependable, organisations must incorporate governance and accountability measures into their data management processes from the outset.
By integrating governance early in the data lifecycle and systematically embedding it within the processes of data design, approval and evaluation, organisations can proactively manage risks, reducing potential issues in the future.
Dan Broadhurst, commercial director at Finova, says: “Too many banks are investing millions into AI models without first securing their most important asset: their data. If an AI model can only draw from cluttered folders and unstructured data, the result – to be blunt – is a total mess.
“When data is clean, standardised and clearly owned, teams can deploy models without constantly second-guessing the inputs or revalidating the outputs.”
Rather than hamper innovation, upstream governance supports responsible and measured AI acceleration, say experts.
Toby Thomas, director of research and corporate intelligence at S-RM, says: “Upstream governance is what stops AI becoming a risk multiplier rather than a productivity tool. Clear governance upfront sets boundaries on where AI can be used, what level of reliance is appropriate and who is accountable.
“That clarity actually accelerates adoption because teams know what is permitted, rather than experimenting in silos or hesitating due to uncertainty.”
02
Choose platforms that enable deep integration
Many large organisations depend on multiple applications that don’t talk to each other, forcing employees to manually transfer data.
For effective AI acceleration, business leaders should select platforms where governance, analytics and AI integrate smoothly to enable seamless data exchange across systems. Furthermore, as businesses expand, platforms with strong integration features will allow them to add new applications without needing to change their infrastructure.
“Platform choice shouldn’t be driven by novelty or individual teams experimenting in parallel,” says Thomas. “From a practical perspective, firms should prioritise platforms that integrate cleanly with existing systems and data flows, rather than creating standalone AI tools that don’t talk to each other.”
As a first step, Chris Palethorpe, client partner at 4most, recommends organisations define their business requirements in a clear and prescriptive way, enabling IT teams to select the right solution.
“This often involves balancing competing priorities across business units. Understanding where requirements are fixed and where there is flexibility is essential to making effective decisions.”
03
Implement semantic layers to align definitions across systems
The concept of semantic layers is not new, but it is fast evolving from a ‘nice-to-have’ to a critical component of data infrastructure.
As AI becomes more embedded in business workflows, it brings with it a new kind of risk: lack of context and consistency. At best this can lead to inaccuracies; at worst, costly outcomes.
Joseph Forooghian, UK head of data strategy and advisory at Capco, explains: “The biggest blocker to AI value isn’t the tools or the models, it’s the lack of context. Without context, data is easily misused or misunderstood.”
While humans possess critical knowledge and can interpret or fill gaps, AI models lack such institutional memory. That is where semantic layers become crucial: they clearly define what data means, how entities relate, and how terms are used in ways machines can comprehend.
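A semantic layer can be as simple as a machine-readable map from business terms to agreed definitions and physical data. The sketch below only illustrates the concept; every name, table and expression in it is a hypothetical example:

```python
# Hypothetical semantic layer: business terms mapped to agreed meanings
# and physical data locations, so humans and AI share one vocabulary.
SEMANTIC_LAYER = {
    "active_customer": {
        "definition": "customer with at least one transaction in the last 90 days",
        "table": "warehouse.customers",
        "sql_filter": "last_txn_date >= CURRENT_DATE - INTERVAL '90 days'",
        "owner": "customer-data-team",
    },
}

def resolve(term: str) -> dict:
    """Give an AI tool (or analyst) the agreed meaning of a business term."""
    try:
        return SEMANTIC_LAYER[term]
    except KeyError:
        raise KeyError(f"'{term}' has no agreed definition; "
                       "add it to the semantic layer before use") from None

# Any consumer, human or machine, resolves the same term to the same data.
assert resolve("active_customer")["table"] == "warehouse.customers"
```

The value is less in the data structure than in the discipline: an AI model queries `resolve("active_customer")` instead of guessing which of several competing customer tables to use.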
04
Define reusable data products to reduce revalidation
Traditional data pipelines often create bottlenecks, making data products increasingly valuable. A data product goes beyond a simple data set; it is structured for use across multiple use cases and applications without re-engineering.
“The key is treating data less as a one-off input and more as a governed asset,” explains Thomas. “Reusable data products need consistent definitions, clear ownership and documented controls around how the data was created, validated and updated to be successful.
“When that groundwork is in place, teams don’t need to repeatedly re-check the same data every time it’s used. Instead, they can rely on agreed standards and lineage.”
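The governed asset Thomas describes (consistent definitions, clear ownership, documented validation) could be captured in a minimal descriptor along these lines. This is a hypothetical sketch, not any vendor’s schema; the point is that consumers check certification status rather than re-validating the data themselves:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataProduct:
    """Minimal descriptor for a governed, reusable data product."""
    name: str
    owner: str
    version: str
    lineage: list[str]        # upstream sources the product was built from
    validated_until: date     # end of the current certification window

    def needs_revalidation(self, today: date) -> bool:
        """True once the product's certification window has lapsed."""
        return today > self.validated_until

fx_rates = DataProduct(
    name="fx_daily_rates",
    owner="market-data-team",
    version="2.3.0",
    lineage=["vendor_feed.fx", "warehouse.calendar"],
    validated_until=date(2026, 6, 30),
)

# Consumers rely on agreed standards and lineage instead of re-checking data.
assert fx_rates.needs_revalidation(date(2026, 1, 15)) is False
assert fx_rates.needs_revalidation(date(2026, 7, 1)) is True
```

When the window lapses, only the owning team revalidates once; every downstream use case inherits the renewed certification.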
Organisations that invest in well-defined data products will find it easier to scale AI, note experts.
Brad Novak, chief technology officer at Rathbones, says: “The highest return in AI for Rathbones isn’t the newest model, it’s the foundations. Reusable data products cut revalidation and maximise the value extracted from our AI models.”
For Palethorpe, effective data products should encompass both today’s critical processes and the data requirements needed to support future business growth.
“If data products are too narrow, business units will source data elsewhere, creating duplication and reducing control. If they are too broad, data quality and trust can become harder to maintain.
“Firms also need an operating model that allows products to evolve in line with business needs, ensuring they remain reusable and effective over time.”
05
Rationalise vendors, embed trust and track outcomes
Fewer platforms mean fewer opportunities for governance and accountability to break down. Going forward, organisations will need to prioritise deep integration over managing multiple vendors.
Trustworthy data is also vital; unreliable data can undermine AI projects and the risks are heightened with agentic AI, where poor data can lead to harmful outcomes.
Trust should be integrated throughout each stage of the data lifecycle, from initial collection to governance, security, and decision-making. By promoting accountability, organisations help ensure employees, stakeholders, and customers have confidence that their AI systems rely on reliable and accurate data.
Forooghian says: “Trust does not stop at the data. Stakeholders also need to understand how and why a model arrived at its output. When AI is embedded into operational processes, explainability becomes essential. Without clear traceability of how and why decisions are made by AI models, trust cannot scale, and without this, neither can outcomes.”
However, Thomas emphasises that embedding trust in AI takes time and intention.
“Firms should expect a short-term trust gap and plan for it, rather than assuming AI will deliver instant returns,” he explains. “Tracking outcomes also requires more than headline productivity metrics. Many benefits show up in small, previously unmeasured tasks, so organisations need to be deliberate about how they assess value and avoid incentives that encourage over-use.”
AI success will not be determined by the sophistication of models alone, but by the strength of the data foundations that support them. Organisations that prioritise governance, integration, context and trust will be best placed to turn AI ambition into measurable, sustainable value.