
AI works, but does it scale? Despite significant investment in pilot projects, many FS&I firms are still struggling to shift from pilots to production mode. In fact, a 2025 MIT report found that 95% of corporate generative AI pilots fail to deliver measurable ROI or move beyond the testing stage. As a result, true transformation of operations and processes never materialises.
One of the biggest bottlenecks between experimentation and scale is a lack of trust in the data being fed into AI. “Data trust is the dominant constraint when it comes to experimenting and running AI pilots, but also in terms of leveraging and operationalising it,” says Glenda O’Keefe, field CTO at Quest Software.
In many respects, this lack of trust is understandable. Data issues can cause AI to produce false or biased results, potentially creating regulatory risks and reputational damage. Problems don’t just stem from data quality either: a lack of insight into freshness, fitness for purpose, explainability and lineage can also have serious repercussions for the business.
Lineage goes hand in hand with data quality because it provides the transparency needed to assess and maintain quality. Without understanding the full journey of the data, where it originated and how it was transformed, organisations cannot judge whether it can be trusted; they need visibility into the full life cycle, from source to decision.
“Regulatory compliance is also raising the bar, especially with the EU AI Act. Organisations now need clear visibility into what data they hold, where it resides, and who is accessing or using it. The level of internal accountability required today is significantly higher than it has ever been,” O’Keefe adds.
Today, however, it’s often unclear who ultimately owns certain data sets and is responsible for their quality and lineage. Lacking this knowledge, business users struggle to determine whether the data is safe to use, while outdated or unclear governance models only add to the problem.
The risks of mistakenly feeding AI models ‘bad’ data have further increased with the introduction of the EU AI Act, which requires firms deploying high-risk AI systems to comply with strict obligations around transparency, documentation, risk mitigation, and human oversight. Non-compliance can result in penalties of up to 7% of global annual turnover.
Tackling fragmentation
Meeting these requirements is particularly challenging for firms operating in fragmented data environments built on multiple point solutions. “It creates issues not just in the back end, but also at the executive level. Leaders are trying to make decisions based on analytics derived from that data,” O’Keefe says.
“Teams spend huge amounts of time repeatedly gathering and cleaning the same data for different use cases, slowing delivery and creating inconsistent results. At the executive level, this fragmentation undermines confidence in analytics and makes decision‑making significantly harder,” she adds.
Bolt-on governance can’t account for all these issues. The answer is integrated governance that covers the entire lifecycle of data products, pipelines and models, enabling measured acceleration by balancing speed with risk. Rather than viewing governance as a compliance burden or a policing function, leading FS&I firms position it at the heart of their AI strategy, empowering collaboration across engineering, back-office, technical and business teams so that governance remains true to evolving business strategies.
Quest’s Trusted Data Management Platform is designed to deliver this kind of integrated governance, connecting data directly to an organisation’s desired business outcomes. It provides the tools needed to bring visibility, quality and context to data, and to embed the trust needed to scale AI. It bridges the gap between technical IT processes and business needs by making data products and AI models easy to discover, evaluate and use. It also provides full insight into what data exists across the organisation, what it means in a business context, and whether data and models have drifted beyond quality thresholds.
The data marketplace
Crucially, the platform’s data marketplace function enables business users to find, compare, and request access to the data products they need to develop high-quality, trusted solutions. This marketplace for data is built around a consumer-like experience, offering quick insight into how fresh data is, user comments, star ratings, automated trust scores and usage frequency.
“We’ve brought the e-commerce concept into our marketplace, which makes it easy to ensure that everyone has access to good, governed data products,” O’Keefe explains.
The platform also links technical metadata to a business glossary so that everyone is using the same definitions for key business terms. As trust depends on knowing where data came from, it automates data lineage too, showing the journey of data from its source, through every transformation, to its final destination. AI/ML-powered anomaly detection also monitors data in real time. If a data pipeline delivers data that is ‘off’, stakeholders are immediately alerted so that the problem can be addressed before it does critical harm.
Data that is properly certified and easy to discover and evaluate can be used many times across different models and use cases. These reusable data assets can help to accelerate projects and scale them faster, ultimately enabling FS&I firms to innovate at speed without introducing unnecessary risk or duplicated work. In fact, Quest claims that a unified platform approach can deliver trusted data products up to 54% faster than traditional approaches.
The company’s recently released Automated Data Product Factory promises to further enhance the way data is handled. “It is the industry’s only AI-driven capability designed to transform how organisations define, design and operationalise data products,” says O’Keefe. “One of the biggest barriers to scaling data products is complexity. Traditionally, if a business user wants a new data product, they have to understand modelling tools, write detailed technical specifications, and work through multiple handoffs with engineering. That slows everything down.”
“With the Automated Data Product Factory, we remove that friction. Business users don’t need to learn modelling tools or draft technical documents. Both business and technical teams can effectively have a conversation with the data in natural language by framing the use case they’re trying to solve with AI,” O’Keefe says. “You just type in your use case or business challenge, and the AI will search across your existing curated data assets and provide a production-ready data product in a fraction of the time.”
The platform’s AI interprets that intent and automatically constructs the logical data product model. Rather than starting from scratch each time, it prioritises existing enterprise and industry-standard models before generating anything new. This prevents the creation of redundant, one-off data products and reinforces consistency across the organisation.
“The result is not just faster delivery, it’s standardisation at scale, reusability, reduced duplication and a data product ecosystem that stays aligned with enterprise architecture,” she adds.
As AI adoption accelerates, a trusted data foundation will play an ever bigger role in how well FS&I firms are able to scale the technology. Organisations that consolidate AI onto a single platform and pursue reusable data products stand to save countless hours, enabling teams to handle more projects with the same staff and scale output without significant cost increases. In short, trusted data is now the key to AI success in 2026.
What defines an AI leader?
AI leadership is less about adopting the latest tools and more about building the organisational foundations that allow innovation to scale safely. The firms pulling ahead are those that treat trusted data, clear accountability and executive alignment as strategic capabilities rather than technical afterthoughts. In an era of tightening regulation and rising expectations around AI governance, these qualities increasingly separate experimentation from real transformation.
Organisations that successfully scale AI have several common traits, including executive alignment and engagement with AI strategy, and cross-functional collaboration between technology and business teams. “The C-level must be actively involved when you’re talking about scaling and operationalising AI,” says O’Keefe. “Without executive sponsorship, initiatives stall before they deliver real impact.”
AI pilots that scale successfully also have strongly defined business outcomes. “You need to understand what the final outcome is and then work backwards from that,” O’Keefe advises. In practice, this means that rather than beginning with technology choices, AI leaders first identify the business problem they want to solve, define what success looks like, and build a roadmap that will get them there.
Identifying and empowering data champions is equally important for scaling AI. These individuals can help to bridge technical and business perspectives, making them invaluable for driving AI adoption and building trust. “You need to know who your data champions are, as they play a vital role in change management and adoption,” O’Keefe notes.
AI leaders also excel at making data products discoverable and easy to use, while maintaining proactive monitoring of data and models. They store products in a central catalogue with clear descriptions, ownership, lineage, and quality metrics. They also embed governance into every stage of the AI lifecycle, from design to deployment.
For more information please visit www.quest.com