According to a recent survey by Lloyds, half of UK financial institutions plan to increase their AI investment this year. US banks and major firms in other regions have also spent billions on the technology in an effort to radically improve productivity, operational efficiency and strategic insight. But as enterprises attempt to move from AI experimentation to usage at scale, some are finding that a fundamental rethink of data management is needed. In fact, Gartner forecasts that through 2026, organisations will abandon 60% of AI projects unsupported by AI-ready data.
“Data trust is the dominant constraint when it comes to experimenting and running AI pilots, but also in terms of leveraging and operationalising it,” says Glenda O’Keefe, field CTO at Quest Software. “The success of AI is only as good as the quality of its data and the trustworthiness of the data behind it. Data trust is the foundation of everything.”
There are several clear reasons why. Nearly one-third of respondents to a recent McKinsey survey reported negative consequences stemming from AI inaccuracy, which is generally down to issues with the data underpinning the model. The emergence of agentic AI – autonomous agents that don’t just answer questions but take actions independently – has added further impetus to the need to address data problems that could undermine the effectiveness and accuracy of AI.
AI ambition is being hampered by a lack of trust in data
It’s also important to note that trusted data is about more than just data quality. For example, ‘trust’ actually encompasses the entire lineage of data – i.e. where data came from and how it’s been transformed – not just whether datasets appear accurate and usable.
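The idea that trust covers provenance as well as accuracy can be made concrete. The sketch below (illustrative only; the class and field names are invented, not part of any product described here) shows a minimal lineage record that captures where a dataset came from and each transformation applied to it:

```python
# Illustrative sketch: a minimal lineage record capturing a dataset's
# source and every transformation step, so that 'trust' covers
# provenance and history, not just whether the data looks accurate.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    source: str                       # where the data originated
    steps: list = field(default_factory=list)  # ordered transformation history

    def record(self, operation: str, actor: str) -> None:
        """Append one transformation step with a UTC timestamp."""
        self.steps.append({
            "operation": operation,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })

lineage = LineageRecord(source="core_banking.transactions")
lineage.record("deduplicate on transaction_id", actor="etl_pipeline_v2")
lineage.record("mask PII columns", actor="governance_job")
```

A record like this lets a reviewer answer "where did this come from and what happened to it?" before any model consumes the data.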
This trust can easily be undermined by a proliferation of data management point solutions, which create silos and blind spots that prevent AI from operating reliably at scale. “Working in silos, having multiple point solutions creates a lot of issues in terms of transparency and accountability because you end up with multiple data sets and therefore multiple versions of the truth. This causes problems not just in the back end, with the data stewards, but also at the executive level, because they’re trying to make decisions based on the analytics coming from that data,” O’Keefe says.
Often it is unclear who owns data or is ultimately responsible for its quality, making it hard to find, utilise and trust. Together with data inconsistencies, poor insight into lineage, and a general lack of good governance, this can result in missed business opportunities, regulatory issues or reputational damage. In fact, even when governance is strong during the initial deployment of AI projects, data drift can still occur when a model trained on a certain dataset is then used in a different context.
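The data drift mentioned above can be quantified. One widely used baseline (not specific to any vendor mentioned in this article) is the Population Stability Index, which compares the distribution a model was trained on with the data it now receives:

```python
# Illustrative sketch: Population Stability Index (PSI), a common metric
# for detecting drift between training-time data and production data.
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
import math

def psi(expected, actual, bins=10):
    """PSI between two numeric samples, using shared equal-width buckets."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_shares(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty buckets
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training = [0.1 * i for i in range(100)]          # distribution at training time
production = [0.1 * i + 4.0 for i in range(100)]  # shifted distribution in a new context
```

Running `psi(training, production)` on the shifted sample returns a value well above 0.25, the kind of signal that should trigger a review before the model's outputs are trusted in the new context.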
At the same time, data preparation and validation processes must also be fast enough to support innovation. Bolt-on governance added as an afterthought to existing systems tends to create bottlenecks that slow things down. FS&I firms at the forefront of AI innovation, on the other hand, embed governance upstream so that it covers every stage of the AI lifecycle without overly restricting the ability to scale projects.
Unified data platforms
Fast-evolving regulation around AI means that few organisations can afford to ignore data issues that could lead to bias or false results. The EU AI Act, for example, requires firms deploying high-risk AI systems to meet strict obligations around transparency, documentation, risk mitigation and human oversight. Non-compliance can result in penalties of up to 7% of global annual turnover. In short, firms need to know exactly what data they have, where it’s held, who’s using it, and for what purpose.
The Quest Trusted Data Management Platform is the industry’s first and only unified, SaaS-native solution purpose-built for delivering trusted, AI-ready data at speed and scale. It unifies and automates five core capabilities – data modelling, data cataloguing, data governance, data quality, and a data marketplace – helping customers save months of work, millions of dollars, and achieve faster time to value through the automated creation and delivery of trusted data products. Quest also links technical metadata to a business glossary, thereby ensuring everyone uses the same definitions for key terms. “AI can provide extended value by helping everyone to sing from the same hymn sheet through definitions, tagging, classifications, etc.,” says O’Keefe.
A data marketplace function, meanwhile, enables business users to easily find and compare data sets and models. It’s designed to provide an ecommerce-like experience, with quick insights into data freshness, user comments, star ratings, popularity, and automated data trust scores (organisations can assign weighted importance to factors such as data quality, governance completeness and business relevance). All of this makes it easy for both business consumers and data teams to surface and identify high-value, well-governed and ultimately trusted assets.
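The weighted trust score described above is, at its simplest, a weighted average over per-factor scores. The sketch below is a hypothetical illustration of that idea (the factor names and weights are invented, not Quest's actual formula):

```python
# Hypothetical sketch of a weighted data trust score: each governance
# factor is scored 0-100 and the organisation chooses the weights.
def trust_score(factors: dict, weights: dict) -> float:
    """Weighted average of per-factor scores, normalised by total weight."""
    total = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total

# Example weighting: quality matters most for this organisation
weights = {"data_quality": 0.5, "governance_completeness": 0.3, "business_relevance": 0.2}
asset = {"data_quality": 92, "governance_completeness": 80, "business_relevance": 70}

score = trust_score(asset, weights)
# 92*0.5 + 80*0.3 + 70*0.2 = 46 + 24 + 14 = 84.0
```

Normalising by the total weight means the weights need not sum to exactly 1, which keeps the scheme robust as factors are added or removed.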
Reusable data products
Making data products reusable is a central tenet of the platform. Data that is properly certified and easy to discover and evaluate can be reused across multiple models and use cases. Technical debt decreases too, as a smaller set of maintained products keeps the organisation’s data ecosystem cleaner and more adaptable.
McKinsey analysis also highlights the fact that many of the costs from developing a data product are one-time investments. At one telco, for example, an estimated 60-80% of a data team’s time spent finding, preparing, and performing quality assurance on data for an initial data product covered one-time efforts that didn’t then need to be repeated for each new business case.
“Once organisations have access to existing, curated, trusted data products, they can stop rebuilding the same logic over and over again. Instead of spending weeks or months gathering, cleaning, and modelling data, teams can build on top of what already exists,” O’Keefe explains.
The McKinsey article also notes that: “At one international consumer company, the point when a data product enabled five use cases meant its projected cost was about 30% less than building individual data pipelines for five analytical solutions. When that data product was then scaled to another market, projected costs were about 40% lower. This cost reduction stemmed not only from the reuse of the standardised data product, but also from the experience that the data product team had accrued.”
The Automated Data Product Factory, part of the Quest Trusted Data Management Platform, delivers data products 54% faster than before, using AI and natural-language prompts to automate their creation. This enables organisations to scale data product development, cutting the time to create a trusted data product from weeks to days and saving significant cost – potentially millions of dollars.
Trust encompasses the entire lineage of data, not just whether datasets appear accurate
The platform’s role-based security features ensure that sensitive data products can only be reused by authorised parties. Automated data lineage capabilities track the journey of data from its source, through every transformation, to its final destination, while AI/ML-powered anomaly detection monitors data in real time and immediately alerts stakeholders to any issues, preventing ‘bad’ data from skewing results.
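Real-time anomaly detection of the kind described above can take many forms. A minimal, ML-free baseline (illustrative only, and far simpler than a production system) flags any incoming value that deviates sharply from a rolling window of recent history:

```python
# Illustrative sketch: flag values that sit more than `threshold`
# standard deviations from the rolling mean of recent data, a simple
# baseline for real-time 'bad data' alerts.
from collections import deque
import statistics

class AnomalyMonitor:
    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent values only
        self.threshold = threshold

    def check(self, value: float) -> bool:
        """Return True if value is anomalous versus recent history."""
        if len(self.history) >= 5:  # need a few points before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid divide-by-zero
            anomalous = abs(value - mean) / stdev > self.threshold
        else:
            anomalous = False
        self.history.append(value)
        return anomalous

monitor = AnomalyMonitor()
readings = [100, 101, 99, 100, 102, 98, 101, 100, 5000]  # last value is 'bad' data
flags = [monitor.check(v) for v in readings]
```

Here only the final reading is flagged, which is the point at which a stakeholder alert would fire before the bad value skews downstream results.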
“Business and technical teams can have a conversation with the data based on a use case or business problem they’re trying to solve,” says O’Keefe. “You simply type in your use case and your business challenge, and then AI will search for the existing curated data assets that you have in your catalogue, and present back the data products that it thinks you should leverage for that particular challenge. It’s also about reusability, not having to start from scratch every time.”
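In its simplest form, the catalogue search O'Keefe describes is a relevance match between a use-case prompt and data-product descriptions. The sketch below uses naive keyword overlap purely for illustration (real implementations would use embeddings or an LLM, and all names here are invented):

```python
# Hedged sketch: rank catalogued data products against a use-case prompt
# by keyword overlap. A toy stand-in for the AI-driven search described
# in the article; product names and descriptions are invented.
def search_catalogue(prompt: str, catalogue: list, top_n: int = 3) -> list:
    """Return names of the top_n products whose descriptions overlap the prompt."""
    prompt_terms = set(prompt.lower().split())
    scored = []
    for product in catalogue:
        terms = set(product["description"].lower().split())
        overlap = len(prompt_terms & terms)
        if overlap:
            scored.append((overlap, product["name"]))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:top_n]]

catalogue = [
    {"name": "customer_churn_features",
     "description": "curated customer churn and retention features"},
    {"name": "fraud_signals",
     "description": "transaction fraud detection signals"},
    {"name": "branch_footfall",
     "description": "branch visitor counts"},
]

results = search_catalogue("reduce customer churn", catalogue)
```

The query surfaces only the churn-related product, illustrating the reusability point: teams start from an existing curated asset rather than from scratch.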
Ultimately, the FS&I industry’s AI problem is clear: ambition is being hampered by a lack of trust in data and the speed and scale that AI requires. Managing organisational data through a unified platform not only makes it easier for firms to overcome this barrier and govern, trust and deploy AI at scale, but also ensures they are well-positioned to respond to future regulatory requirements and keep pace with competitors that are rebuilding their data foundations for the AI era. In other words, better data management is really the key to making recent investment in AI pay off.
Five foundations to build data AI can trust
To scale AI successfully, organisations must get their data foundations right. These key takeaways highlight how to build trust, governance and value at scale.
Trusted data extends far beyond traditional quality metrics. It requires comprehensive lineage, upstream governance and transparency about how data has been transformed and used. Without this foundation, AI could produce biased or false results, leading to compliance and reputational risks as well as missed business opportunities.
Leading firms are succeeding by embedding governance throughout the AI lifecycle rather than bolting it on as an afterthought. They’re investing in unified platforms that combine data quality, modelling, metadata management and governance, and shifting away from overlapping point solutions that create data silos and unnecessary operational complexity.
Reusable data products that can be certified once and deployed across multiple use cases can help firms to achieve trustworthy data and confidently scale autonomous AI agents, which require automated governance and real-time data trust to perform safely.
Governance can be improved in an incremental rather than disruptive way. FS&I firms can start with critical data in one department, establish clear data product ownership, and scale gradually from there.
By bringing technical and business users together around discoverable, trusted data products, organisations can bridge the gap between AI potential and measurable business value – scaling innovation without compromising speed, compliance or confidence in underlying data.
For more information please visit www.quest.com