
In an increasingly technology-driven world, business leaders face growing pressure to deliver on the promise of artificial intelligence. Yet, many initiatives stall, with businesses struggling to move from complex and siloed data environments to trusted, scalable AI systems that deliver real impact.
We know that a staggering 95% of corporate generative AI projects fail to deliver, and for many the problem isn’t technology, but data. MIT points out that only a tiny fraction of enterprise data is AI-ready. Not only is this slowing businesses’ ability to keep pace with technological developments, but it spells trouble for their balance sheet too, with Deloitte estimating that 80% of companies suffer income loss due to poor data quality.
What lessons can businesses learn when it comes to mastering data complexity? Here, three experts from the financial services and insurance sector offer their insight and experience.
As chief data officer, one of the most important lessons I’ve learned is that AI scale-up doesn’t start with models; it starts with data readiness.
To be truly AI-enabled, companies must ensure their data is reliable and well-governed. Governance is shifting from manual, document-heavy processes to automated, embedded controls, effectively treating governance as code. As innovation accelerates, governance, compliance and model monitoring are increasingly built into the technology itself rather than managed separately.
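The idea of treating governance as code can be sketched very simply: controls that once lived in documents are declared alongside the data and enforced automatically. The sketch below is illustrative only; the class and rule names are hypothetical, not drawn from any specific platform.

```python
# A minimal "governance as code" sketch: a dataset's controls are declared
# in code and checked automatically, rather than tracked in documents.
# All names (GovernancePolicy, the rules, the example values) are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernancePolicy:
    classification: str   # e.g. "public", "internal", "confidential"
    owner: str            # accountable team or role
    pii: bool             # does the dataset contain personal data?
    retention_days: int   # how long records may be kept

ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential"}

def validate(policy: GovernancePolicy) -> list[str]:
    """Return a list of violations; an empty list means the policy passes."""
    violations = []
    if policy.classification not in ALLOWED_CLASSIFICATIONS:
        violations.append(f"unknown classification: {policy.classification}")
    if policy.pii and policy.classification == "public":
        violations.append("PII data cannot be classified as public")
    if policy.retention_days <= 0:
        violations.append("retention period must be positive")
    return violations

policy = GovernancePolicy("public", "crm-team", pii=True, retention_days=365)
print(validate(policy))  # flags the PII/public conflict
```

Because checks like these run in the pipeline itself, a non-compliant dataset is caught before it reaches a model, which is what "embedded controls" means in practice.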
Simplifying complex data is also key as companies look to make AI scalable. Simplification begins with strong classification and shared definitions. Clearly identifying what data exists, where it resides, and how it should be used eliminates ambiguity. Structuring information into governed data products reduces reliance on competing “sources of truth,” creating a trusted foundation for both people and AI systems.
Similarly, trusted, well-documented data products allow teams to confidently leverage data for analytics or AI initiatives without repeating foundational work. Reusable data products modularise and organise data to enable greater self-service across the organisation. When properly contextualised, with clear definitions, common language, and quality standards, they expand how data can be applied.
Ultimately, when deployed effectively, data products support faster testing and learning, enabling new AI capabilities to move into production with less incremental engineering effort.
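The notion of a data product described above — a dataset bundled with its definition, owner, and quality standards — can be sketched as follows. Everything here (the `DataProduct` class, the sample checks, the field names) is a hypothetical illustration of the pattern, not a reference to any particular tool.

```python
# Illustrative sketch of a reusable data product: data packaged with its
# shared definition, a single accountable owner, and quality checks, so
# consumers can build on it without redoing foundational work.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    name: str
    description: str   # shared definition / common language
    owner: str         # accountable "source of truth"
    records: list[dict]
    quality_checks: list[Callable[[list[dict]], bool]] = field(default_factory=list)

    def is_trusted(self) -> bool:
        """The product is publishable only when every quality check passes."""
        return all(check(self.records) for check in self.quality_checks)

orders = DataProduct(
    name="orders",
    description="One row per confirmed customer order",
    owner="sales-data-team",
    records=[{"id": 1, "amount": 120.0}, {"id": 2, "amount": 80.0}],
    quality_checks=[
        lambda rows: len(rows) > 0,                              # not empty
        lambda rows: all(r["amount"] >= 0 for r in rows),        # no negative amounts
        lambda rows: len({r["id"] for r in rows}) == len(rows),  # unique ids
    ],
)
print(orders.is_trusted())  # True for this sample data
```

The point of the pattern is that downstream teams consume `orders` through its declared contract rather than re-deriving the data, which is what makes faster testing and learning possible.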
During a large enterprise transformation I helped lead, we invested in building a robust library of AI agents and LLM-powered tools to support consultants with research, scheduling, and other workflows. The tools were technically strong and thoughtfully designed. However, adoption initially lagged. When we stepped back to diagnose the issue, we realised the constraint wasn’t the AI itself – it was the underlying data. It wasn’t structured or contextualised in a way the tools could reliably act on. That insight shifted our focus. We prioritised reformatting, organising, and enriching the data layer so the AI systems could operate with clarity and consistency. Once we strengthened that foundation, adoption and impact followed.
The broader takeaway for organisations beginning their AI journey is this: scalable AI requires usable, trusted, well-structured data from the outset. Advanced models can amplify value, but only if the data foundation is ready to support them.
An AI system is only as strong as the data it learns from, but user trust and ethical use are just as essential.
We embed both transparency and trust in our AI rollout through a data- and human-centric approach. We focus on diverse and representative datasets to improve accuracy, and for both internal and purchased solutions, we define risks, outcomes and intended use, ensuring due diligence.
Ensuring data is sufficient and quality-checked is fundamental for training an AI model. There is an opportunity to leverage previously cleansed and labelled data through reusable data products to speed up AI deployment and make it more cost-effective and uniform across initiatives.
Our transformation initiatives are helping us develop platforms to manage the operationalisation of AI solutions and systematically improve data granularity. By managing data through operationalised systems, we are able to implement data security, data ownership, quality checks, governance and controls systematically and consistently.
But AI adoption, at its core, comes down to people and adaptability. To support this, we have focussed on building AI skills within the organisation by offering regular training, encouraging hands-on use, and forming cross-functional teams to bring diverse perspectives to solution development. This is coupled with a focus on experimentation, enablement and operationalisation to help accelerate scaling.
Lastly, in a world where the AI landscape is rapidly evolving, data remains the key differentiator. Curating accurate, accessible datasets is foundational for scaling AI and driving real productivity gains and business outcomes.
To move beyond fragmented data environments, you first need clarity of ownership and a system architecture that permits assembling data from multiple sources. We’ve focused on creating a unified, well-governed data platform rather than allowing siloed tools to proliferate.
Standardised data models ensure data is consumable by AI, and reliable infrastructure ensures the platform is secure and scalable. AI systems are only as strong as the data foundations beneath them.
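One way to picture how a standardised data model makes siloed data consumable is a normalisation step that maps each source system's field names onto one agreed schema before anything downstream touches the data. The field names and mappings below are purely illustrative.

```python
# Minimal sketch of a standardised data model: records from different
# source systems are renamed to one agreed schema, and a record that
# cannot satisfy the schema is rejected before it reaches any AI system.
# The standard fields and source mappings are hypothetical examples.

STANDARD_FIELDS = {"member_id", "scheme", "balance"}

def normalise(record: dict, field_map: dict[str, str]) -> dict:
    """Rename source-specific fields to the standard model and verify
    that every standard field is present."""
    standard = {field_map.get(k, k): v for k, v in record.items()}
    missing = STANDARD_FIELDS - standard.keys()
    if missing:
        raise ValueError(f"record missing standard fields: {sorted(missing)}")
    return {k: standard[k] for k in STANDARD_FIELDS}

# Two siloed systems use different field names for the same concepts.
legacy = normalise({"mbr": "A1", "plan": "DC", "bal": 1000},
                   {"mbr": "member_id", "plan": "scheme", "bal": "balance"})
modern = normalise({"member_id": "A2", "scheme": "DB", "balance": 2500}, {})
print(legacy["scheme"], modern["scheme"])
```

Once every source conforms to the same model, AI and analytics consumers can be written once against the standard schema instead of once per silo.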
Trust and governance must be built in from the outset, not retrofitted later. That means strong access controls, transparent audit trails and continuous monitoring of data quality. Clear accountability for datasets and models, alongside rigorous testing and human oversight, is vital. In a regulated industry like pensions, explainability and security are non-negotiable – they’re central to maintaining customer confidence.
Reusable data products are critical to accelerating AI. By treating datasets, pipelines and models as reusable assets rather than one-off projects, we reduce duplication and increase speed. A use-case specific dataset makes it easier for AI to draw relevant inferences, and well-defined data products with clear documentation and ownership allow teams to innovate safely and efficiently, building new AI use cases on trusted foundations.
Platform convergence also plays a significant role. Consolidating tooling reduces operational complexity, lowers cost and improves interoperability. It allows engineering and data teams to collaborate more effectively, focusing on delivering customer outcomes.
Our key lesson is to prioritise fundamentals. Invest in data quality, governance and culture before chasing advanced AI use cases. Particularly culture. As a fast-moving and disruptive technology, you need everyone to share a similar appetite and vision. Start with clear, practical problems where AI can deliver measurable value, and scale from there. Above all, keep the focus on customer benefit – technology should enhance trust, transparency and simplicity, not add complexity.