
When Lee Holmes, the chief executive of Infinox, a fintech and trading platform, was recruiting for a senior leadership role, he passed on one candidate with previous experience as a CIO because of their approach to AI adoption. The problem was not that they lacked confidence or ambition with AI – quite the opposite.
“They spoke almost exclusively about AI,” Holmes recalls. “Their entire vision for transformation, productivity and even culture was pinned to it.”
He wanted instead a technology leader who saw AI as part of a broader digital ecosystem, alongside data strategy, cybersecurity, automation and so on. “When every solution begins and ends with AI, you risk putting all your eggs in one basket,” he explains.
Holmes’ experience is certainly not unique. Recent research by Riverbed, an IT services company, found that businesses might be getting ahead of themselves when it comes to AI. Of the 1,200 IT leaders surveyed, 87% said they had already seen a return on investment from AI initiatives, even though only 12% of projects had been fully deployed and 62% were still at the pilot stage.
Simon Noble, CEO of Cezanne HR, blames misplaced expectations on “executives being punch drunk on the AI gold rush”. CIOs are keen to “tell the C-suite an AI story” and are mistaking prototypes for finished products. “The cost-savings narrative is premature,” Noble says. “CIOs are signing off on big projects with even bigger budgets, assuming AI will slide neatly into processes and start saving money overnight – it won’t.”
The governance gap
The rush to invest in AI is exacerbating the AI-governance gap, according to a report by the British Standards Institution (BSI), published in October. Whether this lack of oversight is the cause or the result of overconfidence is unclear, but the UK standards body warns that AI won’t be a “panacea for sluggish growth [and] productivity” unless guardrails are established.
Meanwhile, a recent AI pulse survey by EY found that CIOs’ high confidence in their responsible AI practices is often misplaced. C-suite executives who hadn’t yet fully implemented AI expressed greater concern (47%) over the privacy and security of their AI systems than did those who had fully integrated AI (21%).
Confidence in the technology may therefore develop as AI adoption matures. But, according to the EY report, some CIOs may simply be unaware of which guardrails are or are not in place.
Make sure your data is in order
Seb Kirk, CEO of GaiaLens, an AI solutions platform, isn’t surprised by these findings. Overconfidence often stems from “misunderstanding what AI actually does and, more importantly, what is required to create value”, he says. CIOs often underestimate the challenges of ensuring regulatory compliance or aligning AI with existing data structures and governance frameworks.
The BSI research echoes this. Only four in 10 of the 850 business leaders surveyed have clear processes in place around the use of confidential data in training, while just 28% know the sources of the data used to train AI tools.
Tech chiefs should take a “data-first, deployment-second” approach, says Kirk. To operate safely and effectively, AI systems require clean, consistent and high-quality data. CIOs therefore must develop clear, GDPR-compliant policies around data privacy, collection, security and storage. “They need to establish clear accountability and ensure that every automated decision can be traced, audited and explained,” advises Kirk.
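Kirk’s point about traceability can be made concrete. The following is a minimal sketch, not any vendor’s actual tooling: the record fields and the log_decision helper are illustrative assumptions about what capturing an auditable, explainable trail for each automated decision might look like.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json
import uuid


@dataclass
class DecisionRecord:
    """One auditable record per automated decision (illustrative schema)."""
    model_version: str           # which model produced the output
    input_sources: list[str]     # lineage: where the input data came from
    output_summary: str          # what the system decided or recommended
    reviewer: str | None = None  # human owner accountable for the decision
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "decision_audit.jsonl") -> None:
    """Append the record to an append-only log so the decision can be traced later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record.__dict__) + "\n")


# Example: record a scoring recommendation together with its data lineage.
log_decision(DecisionRecord(
    model_version="scoring-model-2025-10",
    input_sources=["crm.customers", "payments.history"],
    output_summary="flagged application #1042 for manual review",
    reviewer="risk-ops",
))
```

The design choice here is simply that every automated output carries its data lineage and an accountable owner, which is what makes decisions traceable, auditable and explainable in the sense Kirk describes.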
Secure your systems
Randolph Barr, information security chief at Cequence Security, a software company, agrees that organisations must establish clear governance and security frameworks before deploying AI tools.
He recalls his own experience working as a CIO: “One of the biggest problems we had to deal with before the AI [adoption] surge was enhancing our IT stack so that automation and integration could boost productivity. The same applies now, but the stakes are much higher.”
Barr advises CIOs to use ‘AI gateways’ – centralised platforms through which to manage, govern and secure corporate tools, APIs and large language models. “Gateways enable CIOs to define standard rules and observe how AI is used across their whole company ecosystem. This is better than having each team wire AI into their own workflows,” he adds. A gateway can also give CIOs assurance that their AI systems are performing reliably and securely.
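For illustration only, here is a minimal sketch of the gateway idea Barr describes: a single choke point that applies shared policy (here, a naive redaction check and usage logging) before any team’s request reaches a model. The call_model stub and the policy rules are assumptions for the sketch, not Cequence’s or any vendor’s actual API.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-gateway")

# Centralised policy: one place to define what may reach an external model.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),        # naive card-number check (illustrative)
    re.compile(r"(?i)confidential"),  # naive confidentiality marker
]


def call_model(prompt: str) -> str:
    """Stand-in for the real LLM provider call (assumption, not a real API)."""
    return f"model response to: {prompt[:40]}..."


def gateway_request(team: str, prompt: str) -> str:
    """Every team routes requests here, so usage is governed and observable."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            log.info("blocked request from %s (policy match)", team)
            raise ValueError("Request blocked by central AI policy")
    log.info("forwarding request from %s (%d chars)", team, len(prompt))
    return call_model(prompt)


# Example: two teams share the gateway instead of wiring AI into their own workflows.
print(gateway_request("marketing", "Draft a product announcement"))
try:
    gateway_request("finance", "Summarise this CONFIDENTIAL board paper")
except ValueError as exc:
    print(exc)
```

The point of the pattern is the single entry point: because all requests pass through one layer, the rules can be changed once and usage across the whole ecosystem remains observable.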
Fill the skills gaps
No matter how secure systems are, the long-term success of any AI project will depend on the ability of employees to use AI tools effectively.
According to CIO.com’s 2025 State of the CIO report, published in May, 36% of CIOs planned to increase AI hires over the next six to 12 months. But many were concerned about the availability of candidates with the desired expertise in AI, cybersecurity and data science and analytics.
The onus, then, is on CIOs to build out their IT talent pipeline through reskilling and upskilling. To start such an initiative, Claus Jepsen, tech chief at Unit4, an enterprise cloud company, recommends conducting a skills audit. “Doing so will help CIOs to build out their team’s skills, fill the gaps and qualify what new expertise they must add to their organisation,” says Jepsen.
The right talent will be able to interrogate data and ensure that suitable quality, governance and security measures are in place, he adds. Those foundations may be enough to justify CIOs’ confidence that their AI systems are being implemented responsibly.