
Schools, colleges, community hubs and workplaces across the UK will soon offer AI-skills courses to all citizens, as part of a government-led initiative to train one-fifth of UK workers in the technology.
Keir Starmer, the prime minister, has positioned AI at the centre of the government’s industrial strategy, with the aim of making the UK an “AI superpower”. To achieve that goal, the government has organised a nationwide skills drive to develop home-grown AI talent. It’s an ambitious plan. The aim is to equip 7.5 million citizens with the skills to use generative AI effectively, with local service providers selected to deliver the plans in all the UK’s nations and regions.
Many of the hyperscalers are involved in the Government-Industry AI Skills Partnership, as it is called. Microsoft, Amazon, Google and IBM have met with Peter Kyle, the UK tech secretary, to advise on the creation of AI training modules. Technologists, too, have praised the initiative.
Alexandra Dobra-Kiel, the innovation and strategy director at Behave, a consultancy, says the government’s efforts are timely and necessary. “Equipping people with the skills to engage with, rather than be displaced by, these technologies is crucial and the collaboration with major tech firms shows a serious commitment to preparing the workforce for a rapidly changing economy.”
However, half of low-income families do not have access to digital technologies, according to analysis by the Digital Poverty Alliance. What’s more, research suggests that half of working-age adults cannot complete all 20 of the foundational digital tasks deemed essential for modern work, and nearly 8 million UK adults have no basic tech skills at all. Is the government putting the cart before the horse?
Layered training essential for AI upskilling
Comfort and engagement with AI tools vary significantly across the population. To successfully upskill such a large proportion of citizens, therefore, providers must tailor their training modules appropriately, ensuring everything from foundational principles of AI to advanced uses is covered.
That’s according to Richard Giblin, head of public sector and defence at SolarWinds, an IT company. “Core digital literacy, coding and data skills need to be in place before we can meaningfully scale AI expertise across the workforce.” Once those fundamentals are mastered, he adds, organisations should take a layered approach to AI upskilling. At the most basic level, professionals must understand what AI can and can’t do, along with its practical applications and ethical implications.
Although GenAI is constantly improving, it remains prone to wild hallucinations and biases. And, because these systems often work to merely validate users’ hypotheses, in the most extreme cases, GenAI platforms have encouraged delusions among their users.
Those seeking to use the tech effectively, therefore, must first understand how GenAI actually works, says Stephan Reiff-Marganiec, head of computing and engineering at the University of Derby. AI providers are all too happy to let magical thinking swirl around their software. Some have even issued premature statements about machine sentience. But, he explains, GenAI tools are really “just clever pattern-matching [machines]”.
Foundations first
Nationwide upskilling initiatives must focus on helping users to “understand how the AI tools work and what’s sitting behind them – how they’re using the data that you’re putting in and how you can control that”, Reiff-Marganiec says. This means training people to use AI ethically and interrogate its outputs, which could help to boost critical-thinking skills across the population, an additional benefit of any AI-upskilling initiative.
Only when these basics are covered should courses proceed to advanced skills, such as prompt engineering, coding or other industry-specific uses. Here, too, learners should be encouraged to limit their use of AI to tasks it is best suited to handle, such as data collection or search.
According to Reiff-Marganiec, AI isn’t a shortcut; it’s a “tool that can guide you and give you ideas that you might want to look at. But you need to put your own understanding into it to push it in the right direction.”
Giblin agrees, adding that any organisation seeking measurable benefits from AI adoption must view the technology holistically and avoid treating it as a shiny new tool. “AI can’t operate in isolation,” he says. “We will still need IT professionals to provide the infrastructure, cybersecurity and oversight that underpin AI adoption, checking for accuracy, compliance and long-term sustainability. Building AI skills must go hand-in-hand with strengthening complementary IT disciplines.”
The government is right to prioritise AI, he says, but foundational skills must not be overlooked. AI training should focus on ethical and practical uses of the technology. Only then can we really “future-proof the workforce”, he argues.
Towards a public AI
It’s reasonable to expect any government-led upskilling initiatives to have the public good in mind. The tech secretary has been criticised for his perceived cosiness with Silicon Valley. And, while Dobra-Kiel acknowledges that big-tech firms should be included in the government’s efforts, she cautions that their role must be to contribute to the curriculum, not to control it.
“Big tech can provide valuable insights on real-world tools and rapidly evolving technologies,” she says. “So their involvement is essential for practical relevance. But allowing them to dictate the curriculum risks embedding commercial interests, tool-specific training and vendor lock-in, teaching people to use products, not principles. The curriculum should reflect public interest, not just corporate roadmaps.”
Because technology evolves so quickly, traditional training modules will inevitably fail to keep pace with the subject matter, Dobra-Kiel explains. Courses should therefore aim to develop durable learning habits rather than “ticking off toolkits”, she says.
Any initiatives must cut across functions and disciplines. “Treating AI as a purely technical subject is a mistake because AI touches law, ethics, design, politics and power,” says Dobra-Kiel. “We don’t just need more prompt engineers, we need teachers, careworkers and shop-floor staff who know how to question an algorithm’s bias or understand data privacy rights.”
She emphasises that publicly funded training must not stop at “glossy success stories”. Training providers should ask whether they are building capabilities or merely credentialling. “Accountability requires opening the black box of training outcomes, not just celebrating intentions.”
That might mean examining some uncomfortable data, but doing so can help educators to better understand who is benefitting from the training – and who’s being left behind.
