Will the new national strategy make the UK an AI superpower?
Westminster’s new AI strategy is a step in the right direction, but there are hurdles – particularly concerning regulation, data-sharing and skills – that could hinder the UK’s progress
In the global AI investment, innovation and implementation stakes, the UK lies in a creditable third place. Trailing the US and second-placed China, it holds a slight lead over Canada and South Korea, according to the Global AI Index published in December 2020 by Tortoise Media. The moral of Aesop’s most famous fable involving a tortoise may be ‘more haste, less speed’, but Westminster is seeking to hare ahead in this race over the coming decade. Its national AI strategy, published in September 2021, is a 10-year plan to make the country an “AI superpower”. But what does that mean exactly?
Although Westminster has already poured more than £2.3bn into AI initiatives since 2014, this strategy will accelerate progress, promises Chris Philp, minister for technology and the digital economy at the Department for Digital, Culture, Media and Sport.
“It’s a hugely significant vision to help the UK strengthen its position as a global science superpower and seize the potential of modern technology to improve people’s lives and solve global challenges such as climate change,” he declares.
The Croydon South MP explains that the strategy has three main aims. These are to ensure that the country invests in the long-term growth of AI; that the technology benefits every sector of the economy and all parts of the country; and that its development is governed in a way that protects the public and preserves the UK’s fundamental values while encouraging investment and innovation.
“We have heard repeatedly from people working in and around AI that these issues are entirely connected,” says Philp, hinting at the complexity of the task at hand.
What will life be like for people living and working in an AI superpower? “There are huge opportunities for the government to capitalise on this technology to improve lives,” he says. “We can deliver more for less and give a better experience as we do so. For people working in the public sector, it could mean a reduction in the hours they spend on basic tasks, which will give them more time to find innovative ways of improving public services.”
Philp continues: “For businesses, we want to ensure that there are clear rules, applied ethical principles and a pro-innovation regulatory environment that can create tech powerhouses across the country.”
AI will also be crucial in helping the UK to meet its legal obligations to achieve net-zero carbon emissions by 2050. Pleasingly for Philp, progress is already being made in this field. He notes that the Alan Turing Institute has been “exploring AI applications that could help to improve power storage and optimise renewable energy deployment by feeding solar and wind power into the national grid”.
The strategy has been generally well received in the tech world, with most people acknowledging that it’s an important step in the right direction. But some experts have identified a few potential shortcomings.
Peter van der Putten is assistant professor of AI and creative research at Leiden University and director of decisioning and AI solutions at cloud software firm Pegasystems. He is “encouraged to see a shift from broad strategic statements to more concrete, action-oriented recommendations”, but he would have preferred to see a more complete ethical framework for AI application.
“A large portion of the document focuses on AI governance, but it appears that a lot of the emphasis is still on analysis, discussion and policy-making. There is less on proposing hard legislation or determining which authority will be accountable for governance,” van der Putten explains. “This is an area in which the UK will need to accelerate, given that both the EU and China have made relatively concrete proposals for the regulation of AI recently.”
Liz O’Driscoll is head of innovation at Civica, a supplier of software designed to improve the efficiency of public services. She believes that the UK has “made great progress so far, with many organisations starting to embrace data standards and invest in data skills. But the artificial elephant in the room is human resistance to data-sharing. Privacy remains crucial, especially when it comes to citizens’ information, but wider uncertainty about issues such as regulation, public perception and peer endorsement will also prompt many in the public sector to play it safe with AI.”
There are some encouraging signs that people’s general reservations about data-sharing are softening, thanks to the success of collaborative AI solutions during the Covid crisis, O’Driscoll adds.
“Sharing data has been essential in our defence against the virus. It has enabled key public services to stay focused on people who are most at risk,” she says. “Success stories have entered the public domain, so we need to make the most of these cases and continue driving further positive change.”
It’s clear that more education about the benefits of data-sharing and work on AI ethics are required, but could a shortage of recruits prove to be the most significant challenge for the national AI strategy? A survey published by Experian in September indicates that more than two-thirds (68%) of UK students wrongly believe that they would need to earn a STEM qualification to stand a chance of landing a data-related job.
Dr Mahlet Zimeta, head of public policy at the Open Data Institute, thinks that the widely held view that “the UK needs to produce more people who can code” is unhelpful at best.
“Although improving data literacy is important, we’re going to need a much broader range of skills, including critical thinking,” she argues. “Leaders require a change of mindset to maximise the potential of AI. At the moment, it feels as though no one wants to be the first mover, but this is why experimenting and being transparent about the results will drive progress.”
From the government’s perspective, Philp urges both “students and businesses to equip themselves with the skills they’ll need to take advantage of future developments in AI”. For employers, this will include ensuring that their staff “have access to suitable training and development opportunities”, he adds, pointing out that the government’s online list of so-called skills bootcamps is an excellent place to start.

Tortoise Media’s Global AI Index ranks the UK fourth in the world on its supply of talent and third for the quality of its research. The country is a relative laggard in terms of both infrastructure (19th) and development (11th), so there is plenty of ground to make up on both the US and China. The national AI strategy suggests that some haste will be required if the UK is to even keep these rivals within its sights. Ultimately, though, if all goes to plan, humanity stands to win.