Could the pandemic have been predicted?

Governing in advance may seem like something from science fiction, but by using artificial intelligence and predictive analytics, experts say it’s possible

When the coronavirus pandemic hit UK businesses in the spring, forcing organisations to lock down, it required open minds to grasp technology and reimagine ways of working. Government and the public sector sought to solve challenges old and new, including rushing through essential financial support to companies and their furloughed staff, and improving service delivery and data-driven decision-making by dialling up investment in tech, especially artificial intelligence (AI).

After all, with predictive analytics, governments can conceivably prevent issues rather than cure them, and respond to citizens’ needs before they arise. But how far off are we from governing in advance? And what are the ethical implications of such a system?

Around the world, there are numerous narrow-scope use cases of authorities using predictive analytics to life-saving and life-enhancing effect. In Durham, North Carolina, the police department reported a 39 per cent drop in violent crime from 2007 to 2014 after using AI to observe patterns and interrelations in criminal activities and to identify hotspots, thus enabling quicker interventions.
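To give a flavour of what hotspot identification involves, here is a deliberately simple sketch: incident coordinates are binned into a grid, and cells with an unusually high incident count are flagged. The coordinates, cell size and threshold below are invented for illustration; real systems such as Durham’s use far richer models.

```python
from collections import Counter

def find_hotspots(incidents, cell_size=0.01, min_count=3):
    """Bin incident (lat, lon) points into grid cells and flag
    cells whose incident count meets the threshold."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return {cell for cell, n in counts.items() if n >= min_count}

# Fabricated example: three incidents clustered together, one outlier
incidents = [
    (35.994, -78.898), (35.991, -78.902), (35.989, -78.899),
    (36.100, -78.700),
]
hotspots = find_hotspots(incidents)  # one grid cell flagged
```

In practice, the value of such analysis lies less in the counting itself than in enabling quicker, better-targeted interventions, as the Durham example suggests.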

Also in the United States, AI has helped reduce human trafficking by locating and rescuing thousands of victims. Knowing that approximately 75 per cent of child trafficking involves online advertisements, the Defense Advanced Research Projects Agency developed a platform using software that monitors suspicious online ads, detects code words, and infers connections between them and trafficking rings.

Further afield, the Indonesian government has partnered with a local tech startup to better predict natural disasters. By analysing historical flood data collected from sensors, alongside citizen-complaint data, the partnership can now quickly identify flood-prone areas, speeding up the emergency response and improving disaster management.
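One simple way to flag flood-prone areas from historical sensor data is to check how often each area’s water-level readings exceeded a flood threshold. The sketch below is purely illustrative: the district names, readings, threshold and exceedance rate are all invented, not taken from the Indonesian system.

```python
def flood_prone_areas(readings, threshold=150, min_exceed_rate=0.2):
    """Flag areas whose water-level readings (cm) exceeded the flood
    threshold in at least min_exceed_rate of recorded observations."""
    prone = []
    for area, levels in readings.items():
        exceed = sum(1 for lvl in levels if lvl > threshold)
        if levels and exceed / len(levels) >= min_exceed_rate:
            prone.append(area)
    return prone

# Fabricated historical water-level readings per district
readings = {
    "District A": [120, 180, 200, 90, 160],   # frequently exceeds threshold
    "District B": [80, 95, 100, 110, 105],    # never exceeds threshold
}
prone = flood_prone_areas(readings)  # flags "District A" only
```

Combining such sensor-derived flags with citizen-complaint data, as the article describes, helps corroborate where interventions are most urgently needed.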

Actionable intelligence and data scientists needed

In the UK, the public sector has much work to do, and requires people to do it, if governing in advance is to become a reality, says David Shrier, adviser to the European Parliament in the Centre for AI. “More investment in predictive analytics will help with risk mitigation, although this exacerbates the already extant shortage of data scientists who can develop and manage these models.”

Predicting trends through data analysis is vital for governments and has been for some time. “Forecasting approaches using historical data to build mathematical predictive models have been core to government economic policy for decades,” says Andrew Hood, chief executive of Edinburgh-headquartered analytics consultancy Lynchpin. “Whether those models allow governments to govern in advance effectively depends on to what extent they have enough political motivation and capital to apply the model outputs directly.

It’s too tempting to see predictive analytics as a magical answer, a black box that can solve all our challenges

“Arguably, there has been no shortage of predictive models kicking around as the pandemic took hold. However, the pandemic also points to the reality of a lot of prediction and forecasting: it is not about having one crystal ball to rely on, rather a set of predictions based on the best data to hand that need to be reviewed constantly, updated and critically applied.”

Hood stresses that skilled humans must remain in the driving seat and warns of the dangers of solely relying on technology to steer choices. “As with any application of predictive analytics,” he says, “it is the integration of those models within the context of governing and the processes of human decision-making that is the critical success factor.”

Public trust in AI must be won

Futurist Tom Cheesewright, whose job is to predict trends, posits that predictive analytics is “one subset of a wider array of foresight tools for scanning near and far horizons”. Should governments be making better use of such tools? “Absolutely,” he answers. “But I think it’s too tempting with predictive analytics to see this as a magical answer, a black box that can solve all our challenges. It’s not like Minority Report-style predictive justice. It’s about pulling policy levers in time to dodge obstacles or maximise opportunities.”

Echoing Hood’s advice, Cheesewright adds: “Foresight needs time and investment of cash and political capital, both of which are in short supply in our volatile, post-austerity era.”

Nick McQuire, senior vice president and head of enterprise research at specialist technology market intelligence and advisory firm CCS Insight, says: “Historically, the public sector has been behind most sectors in terms of maturity in deploying and investing in AI.” But he senses the purse strings are being loosened. “We are starting to see more AI applications in the public sector: chatbots, contact centre assistance and demand forecasting.”

AI has been excoriated in the UK media this year, though, making citizens and politicians wary of the tech and, by extension, predictive analytics. “Public confidence in AI is not high,” McQuire concedes. “To build trust in AI, organisations are now having to double down on areas like data governance and security, privacy, explainability and ethics.”

It didn’t help that prime minister Boris Johnson, the most powerful politician in the UK, blamed the Ofqual exam-marking fiasco in August on “a mutant algorithm”, says Dr Jeni Tennison, vice president and chief strategy adviser at the Open Data Institute. “We have to recognise people are at the heart of designing algorithms; it’s not that algorithms go off and mutate on their own and we have no control over them,” she says. “We need to ensure there is a good end-to-end process that recognises the AI isn’t always going to get things right.”

Tennison, a fervent supporter of open data, believes those in the public sector must take care over how they deploy the technology. And, as such, predictive analytics, if applied, should be closely managed. “Algorithms that are used by the public sector have a much bigger impact on people’s lives. Government has a particular responsibility to make sure it uses AI and data well,” she says.

“Right now we’re operating from a position where people distrust the use of algorithms. The public sector has to be very proactive and win that trust.”

Given the public scepticism around AI, and the paucity of data scientists to make best use of predictive analytics, it seems we are some way off the UK governing in advance. Ethically, perhaps that is no bad thing.