A fridge that knows the use-by date of food and appliances which switch off to economise on electricity are just the start of an internet-like network of machines increasingly entrusted to make our decisions, writes Tom Brewster
Understanding the societal impact of computing and testing the possibilities of machines with human-like intelligence have always been passions of Sir Nigel Shadbolt. Yet he recognises the threats concomitant with trusting too much in code, even if they seem like science fiction to some.
It’s only been 12 months since Sir Nigel was knighted for his services to science and engineering, but his work stretches back 30 years. Over that time, through his psychology and computer science research, he’s seen the inexorable spread of the internet as a force for change and is now keeping a watchful eye on the so-called internet of things.
This will see the spread of connected, automated devices, largely operating on their own, supposedly for the benefit of the general public. They will be powered by cloud-based systems, again consisting of collections of highly automated machines, spread across global data centres, with the ability to deal with massive fluxes of traffic – something the growing pool of connected things is expected to deliver. For the average home user, this means being able to let computers decide how best to manage their energy use to save money, or having their fridge send alerts when groceries have reached their use-by date.
But Sir Nigel believes the most successful internet of things projects will initially benefit the emergency services and urban planning groups, as they can take advantage of open data streams. He has been impressed by one initiative using a variety of information sources to place ambulances as close to likely incidents as possible and expects cities to get greener with more efficient energy usage thanks to automated controls.
In a bid to further the benefits of the web for the common man, Sir Nigel and his team at the University of Southampton, where he is a professor of artificial intelligence (AI) and head of the Web and Internet Science Group, are working on the study and practice of social machines (SOCIAM). The project will determine how to develop distributed, crowd-powered systems that have the potential for profound impact on individuals, businesses and governments. “We want to make that a routine way in which business is done,” he says.
He also founded the Open Data Institute (ODI) with the forefather of the world wide web, Sir Tim Berners-Lee. The purpose of the ODI is to encourage government and businesses to open up sources of data for the public good. Yet Sir Nigel believes this idea of openness needs promoting across other areas to ensure the internet continues to bring benefits to a wide audience, whether via web browsers, the cloud or connected “things”.
In particular, in a “post-Snowden world” and one in which a handful of companies have massive power over the way the web works, he worries about excessive control over the internet. He frets about “intrusive and exclusive control by any agency”, whether a state agency, such as the US National Security Agency, or an organisation on the scale of Facebook and Google. Despite the intrusions on privacy such entities have brought, Sir Nigel is still hopeful. “The thing that depresses me is when people just sit on their hands and say privacy is dead, get over it. It’s entirely in the hands of our society,” he says.
His answer is to build accountability into the internet, by having tracking working for the average user, rather than against them. “We’ve got more computing power than ever; some of it should be devoted to this issue of tracking for our benefit,” he adds. “The way you can do that is doing what’s called ‘accountable computing’, where there’s a trace associated with the flow of data in these systems about where it’s been, who has had access to it.”
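The idea of a trace that follows data around can be sketched in a few lines of code. This is only an illustration of the principle Sir Nigel describes, not any real accountable-computing framework; the class and field names here are invented for the example:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrackedRecord:
    """A piece of personal data that carries its own audit trail."""
    payload: dict
    trace: list = field(default_factory=list)

    def access(self, accessor: str, purpose: str) -> dict:
        # Every read is appended to the trace, so the data's owner can
        # later see where it has been and who has had access to it.
        self.trace.append({
            "accessor": accessor,
            "purpose": purpose,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return self.payload

record = TrackedRecord(payload={"postcode": "SO17 1BJ"})
record.access("energy-supplier", "tariff calculation")
record.access("ad-network", "profiling")

for entry in record.trace:
    print(entry["accessor"], "->", entry["purpose"])
```

In a real system the trace would need to be tamper-evident and enforced below the application layer, but the shape is the same: the record of who touched the data travels with the data itself.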
Those building the architecture of the internet also need to be wary of granting machines too much decision-making power through AI. “You have to keep asking yourself, if we keep granting autonomy to these systems to take decisions on our behalf, do we understand the full range of their responses and the side-effects that might have?
“Fundamentally we have to ask at every point where we’re delegating decision-making authority, do we know how to take it back and do we understand the limits of that authority? That’s really crucial,” he says.
While cloud systems have seen failures, for example when Google’s Gmail goes down or Amazon Web Services hosting collapses, causing websites to go dark, they still work most of the time. As long as machines are coded responsibly, these systems will continue to operate adequately, says Sir Nigel, and the same goes for other, more contentious technologies, such as weaponised military drones. “You have to put those rules of subservience into the fundamental software systems,” he says.
Sometimes the code giving machines their instructions does get out of control, so much so that humans cease to understand how they work. “We’ve got this very interesting area of AI called genetic algorithms where you essentially evolve programs,” Sir Nigel says. “Those programs can do things that you stare at as a designer for hours and hours to work out how it’s doing what it’s doing.
“There’s a very good example in electronic design where they had a program to design oscillators and amplifiers, simple electronic circuits. They found some of these designs that the genetic algorithms had evolved and nobody could make any sense out of them. The system had learnt to take advantage of really peculiar impurities and facets of the hardware and the materials that you would never design for as a human designer. It’s fascinating, but it’s kind of spooky.”
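The technique Sir Nigel describes can be shown on a toy problem. The sketch below evolves a string of bits towards all ones (the classic “OneMax” exercise); the population size, mutation rate and other parameters are arbitrary choices for illustration, not drawn from the circuit-design work he mentions:

```python
import random

random.seed(0)

TARGET_LEN = 20  # goal: a genome of 20 ones

def fitness(genome):
    # Fitness is simply the number of ones in the genome.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    # Splice two parent genomes at a random point.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

# Start from a random population, then repeatedly select the fittest,
# recombine them and mutate the offspring.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[:10]  # elitism: the fittest third survives
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

print("best fitness:", fitness(population[0]))
```

Here the fitness function is transparent, so the result is easy to interpret. The “spooky” cases Sir Nigel describes arise when fitness is measured against real hardware: evolution is then free to exploit physical quirks no human designer would think to use.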
Yet fears of the fictional Skynet, of a world in which machines have taken over, are far-fetched. It should be remembered that humans often make fatal mistakes. In many cases, we should trust machines more than an individual with free will and capacity for error, Sir Nigel says.
He concludes: “What we do know is that, in lots of routine kinds of automation, the error rates are much less than when you’ve got human operators there. That’s just a sad fact. People make mistakes more often than our machines do.”