Learning to live with robots

It is hard to think of the words “artificial intelligence” without conjuring up doomsday images of The Matrix and The Terminator, in which man and highly intelligent machine are pitted against each other in battle. Even a step back from that science-fiction precipice, the term is conflated with massive job losses and the eventual irrelevance – or liberation – of humankind from labour as we know it.

Artificial intelligence, or AI, is of course already all around us in obvious ways – Apple’s voice-recognition service Siri or Google’s increasingly reliable search results – and in more obscure ones, such as better weather forecasting and lower levels of spam e-mail in your inbox.

Success will be measured in return on investment, new market opportunities, diseases cured and lives saved

There is nothing new about the concept of AI, which started to gain traction in the 1950s when Alan Turing explored the notion of machines that could think. J.C.R. Licklider’s 1960 paper Man-Computer Symbiosis may have sounded like something penned by sci-fi writer Philip K. Dick, but it was in fact a formative paper on how the world would move beyond programmable computers to one where computers “facilitate formulative thinking”.

Welcome to the “cognitive era”

This second age of machine-learning is what IBM now calls the “cognitive era”, which began when Watson, IBM’s cognitive computing system, won the US game show Jeopardy! in 2011. Yet even then, companies such as the UK’s Autonomy had been applying specialist algorithms to vast swathes of unstructured data for a plethora of purposes.

The world has been moving in the direction of cognitive computing for years – think autonomous cars, fraud detection systems in banks and complex trading systems that can act faster than human traders – so it is no surprise some companies have started to appoint chief AI officers with one eye clearly on the future.

Driverless cars in Milton Keynes are just one example of cognitive computing already in use

Dr John Kelly III, senior vice president of IBM Research and Solutions, says: “The success of cognitive computing will not be measured by Turing tests or a computer’s ability to mimic humans. It will be measured in more practical ways, like return on investment, new market opportunities, diseases cured and lives saved.”

AI has become a mainstream issue partly as a result of investment in so-called big data technologies. Gartner, the research company, calculates that the world’s information is set to grow by a colossal 800 per cent over the next five years. That will have statisticians salivating until you factor in that 80 per cent of this morass of data will be unstructured – e-mails, images, sounds – all of which can be analysed, but not by a conventionally programmed computer.

“This data represents the most abundant, valuable and complex raw material in the world. And until now, we have not had the means to mine it,” says Dr Kelly. He believes it will be genomics companies looking for better ways to tackle cancer, oil and gas companies looking to improve the accuracy of exploratory drilling, and billion-dollar businesses looking to speed up relations with their thousands of suppliers that will drive this revolution.

“In the end, all technology revolutions are propelled not just by discovery, but also by business and societal need. We pursue these new possibilities not because we can, but because we must,” he says.

Realising the potential of AI

It has not gone unnoticed that businesses of all stripes are investing in AI, and London appears to be thriving. Google may have paid £400 million for the UK’s DeepMind – a company whose work is so complex that it is hard to say exactly what Google wanted it for – but smaller investments are working their way down the chain.

Imperial Innovations, the university fund, has pumped £1.5 million into a robotics startup called Telectic, while MasterCard has thrown its weight behind Rainbird, a startup that believes it can help banks, insurers, retailers and music producers deepen relationships with their customers by better understanding human behaviour. Meanwhile, Stratified Medical, a startup using AI to improve insight into the development of new pharmaceuticals, has just appointed Professor Jackie Hunter, previously head of the Biotechnology and Biological Sciences Research Council, as its new chief executive.

One of the biggest issues in any debate about AI is, of course, jobs. If cognitive computing takes off, many of us may find a robot sitting at our desk. Consultants McKinsey forecast that, just as word processors reduced the need for typists, many knowledge-based jobs could soon become obsolete. This creates challenges for employers looking to invest in transformative AI and will require “careful communication and change management”, says the consultancy.

Yet the power of human workers could also be augmented by the rise of the robots. According to McKinsey, which calculates that the economic impact of AI could be as much as $6.7 trillion by 2025: “Knowledge-work jobs generally consist of a range of tasks, so automating one activity may not make an entire position unnecessary.”

It could also, perversely, prove to be a boon to sectors such as manufacturing, as it would reduce the need for low-cost labour and outsourcing, and could see more advanced work return to countries such as the UK.

As with any technological revolution, there will be risk and reward. Ray Kurzweil, a director of engineering at Google and a leading thinker on the subject, concludes: “Fire kept us warm and cooked our food, but also burnt down our houses. Every technology has had its promise and peril.”