Listen to the teams from IBM Watson, Google DeepMind or Facebook AI Research and you could be forgiven for thinking you’re sitting front and centre in the prequel to Sarah Connor’s nightmares played out in The Terminator series or the beginnings of a journey to an encounter with HAL 9000 of Arthur C. Clarke’s prescient 2001: A Space Odyssey.
Contributors to Wikipedia innocently define cognitive computing as the creation of “hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making”. While today’s tech visionaries will point to the potentially limitless benefits from advancing the human cause, it seems we’re inherently more interested in the darker side of these advances, thanks in no small part to Hollywood.
Questions and answers
Start typing “will robots” into Google and you’ll receive the following auto-complete suggestions: “take over the world”, “replace humans”, “take our jobs”, “take my job” and “replace doctors”. Or try typing “will artificial intelligence” into the ubiquitous search engine and your suggested sentence completions will include “replace programmers”, “replace doctors”, “replace humans” and, disturbingly, “kill us”.
The implications of advancing technology to the point where its applications can mimic, assume or replace the role of people, and where humankind is no longer needed to guide such developments, lead to a multitude of questions about what this means for the future of society.
Consider some of the extreme conclusions that our current, early trajectory may lead to. Beyond the long-accepted inevitability that machines can perform highly repetitive manual tasks, such as assembly line manufacturing, faster, cheaper and more accurately than humans, artificial intelligence (AI) and machine-learning technologies are already permeating the once sacrosanct knowledge economy.
What if advertising was developed with algorithms, rather than the current creative, albeit to varying degrees data-informed, process? What would that mean for the tens of thousands of people employed in the advertising and creative industry in the UK alone?
What if audits and financial reporting were automated, conducted by advanced, always-learning machines, rather than the teams of accountants and auditors that descend on businesses for independent assessments, currently numbering more than 400,000 people in the UK?
What about wealth, tax and investment planning, legal services, human resources, scientific research and development, health assessment and clinical advice, among others? The list is long, but none of these sectors and professions is immune to the pervasive nature of technology.
If significant job displacement began to permeate wide sections of industry, how would the role of government need to evolve? Would the concept of a welfare state, perennially the target of cuts and reductions, suddenly shift to become the primary government role for a society with 30, 40 or 50 per cent unemployment and diminished opportunity for wealth creation?
These extreme scenarios for the future of work aren’t purely hypothetical. In 2013, researchers at Oxford University estimated that 47 per cent of US jobs could be automated within the following two decades, while this May, Apple supplier Foxconn announced it had “reduced employee strength” by 60,000 workers in just one factory, a more than 50 per cent reduction in the site’s workforce, “thanks to the introduction of robots”.
In a 2016 study, global IT consultancy Infosys reported that 40 per cent of young people across nine developed and emerging economies believe their jobs will be made redundant by technology in the next ten years. Yet, despite being the most concerned about these new working practices, young people in the UK are the least prepared and the least willing to reskill to adapt to a changing workplace.
The young also feel that the British education system is failing to prepare them for the world of work. Barely half of those polled by Infosys said the education they received was helpful for their current role, while 77 per cent said they had to learn new skills not taught at school or university to do their chosen job.
With an expanding global population that’s living longer, it’s in all our interests to worry about the labour market of tomorrow, not just today.
In its study, Toward Solutions for Youth Employment: A 2015 Baseline Report, the World Bank contends that global growth will hinge on today’s youth. Behind this incontrovertible truth lies a worrying finding that of the one billion more young people who will enter the job market in the next decade, only 40 per cent are expected to be able to get jobs that currently exist.
It contends that the global economy will need to create 600 million jobs over the ensuing ten years – five million jobs each month – simply to keep pace with projected youth employment rates. No mean feat, only compounded by the potential for widespread job displacement as a consequence of technological progress.
But does it have to be this way?
Just because technology can do something doesn’t necessarily mean that it will. Just because certain jobs are at risk of automation doesn’t mean that new ones won’t emerge.
The dystopian narrative which accompanies much of the debate about the balance between humanity and technology isn’t a foregone conclusion; it’s simply one potential scenario of many.
“Don’t be afraid of disruption.” So says Alistair Cox, chief executive of global recruiter Hays. “Our experience at Hays has shown us that AI and robotics needn’t be feared or viewed as disruptors to an established way of working, but instead as a natural evolution of the most efficient and modern ways of conducting business.
“The advent of robotics should actually allow businesses to work better and more productively. Robots can be seen as a way of making processes more efficient, resulting in less mechanical, repetitive types of work and boosting overall productivity – a welcome advance given the UK’s dismal productivity levels. Technology taking over more menial tasks means people can focus on the human skills that make them indispensable. I don’t believe that there will be huge volumes of jobs displaced by robotics; these jobs may just be in a different area or need different skills.”
Josh Graff, UK managing director of social network LinkedIn, concurs. “Economies are inefficient. Not everyone makes it to where they can best create value with their specific skillset. We should welcome new ideas that help remove friction from the system and make it easier for people to find careers which are more fulfilling and allow them to lead the lifestyle they want, while pursuing their passions. People are smart, and will decide if new technologies are right for them and the organisations they work for,” he says.
“We should welcome the positive impact that new technology can have in making our lives easier, more productive and more fulfilling. As business leaders, we need to have one eye on the future to make sure we’re upskilling our workforce to successfully adapt to a world of work in which change is accelerating.”
Working hand in hand
So to our opening thesis. Will cognitive computing, artificial intelligence and machine-learning ultimately lead to the decimation of our current social and employment paradigm? Perhaps, but certainly not soon. And maybe not ever. Just because humans can walk backwards, it doesn’t mean we choose to. And just because technology could invalidate the professional contribution of millions or billions of people, doesn’t mean we’ll allow it to, or that it’s in any way inevitable.
All today’s assumptions about technology’s impact on the workforce are rooted in the jobs of today. We don’t yet know what new jobs and opportunities will emerge as technology creates labour market capacity and the prospect of further unlocking human potential.
Hays’ Mr Cox concludes: “We have an opportunity here to challenge the status quo of how we work. AI and robotics are merely a channel for us to establish a faster and smarter way of working. Far from signalling the end of the workplace as we know it, we firmly believe it could be a genuinely positive thing for the global labour market. Furthermore, robots have their limitations, namely no creativity, innovation or leadership. Some jobs are therefore less at risk than others.”
There’s nothing to say that the rise of the machines won’t occur in parallel with the rise of humankind. Let’s focus the conversation on how we can unlock value and potential, not on the risks of destroying it.
CASE STUDY: LUMINANCE AI
The average merger and acquisition transaction draws a cadre of newly qualified legal minds into the arduous task of analysing some 34,000 pages of data-room documentation, seeking out inconsistencies, risks or the all-too-frequent problematic clause uncovered the night before the deal is due to be signed.
Luminance AI has ambitions to revolutionise the time-consuming and manual task of documentation analysis in legal due diligence. At first glance this has all the hallmarks of an advance at the expense of lawyers, a fear which Emily Foges, Luminance AI chief executive, agrees is one of the first reactions she receives when speaking to lawyers, but is quick to dispel.
“This is not about a technology product which replaces lawyers, this is about a technology product that enables lawyers to do what they’re good at and helps them to be more effective in what they’re doing,” she says. “It’s about enabling them to spend time being lawyers, rather than wrangling with spreadsheets and wielding highlighter pens over huge stacks of documents.”
This summer, Luminance began a partnership with international law firm Slaughter and May to “train” the Luminance AI technology to go beyond the shortcomings of traditional keyword search, and to understand language in the way that humans do, but more thoroughly, reliably and efficiently.
In a live control study, the firm found that, rather than being unsettled by the effectiveness of AI in the due diligence process, lawyers were enthusiastic that the technology not only freed them to work on delivering value for their clients, but also gave them greater control and visibility of the overall document analysis process.
Beyond the improvement in working practices, there’s also a tangible commercial benefit for both client and law firm – an average 50 per cent reduction in the time to complete document analysis.