Stanford researcher on the AI skills gap and the dangers of exponential innovation

ChatGPT and its ilk represent a welcome quantum leap for productivity, according to eminent AI expert Professor Erik Brynjolfsson. But he adds that such rapid developments also present a material risk


Erik Brynjolfsson is in great demand. The US professor whose research focuses on the relationship between digital tech and human productivity is nearing the end of a European speaking tour that’s lasted nearly a month. Despite this, he’s showing no signs of fatigue – quite the opposite, in fact. 

Speaking via Zoom as he prepares for his imminent lecture in Oxford, the director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI is enthused by recent “seminal breakthroughs” in the field.

Brynjolfsson’s tour – which has included appearances at the World Economic Forum in Davos and the Institute for the Future of Work in London – is neatly timed, because the recent arrival of ChatGPT on the scene has been capturing human minds, if not yet hearts. 

The large language model, fed 300 billion words by developer OpenAI, caused a sensation with its powerful capabilities, attracting 1 million users within five days of its release in late November 2022. At the end of January, Microsoft’s announcement of a substantial investment in OpenAI “to accelerate AI breakthroughs” generated yet more headlines. 

ChatGPT’s popularity is likely to trigger an avalanche of similarly extraordinary AI tools, Brynjolfsson predicts, with a possible economic value extending to “trillions of dollars”. But he adds that proper safeguards and a better understanding of how AI can augment – not replace – jobs are urgently required.

What’s next in AI?

“There have been some amazing, seminal breakthroughs in AI lately that are advancing the frontier rapidly,” Brynjolfsson says. “Everyone’s playing with ChatGPT, but this is just part of a larger class of ‘foundation models’ that is becoming very important.”

He points to the image generator DALL-E (another OpenAI creation) and lists similar tools designed for music, coding and more. Such advances are comparable to the arrival of deep learning, which enabled significant leaps in object recognition a decade ago. 

“There’s been a quantum improvement in the past couple of years as these foundational models have been introduced more widely. And this is just the first wave,” Brynjolfsson says. “The folks working on them tell me that there’s far more in the pipeline that we’ll be hearing about in the coming weeks.”

As much as I’m blown away by these technologies, the bottleneck is our human response

When pushed for examples of advances that could shape the future of work, he reveals that Generative Pre-trained Transformer 3 (GPT-3) – the language model that uses deep learning to emulate human writing – will be superseded by GPT-4 “within weeks. This is a ‘phase change’ of improvement compared with the last one, and it’ll be even more capable of solving all sorts of problems.” 

Elsewhere, great strides are being made with “multi-agent systems” designed to enable more effective interactions between AI and humans. In effect, AI tech will gain the social skills required to cooperate and negotiate with other systems and their users. 

“This development is opening up a whole space of new capabilities,” Brynjolfsson declares.

The widening AI skills gap

As thrilling as these pioneering tools may sound, the seemingly exponential rate of innovation presents some dangers, he warns. 

“AI is no longer a laboratory curiosity or something you see in sci-fi movies,” Brynjolfsson says. “It can benefit almost every company. But governments and other organisations haven’t been keeping up with developments – and our skills haven’t either. The gap between our capabilities and what the technology enables and demands has widened. I think that gap will be where most of the big challenges – and opportunities – for society lie over the next decade or so.”

Brynjolfsson, who studied applied maths and decision sciences at Harvard in the 1980s, started in his role at Stanford in July 2020 with the express aim of tackling some of these challenges. 

“We created the Digital Economy Lab because, as much as I’m blown away by these technologies, the bottleneck is our human response,” he says. “What will we do about the economy, jobs and ethics? How will we transform organisations that aren’t changing nearly fast enough? I want to speed up our response.”

Brynjolfsson spoke passionately about this subject at Davos in a session entitled “AI and white-collar jobs”. In it, he advised companies to adopt technology in a controlled manner. Offering a historical analogy, he pointed out that, when electricity infrastructure became available about a century ago, it took at least three decades for most firms to fully realise the productivity gain it offered because they first needed to revamp their workplaces to make the best use of it. 

“We’re in a similar period with AI,” Brynjolfsson told delegates. “What AI is doing is affecting job quality and how we do the work. So we must address to what extent we keep humans in the loop rather than focus on driving down wages.”

Why AI will create winners and losers 

The risk of technology racing too far ahead of humanity for comfort is a familiar topic for Brynjolfsson. In both Race Against the Machine (2011) and The Second Machine Age (2014), he and his co-author, MIT scientist Andrew McAfee, called for greater efforts to update organisations, processes and skills. 

AI can benefit almost every company. But governments and other organisations haven’t been keeping up with developments – and our skills haven’t either

How would he assess the current situation? “When we wrote those books, we were optimistic about the pace of technological change and pessimistic about our ability to adapt,” Brynjolfsson says. “It turns out that we weren’t optimistic enough about the technology or pessimistic enough about our institutions and skills.”

In fact, the surprising acceleration of AI means that the “timeline for when we’ll have artificial general intelligence” should be shortened by decades, he argues. “AGI will be able to do most of the things that humans can. Some predicted that this would be achieved by the 2060s, but now people are talking about the 2030s or even earlier.”

Given the breakneck speed of developments, how many occupations are at risk of obsolescence through automation? 

Brynjolfsson concedes that the range of roles affected is looking “much broader than earlier thought. There will be winners and losers. Jobs will be enhanced in many cases, but some will be eliminated. Routine work will become increasingly automated – and there will also be a flourishing of fantastic creativity. If we use these tools correctly, there will be positive disruption. If we don’t, inequality could deepen, further concentrating wealth and political power.” 

How to apply AI in the workplace

How, then, should businesses integrate AI into their operations? First, they must avoid what Brynjolfsson has labelled the “Turing trap”.

“One of the biggest misconceptions about AI – especially among AI researchers, by the way – is that it needs to do everything that humans do and replace them to be effective,” he explains, arguing that the famous test for machine intelligence, proposed by Alan Turing in 1950, is “an inspiring but misguided vision”.

Brynjolfsson contends that a “mindset shift” at all levels – from scientists and policy-makers to employers and workers – is required to harness AI’s power to shape society for good. “We should ask: ‘What do we want these powerful tools for? And how can we use them to achieve our goals?’ The tools don’t decide; we decide.”

One of the biggest misconceptions about AI is that it needs to do everything that humans do and replace them

He adds that many business leaders have the wrong attitude to applying new tech in general and AI in particular. This amounts to a “pernicious problem”. 

To illustrate this, he cites Waymo’s experiments with self-driving vehicles: “These work 99.9% of the time, but there is a human safety driver overseeing the system and a second safety driver in case the first one falls asleep. People watching each other is not the right path to driverless cars.”

Brynjolfsson commends an alternative route, which has been taken by the Toyota Research Institute, among others. When he was in Davos, the institute’s CEO, Dr Gill Pratt, “told me how his team has flipped things around so that the autonomous system is used as the guardian angel. Creating a self-driving car that works in all possible conditions is tough, but humans can handle those exceptions.” 

With a person in the driving seat making most of the decisions, the AI intervenes “occasionally – for instance, when there’s a looming accident. I think this is a good model, not only for self-driving cars, but for many other applications where humans and machines work together.” 

For similar reasons, Brynjolfsson lauds Cresta, a provider of AI systems for customer contact centres. Its products keep humans “at the forefront” of operations instead of chatbots, whose apparent Turing test failures continue to frustrate most people who deal with them. 

“The AI gives them suggestions about what to mention to customers,” he says. “This system does dramatically better in terms of both productivity and customer satisfaction. It closes the skills gap too.”

Does Brynjolfsson have a final message for business leaders before he heads off to give his next lecture? “We need to catch up and keep control of these technologies,” he says. “If we do that, I think the next 10 years will be the best decade we’ve ever had on this planet.”