Although generative AI is not new as a concept, it was only with the launch of ChatGPT in late 2022 that the technology hit the mainstream. The sophistication and accessibility of ChatGPT and its successor GPT-4, as well as of the image generator DALL-E and Google’s Bard, have created both excitement and fear around their potential uses – and misuses.
Some commentators believe that generative AI will revolutionise work and society in ways that we’re unprepared for or cannot yet imagine; some predictions even verge on apocalyptic. Others recognise the enormous potential of generative AI but believe that its development and impact on business will follow a pattern similar to all other transformational technologies: there will be a period of hype during which the possibilities are thought to be endless, but the hype will subside and people will become more realistic about the potential and the limitations of the technology.
Generative AI in its current form – impressively sophisticated and accessible to all – is still in its infancy, and its potential is not yet fully understood. But setting expectations for its impact and planning accordingly is something that all business leaders should be thinking about. Will generative AI fundamentally change business operations and workforce requirements? Two experts have their say.
Generative AI has great potential, but it is ultimately just another tool in the toolbox
Generative AI has been overhyped, but I’m not sure that it’s been more overhyped than any other technology. The Gartner hype cycle is revealing. You see so many technologies go through that hype curve: a piece of tech is introduced, there’s minimal interest at first, then it catches on and people start to think it’s going to solve all of their problems. But eventually the hype cools and the technology is regarded as just another useful tool.
In this way, almost all technologies with transformational potential will be overhyped. So is generative AI more overhyped than any other technology relative to its potential? I’m not sure.
I do think there’s a greater sense, especially among the less technical people in organisations, that generative AI will be some kind of ‘magic-bullet’ solution. By this I mean people tend to think it is a tool that will solve all their efficiency problems and allow them to employ fewer people. For generative AI, this thinking likely stems from the fact that the technology is accessible to almost everyone and that many of the analogies we use to talk about it – that computers are ‘thinking’ or ‘learning’ – create an element of science fiction.
In reality, generative AI has not been, and is unlikely to become, a magic-bullet solution. For another example of the hype cycle, consider how the market for cloud services developed: at one time people thought that storing all of your data in the cloud meant that you wouldn’t need to have backups and you wouldn’t have to employ infrastructure engineers. But it just hasn’t happened that way. You do still need backups and you do still need infrastructure engineers; it’s just that rather than doing cabling they’re now doing infrastructure as code.
With any major technological advance, stretching further back than even the cotton loom, there is often the worry that huge swathes of the workforce will be replaced, but they never are. People don’t become obsolete; skills become obsolete. With the cotton loom, with the internet, with the cloud, we have never needed fewer people. Instead, we need people with different skills to do jobs that didn’t exist before.
This is not to say that generative AI doesn’t have huge potential and that it won’t have a profound impact on businesses. The cloud was overhyped in its early days, but it has been incredibly beneficial. I would never go back to cabling and installing physical hardware. Similarly, generative AI will undoubtedly have major impacts on organisations and workforces. But it is not a magic bullet. It won’t replace workforces en masse and it won’t be the final destination in technological advancement. There will be a ‘next thing’ and generative AI will become just another tool in the toolbox.
In the long term, the impact of generative AI on business and society is understated
Although generative AI has been overhyped in the near term, the long-term potential of the technology within the business landscape and the impact it will have on organisations is massively understated.
Every major industry will be faced with significant changes to both business operations and the workforce because of generative AI. And while the uses we’ve seen so far are primarily early examples of experimentation, firms in areas such as retail, financial services, oil and gas, consumer goods and healthcare are finding more sophisticated applications.
It’s here that data quality, governance and effective AI guardrails will need to be further developed. Many organisations and tech leaders are unprepared for this challenge.
Leaders need guidance on how best to experiment, pilot and adopt generative AI ethically, and ensure the quality and privacy of data. Our annual global study of 600 data leaders found that two in five generative AI adopters cite data quality as the main obstacle to adoption, followed by data privacy and governance, and AI ethics.
Before unleashing AI-powered innovation and making it accessible to decision-makers across the public and private sectors – to everyone, really – leaders need to tackle uncertainty around how to harness its power. This will include answering questions about how to structure input data, ensure processes are in place to protect the data being stored and processed, and avoid undesirable AI-generated outputs that could lead to disastrous consequences.
The first step of strong AI governance is to ensure you can trust the data powering the models. Put another way, generative AI has brought data accuracy to a tipping point.
Take an example from the healthcare industry. To ensure the veracity of AI outputs in the context of patient treatment and monitoring, we need trustworthy and accurate patient information. Just one mistake in managing that data can have dangerous consequences. Data needs to be accurate, complete, consistent, timely, valid and unique long before it can be used to power a generative AI application.
Concerns around adapting processes are one thing, but the real risk is in using AI without the proper safeguards in place. Most organisations are unprepared for these risks and will face massive challenges if they don’t act quickly. The smartest companies will make sure their data is ready for generative AI before jumping in. They’ll double down on AI governance, ethics and data accuracy, embedding these into AI systems from the start.