Pitfalls to avoid when using AI for the first time
Getting the integration process wrong can prove disastrous. Any business seeking to introduce the technology would be well advised to learn from the early adopters
Artificial intelligence has long been heralded as the technology of the future – and it still seems to be just that, if an international survey of 700 business leaders by Juniper Networks in April is anything to go by. Only 6% of respondents reported having adopted AI, yet 95% said that their firms would benefit from doing so.
A similarly sized survey of IT decision-makers by Insider in the same month found that a third of respondents were planning investments in AI.
While many businesses are clearly keen to start using the technology, experts warn that they need to introduce it judiciously. Firms may well have more pitfalls to avoid than benefits to reap, so it’s vital to learn from previous AI integrations elsewhere.
BT has been using various generations of the technology for some time, according to Paul O’Brien, director of AI, service, security and operations research. Today, the company utilises tech ranging from neural networks and deep learning to evolutionary computing and heuristics in the effort to streamline its operations.
“AI improves the way the company manages its networks and services,” O’Brien says. “The technology automates routine tasks and augments people’s capabilities with smart insights and support.”
AI plays a role in many of BT’s activities, from planning where next to install its fibre broadband network to managing its 27,000-strong team of field engineers and their vehicles. It helps managers to predict line-fault volumes, organise rosters and schedule work. It’s even being used by technicians to construct a digital twin (a virtual simulation) of the national phone network.
No silver bullet
Despite his company’s successful applications of the technology, O’Brien warns prospective AI adopters to integrate it into their existing systems with great care.
“There is too much hype around AI, which raises expectations and leads to misunderstandings of what it can do,” he says, adding that its performance can depend heavily on the quality of the data fed into it.
Dr Catriona Wolfenden, partner and innovation manager at law firm Weightmans, agrees. “Many people fall into the trap of thinking that AI is some kind of magic wand that’s simply going to fix all ills. It’s not at all,” she says. “You need to ensure that you’re using AI on the right kind of thing and you must get the underlying principles and data collection right.”
The limitations of AI and the built-in biases that have dogged certain systems have attracted plenty of negative headlines. These risks have prompted firms such as EY to appoint AI ethics experts to senior positions. Ensuring that the technology is used ethically is vital. Yet, while 87% of respondents to the Juniper Networks survey agreed that their firms needed proper governance policies to minimise any harm resulting from the use of AI, this task ranked among their lowest priorities in the adoption process.
“Companies need domain expertise on board, so that they can understand how to both exploit AI and understand its limitations,” O’Brien advises.
Start AI on the mundane work
Wolfenden has been careful to integrate AI into Weightmans’ work gradually. “We’ve taken a very conscious approach that it’s about the augmentation of a lawyer’s skill,” she says. “It’s there to enhance the professional’s expertise, not to replace them.”
AI was first applied in a chatbot for internal use. “We started really small to survey the market and pick a use case,” Wolfenden recalls.
Once that application had proved its worth, she applied the tech to another straightforward task: pulling data from one set of files and inserting it into another.
“It’s really easy at the start to get carried away and think you’re going to do everything with AI,” she warns. “Have the idea and scale it back – probably twentyfold – to begin in the right place.”
The firm was also prudent in how it presented the technology to its clients, knowing that they might worry that their highly paid solicitor was being replaced by a machine.
“A lot of it is down to careful messaging, both internally and externally,” Wolfenden says. “We provide fact sheets that explain why we would use AI, stressing that humans are still involved.”
Cary Cooper, professor of organisational psychology and health at Manchester Business School, advises firms introducing AI to “engage the workers with the process, rather than impose it. Getting them to come up with the solution so that it works for both them and the business is the most effective strategy.”
Simply foisting AI on your staff is a sure-fire way to trigger resentment, Cooper warns. “That will create uncertainty and insecurity, motivating them to find ways to make it less effective,” he says. “On the other hand, if employees have ownership in the introduction of AI, they won’t try to undermine it. They will make it work and it’s likely to prove more productive.”