How to implement AI assistants

While the technology may promise huge efficiency gains, employers must remain realistic about its scope and adopt it in a strategic, controlled way that won’t end up alienating people

Illustration by Kellie Jerrard

If we’re to believe some of the bleaker headlines about AI, barely anyone’s job is safe from destruction by automation. Apparently, the livelihoods of translators, scientists, mathematicians, writers and even poets are already under threat. No wonder 32% of UK employees think the technology could render their roles redundant, according to a survey published by the Office for National Statistics in Q4 2023.

Yet the same research found that 28% of workers believe AI could make their jobs easier. This is the pitch made by the many technologists who, in contrast to the attention-grabbing doomsayers, contend that generative AI tools such as ChatGPT are much more likely to help workers than replace them. 

They argue that AI-powered digital assistants can serve their human masters, rather than exasperate them like Microsoft Office’s much-mocked Clippy feature used to do. They can remove the drudgery from people’s everyday work, accelerate processes or use data in new ways. 

And so it is that sectors ranging from retail and logistics to marketing and legal are exploring the potential of AI assistance tech. Influencer marketing agency Billion Dollar Boy, for instance, is streamlining its creative process by using Midjourney and ElevenLabs to conceptualise ideas and quickly produce mock-ups and storyboards, reports its global CMO, Becky Owen. 

Meanwhile, international law firm Cleary Gottlieb uses GenAI to scan its databases and create a summary of its lawyers’ relevant experience before meetings with a new client. That document “won’t be ready to send immediately”, says its managing partner, Michael Gerstenzang, “but it’s a pretty darn good first draft.”

Most creatives have, at least once in their careers, experienced a paralysing fear when faced with a blank page to fill, resulting in procrastination and even writer’s block. GenAI-based assistance tech can help to overcome such problems, according to Stack Overflow, a coding knowledge hub that recently signed up with Google to power the search giant’s Gemini AI model. 

“We believe it will be a lot easier to write code than it was yesteryear. Back when I started coding, I was doing it the handwritten way and the only reference points were textbooks. For that first draft, GenAI will be able to generate all this foundational content,” explains Stack Overflow’s CEO, Prashanth Chandrasekar. 

Do you really need AI? 

Companies that implement a new AI tool without carefully assessing its possible cultural and practical side effects beforehand risk alienating employees or even making their lives more difficult. A firm should therefore identify whether the tech it’s interested in is likely to be a net benefit or not – and a simple test can go a long way here, according to Peter van der Putten, assistant professor of AI at Leiden University. 

He would advise any business facing this choice to consider the following questions: “Will this tool automate manual work at sufficient levels of quality? Will it improve life for customers and employees? Will it lead to better business outcomes?”

If a company can answer all these positively, it should then devise appropriate use cases with which to experiment. It would need to set up pilot and control groups to measure the tool’s impact, just as it would with any new IT, stresses van der Putten, who is also director of the AI lab at US software firm Pega. 

While it may be tempting to pick a low-value, low-stakes use case to test, he recommends choosing an application that allows for a “quick measurement of success” and could also make a big impact once scaled up. 
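Van der Putten’s pilot-and-control-group advice boils down to a simple two-sample comparison. The sketch below is one illustrative way to run that comparison in Python, assuming the firm logs a per-task metric (here, minutes per task) for both groups; the metric, group sizes and function name are assumptions for the example, not anything the article prescribes.

```python
from statistics import mean, stdev
from math import sqrt

def pilot_impact(pilot_times, control_times):
    """Compare task times (minutes) between a pilot group using the AI
    tool and a control group working as before. Returns the fraction of
    time saved and a Welch t statistic as a rough signal of whether the
    gap is more than noise."""
    m_p, m_c = mean(pilot_times), mean(control_times)
    saving = (m_c - m_p) / m_c  # fraction of time saved by the pilot group
    se = sqrt(stdev(pilot_times) ** 2 / len(pilot_times)
              + stdev(control_times) ** 2 / len(control_times))
    t = (m_c - m_p) / se  # |t| above ~2 suggests a real effect
    return saving, t

# Illustrative logs: pilot averages ~42 min per task, control ~50 min
pilot = [40, 44, 41, 43, 42, 39, 45]
control = [50, 52, 49, 51, 48, 53, 47]
saving, t = pilot_impact(pilot, control)
print(f"time saved: {saving:.0%}, t = {t:.1f}")
```

This is the “quick measurement of success” van der Putten describes: a metric both groups already generate, compared directly, before any decision about scaling up.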

Owen notes that such trials naturally involve a certain amount of error. With this in mind, Billion Dollar Boy has established a set of guiding principles designed to ensure that it can gain from AI in a way that benefits staff and clients. The agency has also created a task force to seek out new AI tools and uses for them. This group will brief the rest of the business on the latest advances in the field.

“There’s been a surge in AI-integrated tools, each promising efficiencies, but these can be clunky and add time to work processes,” Owen says. “The truth is that we might not all immediately land on the right solution. The key is to be open-minded.”

Where AI assistance pilots can ‘hit a wall’ 

For all the enthusiasm about GenAI, there are several pitfalls that firms seeking to implement it must avoid. 

Chandrasekar recounts a meeting he had with 15 CIOs in the banking sector, who had all been keen to realise the huge productivity gains promised by GenAI. Three months later, these IT chiefs “hit a wall” when trust issues concerning data privacy and security arose following the pilots. 

The CIOs were worried that the data they had been putting into the tools would “make its way, literally, into their competitors’ banks”, Chandrasekar says. 

Given what’s at stake, ownership becomes a “hot potato”, he adds. “You’re betting your career that this is going to work when you’re fairly early in the hype cycle.”

AI tools need to address the credibility problem by adding context such as citations to reassure the user that their output hasn’t been poisoned by hallucinations. That’s the view of Cassiano Surek, CTO at digital design agency Beyond. 

“Given the data-heavy nature of AI assistance, ensuring that relevant, high-quality information is available will be key to its effective use, as inaccuracies can quickly erode trust,” he says. “AI assistants must be able to cite their sources and have virtually zero hallucinations for such a business-critical usage.” 
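One way to enforce the citation discipline Surek describes is a guardrail that rejects any answer sentence lacking a valid citation back to an approved source. The sketch below is a minimal, assumed implementation: the `[n]` marker convention, sentence-level granularity and the `check_citations` name are all illustrative choices, not a description of any tool mentioned here.

```python
import re

def check_citations(answer: str, sources: dict) -> list:
    """Flag sentences in an assistant's answer that carry no valid
    citation. `sources` maps citation keys (e.g. '1') to the documents
    the assistant was allowed to draw on."""
    problems = []
    # Split after sentence-ending punctuation; crude, but fine for a sketch.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    for s in sentences:
        keys = re.findall(r"\[(\w+)\]", s)
        if not keys:
            problems.append((s, "no citation"))
        elif not all(k in sources for k in keys):
            problems.append((s, "unknown source"))
    return problems

sources = {"1": "2023 annual report", "2": "client engagement database"}
answer = ("Revenue grew 12% year on year [1]. "
          "The firm has advised on 40 similar deals [3]. "
          "Growth will continue indefinitely.")
for sentence, issue in check_citations(answer, sources):
    print(f"{issue}: {sentence}")
```

A check like this doesn’t stop a model hallucinating, but it does make unsupported claims visible before they reach a client – which is the trust problem Surek is pointing at.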

Data privacy assurances were vital for fashion etailer Asos when it rolled out an AI-powered code-completion tool in September 2023 after a successful pilot at the start of the year. The firm had used 90 employees – a large enough group to provide useful feedback on potential problems – to test GitHub Copilot before making it available to all tech staff. 

Dylan Morley, lead principal engineer at Asos, reports that measuring the impact of such tools is a topic of “great discussion” across the industry. But he adds that this is a more complex matter than simply adopting a tool and waiting for, say, a 10% efficiency gain. 

“You can instinctively feel that Copilot is faster to solve certain scenarios, but there’s a broader question about efficiency in tech. We’re efficient when we’re making progress towards strategic goals and delivering value to customers.”

Morley argues that firms could be focusing on areas other than adopting AI tools if improving efficiency is indeed their main goal. 

“Teams deliver software, so team efficiency is the important thing we care about,” he stresses. “Managing time spent in meetings, reducing context-switching, improving build-and-deploy pipelines – all these things can have a much larger overall impact. You can be incredibly productive where there’s a well-curated set of priorities and a tight feedback loop, and when you know exactly why you’re building something and avoid distraction.” 

While AI tools clearly do have a role to play, it’s important to manage expectations about the extent to which they will help. 

Morley believes they can save people time that they probably would have spent on busy work, but he adds a caveat: “What you do with that saved time – how you reinvest it to ensure that you’re realising productivity gains – is the important point.”
