When do we trust AI’s advice?

If you need recommendations or advice, who are you likely to turn to? New research suggests artificial intelligence can help, even in cases where most of us normally prefer a human response.
When do we trust AI to suggest something to us, and would we ever take a robot’s recommendation over a human’s?

As more of our everyday life, from shopping and dating to learning and exercise, takes place digitally, there’s an opportunity for artificial intelligence (AI) to serve large online audiences and create business efficiencies.

IDC analysts forecast worldwide spending on AI will double to $110 billion in 2024, while data from digital assistant company Amelia reveals 88 per cent of US organisations have scaled up their use of AI since the pandemic began. But have we reached a tipping point where consumers trust an AI recommendation system more than a human?

Not yet, according to new research published in the Journal of Marketing, based on data from more than 3,000 people who took part in ten experiments. When it comes to AI and trust, the key factor is whether consumers are assessing the practical aspects of a product – its utilitarian value – or its experiential, sensory aspects – its hedonic value.

“When people are looking for things that have to do with practicality, functionality, decisions that are more cognitively driven, that’s where they tip over to AI,” says Dr Chiara Longoni, co-author of the study and assistant professor of marketing at Boston University’s Questrom School of Business. “When it’s a question of anything sensory related, a human is usually perceived as best.”

Yet these “lay beliefs” don’t “fully correspond to the facts” about the competency of both human and AI recommendation systems, Longoni adds.

And as Dr Luca Cian, fellow co-author and assistant professor of marketing at the University of Virginia’s Darden School of Business, elaborates: “It’s not that humans, in reality, are always better at making recommendations when it’s something sensory related. And computers in reality aren’t always better when it’s something utilitarian.

“Humans can be as good as computers in establishing something utilitarian. And there are many times when AI is good at making decisions that are sensory related. For example, spice and drinks companies use algorithms to create new flavours and they work well.”

Levels of trust linked to specific products

Human biases do mean AI recommendation systems lend themselves more to certain sectors, says tech entrepreneur Emma Smith, founder and chief executive of Envolve Tech, which has created a virtual shopping assistant used by brands including We Buy Any Car.com and BHS, now an online-only retailer.

Envolve Tech’s AI performs well in mass-market retail verticals with huge product variety, such as fashion, cosmetics or gardening, and in areas where people don’t want to speak to a human, such as an online condom retailer. The same AI has been less successful on a medical device retailer’s site, however.

“When shoppers need an exact answer for a complicated situation, humans still come out on top, at least for now,” Smith notes.

Two further important factors distinguishing an AI recommendation system from a human, she says, are the vast amount of data AI can process and its freedom from personal biases.

“Even the best human customer service agent can only possibly stay on top of a fraction of the information AI systems can, which means human product recommendations are always based on a smaller dataset,” says Smith.

“A human agent will also bring their own personal biases in. For highly bespoke, artisanal purchases this can be desirable, but for most purchases it’s better to have a more objective recommendation.”

Using AI effectively in the real world

So if consumers’ perceptions of AI don’t reflect its actual recommendation abilities, and our beliefs are instead ingrained by portrayals of robots in popular culture, what does this mean for businesses looking to leverage it?

Longoni and Cian recommend a hybrid approach, rather than a potentially “creepy” or misleading overt humanisation of AI recommendation systems.

“People are more amenable to AI in cases where there’s a human component. It doesn’t make people prefer AI to a human, it simply equalises the preference for human or AI advice,” says Longoni.

Mishandled uses of AI have become urban legend, from Target’s faux pas of outing a teenage girl’s pregnancy to Amazon’s same-day shipping pricing calculations inadvertently deprioritising certain demographics. Such stories make it hard for consumers to make the connection between AI and trust.


Cian thinks that, with more exposure to effective, unbiased AI, consumer views will change. But Dr Keith Grimes, clinical AI and innovation director at digital healthcare service Babylon Health, believes it’s also essential to help consumers understand AI’s decision-making process, especially in sensitive areas.

“People get concerned about this ‘black box’ phenomenon, the idea that decisions get made, and they can’t work out why they’re made, or they can’t challenge them. When you’re working in healthcare, you have to be able to explain how automated decisions are made,” he says.

“If we take care with the messaging around how we use AI, we can help reduce some of that anxiety and people will feel more comfortable,” Grimes concludes. It’s sound advice for businesses across all sectors.

Who do we trust, and when?

Marketing researchers Chiara Longoni and Luca Cian’s newly published paper, Artificial Intelligence in Utilitarian versus Hedonic Contexts: The “Word-of-Machine” Effect, highlights varying scenarios where consumers are more likely to favour an AI recommendation system and, conversely, where human input is preferred.

In real estate, haircare, food and clothing, the majority of users chose the human recommendation over artificial intelligence (AI) when asked to focus on experiential and sensory attributes, such as style, taste or scent. When tasked with focusing on practical elements, such as use-case and function, most people opted for the AI recommendation. 

Yet the researchers note this doesn’t mean AI should only be used for more utilitarian products, such as technology or household appliances, or that companies offering more hedonic items, such as fragrances or food, shouldn’t use an AI recommendation system. In an experiment where AI was framed as supporting human recommenders rather than replacing them, the AI-human hybrid recommender fared as well as the human-only one.