When it comes to shopping, convenience is the name of the game. Increasingly, one-click buying, contactless payments and same-day deliveries create a frictionless shopping experience. But is friction necessarily a bad thing? After all, removing friction creates a cognitive and emotional distance from our purchase. In other words, it makes handing over money feel less real.
At the same time, we increasingly shop online through algorithms, whether that’s “customers also bought this” recommendations or advertising tailored to our data that follows us around on Google and Instagram.
Algorithmic recommendations vs decision making
Businesses invest heavily in algorithms that recommend products, from films to clothes. Worldwide revenue for big data and business analytics is expected to reach $189.1 billion this year, a 12 per cent increase on 2018, according to the International Data Corporation.
So how do algorithmic recommendations influence our decision-making when we shop online? Do they make us more impulsive? “They give us opportunities to be more impulsive,” says Nick Lee, professor of marketing at Warwick Business School.
There are different theories on how advertising works. Professor Lee says there’s not much evidence for the “strong theory” that advertising changes people’s attitudes and makes them buy products they wouldn’t otherwise. Instead, the “weak theory” posits that advertising builds brand awareness: the more you see something, the more you become aware of it. The same is true for algorithmic recommendations.
In some cases, algorithms could do the opposite and actually make us less impulsive, argues Brad Love, professor of cognitive and decision sciences at University College London, and fellow at the Alan Turing Institute.
“It depends on the shopper’s habits,” he says. “If you’re faced with a thousand potential products that all vary in 20 different ways, the odds of finding the best product are close to zero. But if you’re shown five things through an algorithm, it could free up your mental resources to think: what do I value more in a laptop, the battery life or the processor speed?”
Giving gifts your computer chose
But what if we’re relying on algorithms to preselect the gifts we buy for friends and family? Does this spare us the emotional labour of actually considering their needs and wants? And does it, therefore, defeat the purpose of gift-buying?
The way we’re perceived in the world is through our choices, notes Michal Gal, professor of law and markets at the University of Haifa. “If an algorithm knows us well, it becomes an extension of ourselves,” she says. “But even if the gift is better or more efficient in relation to the preferences of your friend, what’s missing is the human interaction, thinking actively about the needs and preferences of another person.”
However, Professor Love points out that shops already present us with a selection of goods, as they preselect and import products, catering to certain demographics. Besides, he asks: “Is drudgery what people value in this case? It doesn’t seem like suffering in itself makes a gift valuable, it’s about understanding another person.
“We need to make sure the drudgery aspect is automated, but that people don’t lose sight of what they value and what it is that makes a product joyful or useful.”
Even if people buy gifts based on algorithmic recommendations, they will probably still take credit for it. After all, we have a tendency to overemphasise our own contribution. “If computers really were shaping that choice, the consumer won’t give them the credit,” says Professor Love.
Do algorithms make us easier to fool?
Is it possible that we may gradually lose the inclination, or even the capacity, to engage critically when shopping? And could this make us more prone to deceptive advertising?
“The dangers aren’t the obvious ads,” says Professor Lee. “In developed economies like the UK, consumers are savvy about those.” Instead, it’s important to watch out for grey areas, such as influencers who don’t clearly show that they’re being paid to push a product, or sponsored content that is not clearly labelled as such.
Crucially, when it comes to algorithmic recommendations, the medium is important. A recommendation we see visually is very different from one we only hear. It depends, for example, whether you’re searching on a screen or using a voice assistant. “With voice shopping, research has shown that the first recommendation often becomes what you buy, because you can only hear it, you don’t have any other parameters for comparison,” says Professor Gal. Visually, you might see more features of the product.
It’s also important to consider who the algorithm is employed by and what the incentives are. “If I write my own algorithm or it’s neutral, then the risk of deception might be reduced. But if the algorithm is created by a third party with incentives that are different from mine, and I’m not aware of it, then I might be less aware of deception,” says Professor Gal.
Using AI to buy simple items only
What’s more, research by Michael Yeomans of Harvard Business School has shown that people trust the recommendations of family and friends much more than those of algorithms, even when the algorithm is spot on. In fact, it’s precisely not knowing how an algorithm operates, known as the “black box” problem, that erodes trust. That’s why Google recently launched its Explainable AI programme, for example.
Dr Yeomans says: “If we are taught to trust an algorithm without understanding, this can make it easier for all kinds of covert influences to pop up. In some ways, our intuition for relying on tools we can understand may be an age-old defence mechanism. And it is being tested in new ways in the modern marketplace.”
Critical thinking is paramount when we choose a partner, read the news or vote in an election. Buying tomatoes may be less profound.
“With some goods, algorithmic recommendations could save us a lot of time,” says Professor Gal. We have a limited amount of decisional energy that we use up throughout the day. “If the recommendations are actually ones that benefit us, then that could help us save decisional energy,” she says. And we could focus on decisions that matter the most.