Three-minute explainer on… AI-washing

The transformative power of AI can only be unleashed when there is widespread public trust, but recent high-profile gaffes may be holding progress back


In 2016 Amazon unveiled a seemingly impressive gambit showcasing the wondrous potential of artificial intelligence. With its Just Walk Out programme, visitors to Amazon Fresh supermarkets wouldn’t need cashiers at all: AI would do the work and Amazon would automatically charge shoppers for the contents of their baskets.

There was, according to reports, a small catch: Amazon was also employing low-paid workers in India to verify customers’ receipts by reviewing video footage remotely. This was certainly ‘artificial’, but hardly the artificial intelligence dream; it was more like outsourcing with extra bells and whistles.

This Wizard of Ozzian parlour trick has reignited conversations around ‘AI-washing’, where businesses tout their machine-learning credentials while the real work is performed by real people behind the proverbial curtain.

What is AI-washing?

Put simply, AI-washing can be anything from exaggerating the abilities of an organisation’s deep-learning models to simply making everything up.

For its part, Amazon said the outcry was due to a “misconception” that Just Walk Out relies on human reviewers watching shoppers live from India. “The underlying machine learning model is continuously improved by generating synthetic data and annotating actual video data,” said spokesperson Sarmishta Ramesh, adding that associates validate a “small portion of shopping visits” via video for accuracy. The Information’s report, however, claimed that 700 out of every 1,000 sales were manually reviewed. Whatever the truth, whenever a new high-impact technology emerges, there are usually opportunists waiting in the wings to exploit the hype and peddle snake oil.

The Amazon affair prompted a backlash when it resurfaced on social media this month. But, notes Janet Vertesi, associate professor of sociology at Princeton, the phenomenon of people doing the work promised by automation is so common that she and her colleagues have termed it “pre-automation”.

Vertesi describes AI as a “powerful decoy” that disguises the realities of offshoring to low-paid workers in developing economies. Evidently, some companies aren’t just failing to acknowledge the labour of these hidden workers; they are also pretending it is lines of code.

AI-washing: should I worry?

Technologists often say that one of the biggest barriers to widespread AI adoption is a lack of trust. That’s one reason why so many businesses, including Amazon, have committed to ‘responsible AI’, with information-sharing, transparency and public reporting of the capabilities of their AI models.

Such commitments do not seem to be working. In the business world, both employees and senior executives remain fearful of the technology, with around half of each group expressing trepidation, according to a recent Workday study.

The latest accusations levelled at Amazon show why. For any organisation leading AI implementations, honesty is essential. In time, businesses will have to be more upfront about their actual capabilities anyway.

Legislators in the EU understand that “trust is a must” and are regulating accordingly. With ‘explainable AI’, it’s not just users who must be informed about the decision-making processes of machines. Anyone who could be affected by a machine’s decision, or whose data contributes to it, should also be able to understand the whys and the hows.

In November 2023, the UK government introduced legislation to tackle another example of AI-washing. With the Automated Vehicles Bill, British lawmakers are establishing a legal framework to stop manufacturers marketing vehicles as ‘autonomous’ when they are more like regular cars with digital trinkets. In short, AI snake oil may soon be a criminal matter.

The Amazon example appears to be more of a cynical corporate fudge than a Terminator-style AI apocalypse. But what better example to highlight the importance of building trust? If businesses care at all about their reputation, they should tread carefully to ensure they are not misleading customers or the public with AI-washing. Transparency is key, and organisations should think long and hard about how to develop their AI strategy openly and honestly, lest distrust disrupt their business or breed AI fatigue among the general public.