Why we shouldn’t blindly follow tech

How much confidence do you have in artificial intelligence and data? We have reached a tipping point when it comes to AI and trust in the workplace. That’s if the findings from a recent Oracle report are to be, well, trusted. In October the technology giant released its second annual AI at Work study, showing that, on average across the globe, 64 per cent of people trust a robot more than their human manager.

In India and China the numbers rise alarmingly, to 89 per cent and 88 per cent respectively. Meanwhile, according to the research conducted by Oracle and Future Workplace, in Britain the figure is a more prudent 54 per cent. “The study of 8,370 employees, managers and HR leaders across ten countries found that AI has changed the relationship between people and technology at work, and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent,” the study concluded.

Why AI and trust don’t always go together

Sir Nigel Shadbolt, chairman of the Open Data Institute (ODI), which he co-founded in 2012 with the inventor of the World Wide Web, Sir Tim Berners-Lee, warns that AI and trust don’t necessarily go hand in hand. Moreover, it can be a critical error to follow decisions determined by feeding workplace data into a human-created algorithm.

“That over half of British workers say they trust AI over their managers is worrying,” starts Sir Nigel, pointing out that robots make plenty of mistakes, much like their human counterparts. “It may reflect that we, as a collective, think the computer’s decision-making is truly objective. In fact, it is stuffed full of all the biases and preferences of those collecting the data.

“The AIs work by hoovering up large amounts of data whose provenance and fairness – and therefore the decisions they make – are highly questionable, in isolation. We have these funny ideas about the objectivity of machines, and if there is a bug in a programme we tend to freak out.”

Algorithmic transparency is key

ODI research, published in November, suggests that, in terms of AI and trust, organisations can do much more to convince consumers and workers alike that their personal data is being handled in a morally correct fashion. Indeed, almost nine in ten respondents (87 per cent) said they feel it is either “important or very important that organisations they interact with use data about them ethically”.

Perhaps this result should not be a surprise, given the backlash following the Cambridge Analytica scandal of early 2018, when data from millions of Facebook profiles was harvested without consent and used for political advertising. And with Europe leading the way through the General Data Protection Regulation, awareness of data privacy is high. Yet this makes Oracle’s findings about AI and trust in the workplace all the more puzzling.

Because the Government has not set in stone a code of data ethics, Sir Nigel posits that transparency – showing the workings of AI – is imperative for businesses in 2020. “We are not Luddites about AI,” he continues. “We all understand the vast benefits of workplace data, and customer data, but we need to ensure that there is still linkability and transparency.

“People are paranoid about how outcome classifications are working, and how they are segmented – when these algorithms run, they will reveal my sexual orientation, or my political preferences, and so on. I want control over that, and algorithmic transparency is a great way to reset the balance.”

Opening more data

Sir Tim calls for greater use of so-called data trusts, which the ODI defines as “legal structures that provide independent stewardship of data”. “With regards to making data open, available and useable, there is still unfinished business,” says the 64-year-old. “We believe soon large, trusted combinations of organisations will allow people to share more data than they could beforehand. These trusts are a new way of generating more value out of data.”

He also stresses the need to take advantage of workplace data, improving both products and models by increasing the number of data points. For instance, it is only through more sensors that we have learnt the actual speed of climate change. It also serves to highlight that the internet-of-things revolution, which should be kick-started when 5G becomes more readily available in 2020, will enable business leaders to improve their organisations, goods and services quickly.

However, one thing most people had overlooked is the sizeable gender data gap in the workplace – at least until Caroline Criado Perez’s Invisible Women, published in March 2019, shone a light on the inherent bias that women suffer daily.

The feminist author, who is credited with successfully lobbying for a woman – Jane Austen – to make it on to a British banknote, argues convincingly that the “default male” is the figure our world is designed around. Ms Criado Perez points out in her book that queues for women’s toilets are always longer – because equal floor space does not mean equal provision – and that women are likely to suffer more serious injuries in car crashes, as crash-test dummies are modelled on male bodies.

Mind the gender data gap

“There are so many things leaders can do to narrow the gender data gap in the workplace,” she says. Offering more suitable toilet provision for women is one. Ms Criado Perez also makes the point that it wasn’t until Facebook’s chief operating officer, Sheryl Sandberg, gave birth that she realised mothers and pregnant employees need dedicated parking spaces closer to work. “It’s a great example of how the gender data gap is not a deliberate exclusionary strategy,” she continues.

“It’s not just about pregnancy parking; it’s also about flexible working and hours; it’s about maternity leave, as well as paternity leave. Expense policies, for instance, often need rethinking. In the book, there is a creative director who found that, for a night away, her male colleagues could expense the hotel and dinner, but she couldn’t claim for her babysitter.”

Minter Dial, another author, whose Heartificial Empathy was published in November 2018, has identified a further workplace gap: empathy. For him, empathy is an underlying driver of customer loyalty, employee fidelity and, ultimately, profitability. He references Businessolver’s State of Workplace Empathy 2019 report, which shows 92 per cent of chief executives believe their organisation is empathic, but only 72 per cent of employees say the same. “That gap needs to be closed,” Mr Dial says, and, ironically, AI can assist. “Providing the intention is clear, and the ethics are empathically inscribed within, AI genuinely has a place in business.”

Returning to the Oracle statistic about AI and trust, he says: “I view it as an indictment of employee trust in management and the company for which they’re working. If employees trust AI more than their manager, the key point is that it’s a relative comparison. We are not seeing an absolute level of trust in AI.”

Do organisations need a chief AI officer?

Should organisations consider appointing a chief AI officer? “The CAIO’s primary role would likely be to establish an ethical framework and help mutualise learnings and infrastructure,” Mr Dial says.

“Down the line, though, the ongoing and more strategic challenge will be the integration of human plus machine, the ethics behind the algorithms, and the treatment of privacy and security. And that requires a chief ethics officer with a full corporate remit – sitting on the board – more than one merely focused on the AI.”

In the AI era, more human leaders and managers could well be the key to a successful workplace.