Fears of reputational damage hold back AI deployment

Adoption levels of artificial intelligence and automation tools could be hindered by organisations’ fears about reputational damage in the event of a scandal

Although opinion is divided on whether the coronavirus crisis will hasten the move towards more automation, employers interested in artificial intelligence (AI) have a number of issues and challenges to address if they are to avoid damaging their company’s reputation.

On the one hand, says Jen Rodvold, head of digital ethics and tech for good at digital transformation consultancy Sopra Steria, there has already been serious acceleration in the adoption of all kinds of digital technology to enable businesses to operate during lockdown.

“As the economic picture sharpens, there’ll be a continued focus on cost-cutting measures and AI could well be part of that,” she says.

Brian Kropp, chief of human resources research at market research company Gartner, on the other hand, believes that while the crisis will not necessarily alter pre-COVID-19 adoption rates, it will change the reasons behind adoption.

“Until this year, investments were largely driven by the tight labour market and the number of qualified employees available,” he says. “However, AI and automation strategies are now shifting towards reducing operational risk and improving long-term business resilience.”

Use-cases include replacing employees in areas of potentially high-infection risk or in situations where illness would result in disruption to production processes.

But there is also the danger that such technology could introduce risks of its own, particularly in terms of company reputation. As Kropp indicates: “Even before the coronavirus pandemic, Gartner predicted that the number of automation-related scandals would grow over the course of 2020 as adoption increases and deployments take place in a range of new areas.”

The problem is that, although AI can be a powerful tool to support decision-making, “in cases where AI systems base assumptions on patterns of historical data, there is a danger of bias”, he says.

Secondary ripple effects

An example of this situation was US health services provider Optum’s healthcare allocation algorithm. Before the system was amended with the help of researchers at the University of California, Berkeley, black patients received lower standards of care than their white counterparts because they were assigned lower risk scores.

The issue was that the system apportioned these scores based on predicted healthcare costs. But because black patients are less likely to receive targeted interventions, their care cost $1,800 less a year on average. The developers had excluded race data from the algorithm in a bid to make it “racially blind”, but societal discrimination had not been taken into account.
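The mechanism can be sketched in a few lines of Python. This is an illustrative toy, not Optum’s actual model, and the patients, costs and scoring scale are invented for the example; only the $1,800 average spending gap comes from the reporting above.

```python
# Illustrative sketch (not Optum's actual model): when an algorithm is
# trained to use predicted healthcare *cost* as a proxy for healthcare
# *need*, a group that historically receives less spending for the same
# level of illness gets lower risk scores, even though race is never an
# input feature.

def risk_score(predicted_annual_cost, max_cost=10_000):
    """Toy risk score: scale predicted annual cost to a 0-100 range."""
    return round(100 * predicted_annual_cost / max_cost)

# Two hypothetical patients with identical medical need. Historical
# spending on the second patient's group is $1,800 a year lower, so a
# cost model predicts less for them.
patient_a_cost = 6_000           # group with full historical access to care
patient_b_cost = 6_000 - 1_800   # group with historically lower spending

print(risk_score(patient_a_cost))  # 60
print(risk_score(patient_b_cost))  # 42

# Same need, different scores: the bias enters through the proxy label
# (cost), not through any explicit race variable.
```

The point of the sketch is that removing a sensitive attribute from the inputs does nothing if the target the model is trained on already encodes the discrimination.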

Using AI for decision support is less risky. It’s when algorithms start making judgments that the trouble can set in

In other words, even if processes are put in place to check for bias in the data on which algorithms have been trained to find patterns, organisations still need to be “cautious of the secondary ripple effects” that might not be immediately obvious, warns Kropp.

In addition, as AI becomes embedded in more and more interlinked decision-making processes, it will become increasingly difficult to pinpoint where the real problems actually lie or to anticipate possible consequences.

Andrew Liles, chief technology officer at interactive marketing agency Tribal Worldwide London, believes that the highest risk of experiencing AI issues occurs when organisations shift the focus away from using the software as a decision-support tool towards allowing it to make decisions without human intervention. This is particularly true of business processes that require high levels of cognitive ability and significant human interaction, such as interviewing candidates for jobs.

“If you use AI for decision support, it’s about speeding up the process and making it more efficient, which is less risky. It’s when algorithms start making judgments that the trouble can set in,” he says.

The problem is that getting this wrong can have serious implications for company reputation and brand. Although awareness of such potential AI issues and challenges may be starting to grow, AI ethics is still not widely discussed or well understood among the C-suite. This means too few companies have established upfront a common set of rules and standards that employees must apply when using such technology for decision-making.

To ensure AI is applied safely and responsibly, “it’s important to take a more systematic approach to analysing the technology’s upsides and downsides, and how it links to business strategy, objectives, culture and people”, says Rodvold. “So it’s not about dealing with technology in isolation; it’s about anticipating and examining its possible consequences in the round and putting the right strategy and policy in place to deal with them.”

Dealing with the ethical questions

Important topics to think about in this context include the impact of AI on employees, customers and wider society. Other central considerations include its implications for equality, diversity, privacy, transparency and environmental sustainability.

But a key problem that stops many organisations from taking action, Rodvold acknowledges, is a fear that the software is too complex to get to grips with, which leads to many senior executives simply “putting their head in the sand”.

Nor does it help that no government anywhere in the world has yet regulated or legislated in important areas such as the transparency and so-called explainability of the assumptions on which algorithms are built.

Trust and AI adoption

But Danilo McGarry, thought leader and head of automation at investment fund and corporate services provider Alter Domus, recommends two documents as a useful starting point to help leaders understand some of the AI issues they face and how they might deal with them. These are the European Union’s Ethics Guidelines for Trustworthy AI advisory document and the European Commission’s White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.

“They’re not enforceable, but they’re also not technical and are written for C-level executives,” he says. “So they’re useful as a basis for writing your own policies on how to use AI in an ethical way.”

Ultimately, says Rodvold, taking an ethical stance is not just about protecting company reputation from harm. It is also about “value creation”.

“Establishing principles of fairness, trust and inclusiveness creates better companies and puts them in good stead for the long term. So it’s about doing what’s right for the organisation and its culture to sustain value,” she concludes.

Risky recruitment

Amazon hit the headlines for all the wrong reasons in October 2018 following revelations that the artificial intelligence-based recruitment tool it had been developing secretly demonstrated bias against women.

The project had been intended to automate the sifting of job applicants’ CVs by assigning them a score ranging from one to five stars. But by 2015, a year after the initiative was first launched, the Amazon team realised that candidates for technical roles were not being handled in a gender-neutral way.

The problem was that the data fed into the system, comprising ten years’ worth of CVs, reflected the tech industry’s male dominance. As a result, the software taught itself to prefer male job seekers and to give women lower scores.

Although Amazon initially tried to edit the algorithm to make it respond neutrally to female-oriented terms, such as “women’s” as in “women’s chess club champion”, there was no way to guarantee gender discrimination would not be introduced in other ways. As a result, the initiative was scrapped in early 2017. Amazon insisted the system had never been used in anger by its hiring team and so no candidates had been discriminated against in the real world.
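How a scorer can teach itself that bias without ever seeing a gender field can be shown with a toy example. This is a minimal sketch, not Amazon’s system: the CVs, terms and the simple frequency-based weighting are all invented for illustration.

```python
# Illustrative sketch (not Amazon's system): a scorer learns term
# weights from historical hiring outcomes. Because the history is
# male-dominated, a term like "women's" picks up a negative weight
# even though gender itself is never a feature.
from collections import Counter

# Hypothetical history: CVs that led to a hire vs. those that did not.
# The workforce skew means gendered terms appear mostly in rejected CVs.
hired = [
    "java developer chess club",
    "python engineer java",
]
rejected = [
    "python developer women's chess club",
    "java engineer women's coding society",
]

def learn_weights(hired, rejected):
    """Weight = how much more often a term appears in hired CVs."""
    pos = Counter(t for cv in hired for t in cv.split())
    neg = Counter(t for cv in rejected for t in cv.split())
    return {t: pos[t] - neg[t] for t in set(pos) | set(neg)}

def score(cv, weights):
    """Sum the learned weights of a CV's terms."""
    return sum(weights.get(t, 0) for t in cv.split())

weights = learn_weights(hired, rejected)

# Two otherwise similar candidates:
print(score("python developer chess club", weights))          # 0
print(score("python developer women's chess club", weights))  # -2
```

As with the healthcare example, deleting the offending term only masks the symptom: any other term correlated with the skewed history can carry the same bias, which is why editing the algorithm could not guarantee neutrality.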