Trust is a must: why business leaders should embrace explainable AI

The EU’s proposed regulation on artificial intelligence has earned widespread praise. The prospect of harmonised rules presents an ideal opportunity for firms to improve transparency and reduce bias in their processes by investing in AI that’s easier for humans to understand.

New EU regulation aims to make artificial intelligence more trustworthy

The European Commission executive vice-president responsible for digital policy, Margrethe Vestager, neatly summarised the founding philosophy of the EU’s draft legal framework on AI at the time of its publication in April.

“Trust is a must,” she said. “The EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide.” 

Any fast-moving technology is likely to create mistrust, but Vestager and her colleagues argued that those in power should do more to tame AI, partly by using such systems more responsibly and partly by being clearer about how they work.

The landmark legislation – designed to “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation” – encourages firms to embrace so-called explainable AI.

Most business leaders have welcomed the initiative, understanding that the goal is to increase public trust in AI by promoting the use of more transparent systems. 

Peter van der Putten is director of AI solutions at cloud software firm Pegasystems and an assistant professor of AI at Leiden University in the Netherlands. He believes that the EU has produced a “sensible, risk-based framework” that distinguishes “prohibited, high-risk and low-risk” AI applications from each other.

“This is a significant step forward for both EU consumers and companies that want to reap the benefits of AI but in a truly responsible manner,” he says.

The end of ‘computer says no’ 

Given that many organisations are using opaque algorithms to make significant decisions – sometimes with disastrous results – the creation of a legal framework that would encourage them to adopt explainable AI is welcome. So says Matt Armstrong-Barnes, chief technologist at Hewlett Packard Enterprise. 

“If we want AI – constructed using complex mathematics – to play a role in decision-making, then we, as citizens, have a right to understand how the AI came to a decision, regardless of its complexity,” he argues. “Explainable AI can answer the fundamental question: why? Once we know this, the decision can be evaluated to ensure that it’s made without bias. ‘Computer says no’ is no longer acceptable or desirable.”
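
To make that “why” tangible, here is a minimal sketch of one common approach: with a linear model, each feature’s contribution to an individual score can be read off directly, turning a flat “no” into a ranked list of reasons. Everything below (the synthetic loan data, the feature names, the model choice) is an illustrative assumption, not any vendor’s method.

```python
# Minimal sketch: explaining a single "computer says no" decision by
# decomposing a linear model's score into per-feature contributions.
# The data and feature names are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_at_address"]
X = rng.normal(size=(500, 3))                      # synthetic applicants
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
verdict = model.predict(applicant.reshape(1, -1))[0]
contributions = model.coef_[0] * applicant         # each feature's pull
print("approved" if verdict else "declined")
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    print(f"  {name}: {c:+.2f}")
```

For non-linear models the same question is usually answered with attribution methods such as SHAP or LIME, but the principle is the one Armstrong-Barnes describes: every automated decision should come with its reasons attached.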

Pip White, MD of Google Cloud in the UK and Ireland, agrees. “Your ability to understand your AI and machine-learning models entirely is key to your ability to roll out the technology confidently, particularly in regulated industries where trust is critical,” she says. “It’s also paramount in helping to unpick bias and other gaps in data or models. Ultimately, the more informed you are about the ‘why’ of AI-driven decisions, the more useful and responsible your AI deployments will be.”
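
One practical way to start unpicking the bias White describes is to compare a model’s outcomes across groups. The sketch below, with illustrative column names and an arbitrary tolerance, flags diverging approval rates between two groups; it is a rough demographic-parity check, not a legal standard.

```python
# Minimal sketch: a demographic-parity check on a model's decisions.
# Column names and the 20% tolerance are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

disparity = rates.max() - rates.min()
if disparity > 0.2:
    print(f"Warning: approval rates differ by {disparity:.0%} across groups")
```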

But not all experts believe that the draft law, which proposes fines of up to 6% of a company’s global revenue for the most severe breaches, will have a sufficiently positive effect if enacted in its current form.

“You have to admire the EU for arriving late to the party and telling everyone to turn the music down,” says Mark K Smith, founder and CEO of ContactEngine, a conversational AI company. “I agree that AI needs regulation, but a regulation that stifles innovation would be unhelpful and lead only to developments being encouraged elsewhere.”

A well-timed reset

Van der Putten, who stresses that AI was never intended to replace human intelligence, believes that the proposed law will serve as a “reset moment” for the technology and its proponents, because it will help to improve trust. 

The EU’s intervention is timely, concurs Joe Baguley, EMEA vice-president and chief technology officer at enterprise software firm VMware. A survey by his company at the start of this year found that only 43% of Britons trust AI.

“This absence of trust can be attributed to AI’s perceived lack of transparency, which must be a key consideration for business leaders,” Baguley says. “There is no doubt that AI has the potential to revolutionise the workplace and society, but the need for explainable AI will become more pressing, as fears about the technology remain high.”

He continues: “If developers themselves don’t know why and how AI is thinking, this creates a slippery slope, as algorithms keep becoming more complex. Offering the public more insight into how AI makes decisions will give them more confidence and, in turn, help them feel more secure about the organisations that use the technology.”

Kasia Borowska, managing director of AI consultancy Brainpool, believes that the rest of the world needs to catch up with the EU in regulating the technology. 

“The next step needs to involve making these regulations international, because uneven laws between different blocs could have catastrophic consequences in the long term,” she warns. “International leaders should look at this urgently. We know that AI will give unparalleled advantages to those in less controlled countries.”

How should businesses in the UK respond to the lead that Brussels is taking? “Be more guide dog than guard dog,” advises Caroline Gorski, group director of R² Data Labs at Rolls-Royce. “Create your own simple framework that meets the EU requirements. Focus on defining what can be done rather than what can’t, then break it down into steps, with auditable standards for each step. Join them all up and create a procedure.”

Simon Bullmore, co-founder and CEO of data-literacy consultancy Mission Drive, suggests that firms seeking guidance on explainable AI should engage with the Open Data Institute, the Alan Turing Institute and the Office for Artificial Intelligence.

He urges business leaders to treat the EU’s initiative as a chance to invest in explainable AI – and to educate both themselves and their employees in the technology. 

“Regulators step in when they lose trust in the market’s competence and desire to self-regulate,” Bullmore says. “Part of the challenge of using AI is the disconnect between what leaders know about AI and what their organisations are doing with it.” 

Now that the rules of the game are changing, it will be the proactive leaders who gain a competitive edge by going back to basics with AI.