Can distrust in AI disrupt your business?

The hype around generative AI has shifted public trust in the technology. Businesses can use that shift as a guide to how they deploy AI and the ethics they apply

Are you scared yet, human? 

Artificial intelligence (AI) has proliferated with transformative effects in recent years, in sectors from autonomous vehicles to personalised shopping. But the latest deployment of AI to generate content such as text, images or audio has caused quite a stir.

ChatGPT, a particularly capable language model, has even passed the US medical licensing exam. That’s not to say there haven’t been some bloopers: the OpenAI release has delivered inaccurate information and even abuse, and the company itself warns that the tool can produce biased content and harmful instructions.

Even before tools such as ChatGPT, Bard and Dall-E 2 attracted wide attention, there were concerns that discrimination and bias were baked into algorithms. Plus, smart tech that seems to pre-empt what you want or make decisions based on your online presence is somewhat Black Mirror-esque. 

This phase of newer, more accessible AI could significantly affect users’ trust in the technology. What, then, for businesses that have been rushing to adopt these latest forms of AI: should they count on the technology’s longevity and, if so, can they embed AI ethics from the outset, so that mistrust doesn’t hurt their reputation and bottom line?

Is AI vital for business?

Despite some opinions that generative AI is a fad, most people think it’s here to stay. Still, as a Morning Consult survey of 10,000 US adults revealed, only 10% of the public find generative AI “very trustworthy”. Drilling down further, that level of trust varies between demographic groups, with younger cohorts, primarily male, more trusting and willing to adopt early than older generations, who are generally hesitant to pick up new technology.

It isn’t only the end users who harbour doubts, though. High-profile gaffes – such as when Google’s Bard served up a false fact in its launch demo – have served as cautionary tales. Apple also delayed approving an update to an email app that added AI-powered language tools, over concerns that it might show children inappropriate content.

That’s not to say that generative AI isn’t hugely useful. It is being applied across business functions, from marketing and sales to IT and engineering, with applications ranging from crafting text to distilling dense material to aid understanding and answering complex questions.

“Companies invest a lot in data and tech,” says Karl Weaver, SVP consulting, EMEA, at MediaLink. But he warns: “There’s a general acceptance of analytics now and potentially a fear of missing out – businesses see what the competition is doing and think maybe they should do the same. All of that could cause a misstep and subsequently a trust problem.” 

None of this means businesses should – or even can – avoid the wave. But CEOs and boards should step back and think about why they are using AI tools, including these latest iterations. If the goal is a genuine desire to improve customer experience, for instance, the tools should be set up to serve that end.

Practical steps for trustworthy AI

“We are starting to develop a unified and common understanding of the big risks,” says Robert Grosvenor, a managing director at Alvarez & Marsal, “but there’s still a long way to go to translate high-level, principle-based objectives into codified requirements, standards and controls.”

The scope and scale of AI’s applications span industries and sectors, and the potential for harm and the degree of risk vary accordingly; impacts could even differ within the same organisation and be hard to foresee. As a result, individual business functions may need their own workflows for how they use AI and the data it requires, rather than relying on analytics or compliance functions to dictate a cookie-cutter set of rules.

Andrew Strait is associate director of emerging technology and industry practice at the Ada Lovelace Institute, which researches the impact of data and AI on people and society. He says that distrust in some AI technologies has meant people want to see more regulation. Consumers can be confident that the food they buy in a supermarket is relatively safe, for example. But the same level of regulatory oversight, and thus consumer trust, doesn’t exist for AI – the technology has developed too fast for the regulation to keep up.

Strait says that people want transparency around the data practices involved in AI, as well as individual privacy, but there’s often a misconception that telling people what you are doing is enough to build trust. “That lacks a deep understanding of the context in which someone is experiencing your product.”

What would be good to see, he says, is people participating in the governance of AI. 

Data cooperatives could be one means to that end: a representative collective of the people in a data set decides who can access it. In Spanish healthcare, for instance, the cooperative Salus Coop gives citizens control over how their data is used for research.

European Union leads the way in AI standards

Despite the general regulatory lag, the EU is tackling the problem. The so-called AI Act (Regulation Laying Down Harmonised Rules on Artificial Intelligence) is under discussion and looks to address ethical dilemmas and safeguards – for example, by assigning risk levels to AI’s various deployments – while enabling AI’s benefits.

Generative AI tools used in sensitive areas such as recruitment would be given a “high risk” designation, requiring “conformity assessments” against set standards – a measure that should reassure the public. The European bloc’s yardstick could well be adopted further afield, influencing the regulatory environment in the UK and beyond.

Human apprehension about AI, or a hesitancy to fully trust it, persists. But while we attribute human-like expectations to AI, it is just machinery. As actual humans, we can still separate fact from fiction and set expectations around acceptable uses of AI – and businesses can lead that charge.