Is your digital transformation ethical?

The possibilities offered by digital transformation are becoming more elaborate by the day. In the absence of comprehensive regulation, how can companies balance ethics with the benefits of adopting the latest technologies?


The explosion of transformative technology in recent years is hard to ignore. Generative artificial intelligence (AI) is the latest craze, of course, but the metaverse, hyper-personalisation and cloud technologies were making headlines long before that. 

While some businesses might be inclined to exercise caution, the pace of deployment and spending – forecast to reach $3.4tn (£2.7tn) across all digital transformation technologies by 2026 – will ultimately pile on the pressure. As these technologies increasingly hand early adopters a competitive edge, even the more hesitant players will feel compelled to dive in.

The problem is that digital transformation has a habit of raising difficult ethical questions. Regulation may yet address them – data privacy, for example, has been well served by the GDPR and the yardstick it provides for non-EU jurisdictions – but lawmakers have generally been slow to catch up with the latest innovations. Key trends and their harms, such as employee surveillance and algorithmic bias, remain poorly covered.

The same goes for the much-discussed problem of workforce redundancies caused by AI and automation. These have already materialised in certain industries, and more layoffs seem likely as the technology progresses and more of us are replaced by machines.

How, then, can firms address the ethical issues in their digital transformations? Should they be looking to regulators to provide a baseline or is it wiser to proactively embed accountability and standards internally? And what might that entail?

Why more tech means more problems

“Nobody wants to do business with a racist or homophobic company and AI can sometimes raise issues associated with that,” says Adnan Masood, chief AI architect at digital transformation solutions provider UST. Indeed, the reputational cost to businesses when algorithmic technology causes harm can be significant.

Such harms are well documented. AI chatbots are already notorious for outright racism and sexism. The Cambridge Analytica scandal – in which the company harvested millions of Facebook users’ data for political advertising – landed the social media giant with a £500,000 fine (almost $650,000) from the UK’s data regulator for its failure to protect users from data misuse. On the redundancies front, the move to a mostly digital banking platform prompted TSB to cut 900 jobs and close 164 branches in 2021. 


Increasing algorithmic management, typically associated with the gig economy, is also becoming more common in other sectors, from optimising delivery and logistics to tracking workers and automating schedules in the retail and service industries.

Meanwhile, consumer-sourced rating systems are increasingly being used to evaluate workers. This can create a culture where pernicious problems, such as abuse and sexual harassment, go unaddressed because workers stay silent to avoid a bad rating. 

These tech-enabled dilemmas demonstrate why up-to-date regulation is important. So far, there has been relatively little movement on this beyond data privacy, although there are indications that AI regulations may soon be in the works.

The only way is ethics

The regulatory outlook is complicated further by the fact that digital transformation is a moving target. Even the existing data privacy guidelines may fall short over time because we’re often unable to foresee where prescriptive detail will be required. 

Take the right to an explanation of how an algorithm reaches its decisions. “That’s virtually impossible when it comes to black-box models like neural networks,” says Masood. “You cannot explain how a neural network works.”
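Masood’s point is worth unpacking: a model’s behaviour can be probed from the outside, but that is not the same as explaining its internal reasoning. The toy Python sketch below – built entirely on synthetic data, not any real-world system – illustrates the gap. A post-hoc technique such as permutation importance can rank which inputs a neural network leans on, yet it says nothing about how the network actually combines them.

```python
# A minimal sketch of why post-hoc "explanations" of a neural network
# are approximations, not true explanations. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Toy dataset standing in for, say, a loan-approval model's inputs.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)

# A small black-box model: thousands of weights, no human-readable logic.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                      random_state=0).fit(X, y)

# Permutation importance shuffles each input in turn and measures the
# drop in accuracy. It ranks which inputs the model relies on, but it
# cannot say *how* the network combines them -- the gap Masood describes.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```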

The good news is that, as harms and challenges become more apparent, the conversation around digital ethics is getting louder. Companies are increasingly weighing up how to manage these dilemmas themselves, including by self-regulating and even prioritising ethics above sales. Following the killing of George Floyd and the Black Lives Matter protests of 2020, for example, IBM declared that it would no longer provide facial recognition products to police forces for mass surveillance or racial profiling.

Some even question whether regulators are capable of stepping up to address AI, or whether it should be left to the tech players. “A lot of the regulators are not doing the job – they’re not part of that field of AI models and they tend to have a fear-based mentality,” says Oyinkansola Adebayo, founder and CEO of Niyo, a group of brands focused on the economic empowerment of Black women through tech-driven products. 

“Regulation is stifling innovation now,” she continues. “We need a collaborative approach with the people building it, to challenge the build as it happens, rather than at the borders.”

Why humans are still central

One way for businesses to start ironing out ethical issues in their digital offerings is to ensure they aren’t perpetuating a skewed view of the world. 

“Less than 2% of the tech industry is made up of Black women specifically,” Adebayo says. She argues that addressing the gender and racial imbalance in tech workforces would result in a greater diversity of thought, meaning fewer ethical issues should slip through the net.

Rehan Haque, founder and CEO of Metatalent.ai, also stresses the importance of human capital within any digital transformation. When he built his company, for instance, he focused on upskilling, reskilling, cross-skilling and redeployment to equip people to handle emerging technologies. “Humans were the most important thing from an investor’s point of view, and then technology,” he recalls.

That’s all well and good, but will it be enough to assuage customers’ worries? And to help firms keep pace, could the likes of AI be put to work on the ethical front? 

It’s a prospect raised by a set of principles the EU has been developing for more ethical approaches to AI, known as ‘ethics by design’. Much like privacy by design, it encourages companies to build respect for human agency, fairness, transparency, and individual, social and environmental wellbeing into their AI models, alongside the familiar principles of privacy, data protection and data governance.

But trusting technology to solve technological problems could lead to a whole other set of concerns. “One of the ways to identify whether certain work has been done by AI is to use AI to check,” says Professor Keiichi Nakata, director of AI at Henley Business School’s World of Work Institute. “Of course, it’s a cat-and-mouse game because both sides will improve and become more evasive.”
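To make Nakata’s idea concrete, the sketch below shows the simplest possible version of such a detector: a classifier trained to separate human-written from machine-written text. Everything here, from the four example sentences to their labels, is invented for illustration – production detectors are far more sophisticated, but they face the same retraining treadmill he describes.

```python
# A deliberately simplified sketch of "using AI to check AI": a text
# classifier trained on labelled examples to flag machine-written prose.
# The training data is invented for illustration; real detectors use
# large corpora and still misfire as generators improve.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "honestly the meeting ran long but we got there in the end",
    "ugh, train delayed again, will push our call back a bit",
    "In conclusion, the aforementioned factors collectively demonstrate",
    "It is important to note that there are several key considerations",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = machine-written (toy labels)

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# The cat-and-mouse problem: as generators learn to mimic the "human"
# class, this decision boundary stops working and must be retrained.
print(detector.predict(["It is important to note the following points"]))
```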

In short, regulators are too slow to keep up with emerging technologies, and the technologies themselves can’t solve all our ethical problems. But we can’t afford to ignore the harms threatened by widespread and unchecked digital transformation either. Guidelines for ethical design could help, but it will take time for the tech industry to adopt and implement them. In the meantime, it’s up to businesses to draw on the experience of a broad range of players to guard against ethical issues as their digital transformations take shape.