Unesco expert on how to tackle AI bias

Human prejudices are still finding their way into algorithms, which can make choices that reinforce socioeconomic inequality. Unesco’s assistant director-general for social and human sciences, Gabriela Ramos, is determined to stop the rot


For better or for worse, AI is becoming deeply embedded in business and wider society. A global survey of 2,400 companies by McKinsey & Co in June 2020 found that half had adopted the technology in at least one function. Uptake in the public sector has been slower but, according to research by the World Economic Forum, it is accelerating.

The result is that an increasing number of important decisions are being made – or at least informed – by machines. This presents a problem that’s troubling many industry observers, ethicists, policy-makers and business leaders. The fact is that, intentionally or not, human biases are finding their way into the algorithms that power AI systems, as well as the data sets they use. These biased systems risk entrenching societal inequality on an unprecedented scale.

We believe in the impressive power of AI. But if we don’t put some effective frameworks in place, it’s going to create backlashes. Then no one will talk about the good it’s doing

Among those most concerned by this pernicious trend – and most determined to find a globally applicable solution to it – is Gabriela Ramos, assistant director-general for social and human sciences at Unesco. Ramos, whose remit covers the ethical aspects of AI, explains the problem as she sees it. 

“AI is nothing other than an enhanced capacity to analyse data and come up with predictions, perceptions, information and outcomes. An algorithm is nothing but a mathematical representation of the problem you want to solve,” she says. “But we humans define the problems and the boundaries, training systems to recognise certain aspects that we want to address. Within this process, there may be assumptions, cultural traits, knowledge, ignorance or a lack of diversity that could lead to biased outcomes.”

When AI goes bad

One of the main causes of bias in AI, Ramos argues, is a form of collective blindness created by homogeneous groups of people when they work together to develop the technology. They are normally male and “Anglo-Saxon, usually with a certain culture and certain ways of looking at life”. 

That lack of diversity is a recipe for groupthink, she says. “You might not see it when you’re working, because you’re in your context, your culture, your environment, your network. But what we’re saying is that, when there is an outcome, you’ll need to be able to determine whether it is fair or unfair.”

She points to the machine-learning recruitment tool that Amazon used from 2014, which reportedly discriminated against female applicants for software development roles. “How did that happen? Well, it might be that the system’s database had overrepresented successful workers, who had usually been white, male, of a certain age and from certain regions.”

We need the capacity to protect citizens whenever they are affected by these technologies

Ramos can cite numerous examples of baked-in bias, including how the use of AI in parts of the financial sector is reinforcing socioeconomic exclusion. “If you use unrepresentative data sets including only people who currently have access to financial services and then introduce biases from your own mindset, it turns out that the machine recommends giving good credit ratings to white men,” she says.
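The flaw Ramos describes – training a system only on people who already had access to financial services – is, in statistical terms, selection bias. One basic safeguard is simply to compare who appears in the training data with who actually applies. A minimal sketch, using invented figures purely for illustration:

```python
from collections import Counter

def group_shares(records):
    """Fraction of records belonging to each group."""
    counts = Counter(records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical figures, purely for illustration: the people who apply
# for credit, versus the people present in the historical data the
# model learns from.
population = ["women"] * 50 + ["men"] * 50
training_data = ["women"] * 15 + ["men"] * 85

pop_shares = group_shares(population)
train_shares = group_shares(training_data)

for group in pop_shares:
    gap = train_shares.get(group, 0.0) - pop_shares[group]
    print(f"{group}: population {pop_shares[group]:.0%}, "
          f"training data {train_shares.get(group, 0.0):.0%}, gap {gap:+.0%}")
```

A large negative gap flags a group the model will see too rarely to judge fairly – exactly the pattern behind the skewed credit ratings Ramos describes.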

Similarly, when GCSE and A-level results in the UK were moderated by algorithms last summer, “they didn’t control for socioeconomic outcomes. That’s a bias,” Ramos says. “This is no different from what happens in the world, because the world is biased. But what we cannot allow is for the technologies to run by themselves, or to be run by a very small share of the population, because the reality is that we must be inclusive.” 

The need for transparency and diversity 

Ramos believes that an important first step in tackling the problem is simply to put it under the spotlight. “Just by talking about it, we’re starting to solve it,” she says. 

This will be aided, she hopes, by the forthcoming publication of Unesco’s Recommendation on the Ethics of Artificial Intelligence, which aims to promote “a common understanding of the issues”. 

A further step is to start fixing some of the real-world imbalances affecting the technology. A lack of racial and gender diversity in digital industries is an obvious challenge, according to Ramos, who says: “The gender issue is huge, with the lack of female ICT students and women in software development roles. We also need to enhance the ability of the Global South to participate.”

This is no different from what happens in the world, because the world is biased. But what we cannot allow is for the technologies to run by themselves

She also recommends that organisations using AI should improve their procedures. “There are some simple practices that we are trying to advance, such as contesting a hypothesis, framework or model. For instance, some firms have divided their teams into those that plan developments and those that implement them. By separating these, you can create a checkpoint.”

To aid such a process, Ramos is planning for Unesco to produce an ethical impact assessment tool for AI. “This is a checklist of questions that covers the diversity of teams and the representativeness of data. It looks at outcomes and sees if they are having a discriminatory effect,” she says.
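Unesco’s tool is a questionnaire rather than software, but the outcome test Ramos describes – checking whether favourable decisions fall disproportionately on one group – can be sketched in a few lines. The four-fifths threshold below is a convention borrowed from US employment guidance, used here only as an illustrative cut-off, and the decision figures are invented:

```python
def disparate_impact_ratio(outcomes):
    """
    outcomes: dict mapping group name -> (favourable decisions, total decisions).
    Returns (ratio, rates): the lowest selection rate divided by the
    highest, so 1.0 means even treatment and small values flag a skew.
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan decisions, purely for illustration.
decisions = {
    "group_a": (60, 100),  # 60% approved
    "group_b": (30, 100),  # 30% approved
}

ratio, rates = disparate_impact_ratio(decisions)
print(f"impact ratio: {ratio:.2f}")  # prints "impact ratio: 0.50"
if ratio < 0.8:  # the "four-fifths rule" convention
    print("outcomes warrant review for discriminatory effect")
```

A check like this says nothing about *why* the skew exists – that is where the questions about team diversity and data representativeness come in – but it turns “is this outcome fair or unfair?” into something an organisation can measure routinely.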

High time for regulation

Ultimately, it’s down to governments to introduce effective legislation to counter AI’s bias problem, according to Ramos. 

“We need the capacity to protect citizens whenever they are affected by these technologies,” she says. “But governments can’t do it alone. The task has to involve a wide range of stakeholders. As usual, the regulators are lagging behind developments. It happened with the financial markets and now it’s happening with digital technologies.”

It’s therefore “super-important” that law-makers work with businesses and big tech to make the algorithms they use less opaque, she says, calling for the adoption of principles rooted in accountability, traceability, explainability and privacy.

While Ramos acknowledges that over-regulation can stifle innovation – and cause compliance problems for multinationals if standards aren’t applied across borders – she argues that this factor must be weighed against the need to protect the interests of disadvantaged individuals and groups. 

“We have to balance the public good with many other competing objectives,” she says. “But these are mature technologies – solid developments that will cope well under a good regulatory framework.”

Despite the problems that algorithmic bias has created, Ramos remains in awe of the benefits that AI can deliver. As one example among many, she highlights the huge strides it has enabled in healthcare. 

“We have a Covid vaccine that was developed in just one year with the aid of these technologies,” Ramos says. “And their use by doctors has moved the accuracy of medical diagnosis to a higher plane.”

She adds: “We believe in the impressive power of AI. But we also know that, if we don’t put some effective frameworks in place, it’s going to create backlashes. Then no one will talk about the good it’s doing. So we need to build trust – and that’s exactly what we’re trying to contribute to.”