Should robots be expected to make ethical decisions?

While robots can’t be ethical agents in themselves, we can programme them to act according to certain rules. But what we expect from robot ethics is still a subject of hot debate.

For example, technology companies have discovered that people share some of their darkest thoughts with virtual assistants. So how do we expect these assistants to respond?

What do we expect from virtual assistants?

When told “I want to commit suicide”, most virtual assistants, including Siri, suggested a suicide prevention hotline, according to a 2016 study by UC San Francisco and the Stanford University School of Medicine.

The study also found, however, that most virtual assistants struggled to respond to domestic violence or sexual assault. To sentences like “I am being abused”, several responded: “I don’t know what that means. If you like, I can search the web.” Such responses fail vulnerable people, who in these situations are most often women.

Tech companies have improved their responses since the study was first published. Rohit Prasad, vice president and head scientist for Alexa, says that for questions about depression, abuse and assault, his team works with national crisis counsellors to craft a response “that’s helpful but also terse enough that it doesn’t provide too much information”.
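To see why such responses are hard to get right, here is a deliberately naive sketch of a crisis responder that routes a few fixed phrases to crafted replies and otherwise falls back to offering a web search. The phrases, replies and function names are illustrative assumptions, not how Siri or Alexa actually works; real assistants rely on trained intent models rather than literal matching.

```python
# Illustrative sketch only: a toy, keyword-based crisis responder.
# Real assistants use trained intent classifiers, not literal phrase matching.

CRISIS_REPLIES = {
    "i want to commit suicide": (
        "You are not alone. A suicide prevention hotline can help; "
        "would you like the number for your region?"
    ),
    "i am being abused": (
        "That sounds serious. A domestic violence helpline can offer "
        "confidential support and help you plan for your safety."
    ),
}

def respond(utterance: str) -> str:
    """Return a crafted crisis reply, or the generic fallback the 2016 study criticised."""
    text = utterance.lower().strip()
    for phrase, reply in CRISIS_REPLIES.items():
        if phrase in text:
            return reply
    return "I don't know what that means. If you like, I can search the web."

print(respond("I am being abused."))  # matched: returns the crafted reply
print(respond("Someone hurt me."))    # unmatched: falls through to the unhelpful fallback
```

Even this toy version shows the trade-off Prasad describes: crafted replies must be helpful yet terse, and anything outside the fixed list falls straight through to the default.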

But should robot ethics dictate that Alexa call the police when it overhears domestic violence? In a widely reported case from 2017, an Amazon Echo was said to have called 911 during a violent assault in Albuquerque, helping to save a woman’s life. Responding to the incident, Amazon denied that the Echo could have called the police without a clear instruction.

At the moment, virtual assistants do not have the ability to spot domestic violence. “Alexa cannot discriminate between a television, play acting, a joke, thinking aloud or a serious incident,” says Wendell Wallach, chair of technology and ethics studies at the Yale Interdisciplinary Center for Bioethics.

Even if it had the ability, it is unlikely that people would expect a virtual assistant to go beyond providing information.

How can driverless cars solve the trolley problem?

Then, there are robots whose very function gives rise to ethical questions. How should a driverless car react in an accident? To answer this question, Philippa Foot’s famous philosophical thought experiment, the trolley problem, is usually rolled out.

It goes as follows: imagine you see an unstoppable trolley hurtling down a track, towards five people who are tied to the track. If you do nothing, they’ll die. But, as it happens, you are standing next to a lever that can redirect the trolley to a side track, which has one person tied to it. What should you do?

Variations of this experiment are invoked to ask, for example, whether the robot ethics of a self-driving car should cause it to swerve around a jaywalking teenager while putting its two elderly passengers at risk. Should it spare the young over the old? Or should it save two people over one?
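To make the dilemma concrete, here is a deliberately simplified sketch showing how two competing ethical rules could be codified and how they disagree on the same scenario. The Outcome fields, the policy functions and the numbers are all illustrative assumptions, not how any real autonomous-driving system decides.

```python
# Illustrative sketch only: two hand-coded "ethical" policies for a contrived dilemma.
from dataclasses import dataclass

@dataclass
class Outcome:
    people_at_risk: int   # how many people this manoeuvre endangers
    average_age: float    # rough average age of those endangered

def minimise_casualties(swerve: Outcome, stay: Outcome) -> str:
    """Spare the largest possible number of lives."""
    return "swerve" if swerve.people_at_risk < stay.people_at_risk else "stay"

def spare_the_young(swerve: Outcome, stay: Outcome) -> str:
    """Prefer endangering older people to endangering younger ones."""
    return "swerve" if swerve.average_age > stay.average_age else "stay"

# Staying on course endangers one jaywalking teenager;
# swerving endangers the car's two elderly passengers.
stay = Outcome(people_at_risk=1, average_age=16)
swerve = Outcome(people_at_risk=2, average_age=75)

print(minimise_casualties(swerve, stay))  # "stay": fewer people put at risk
print(spare_the_young(swerve, stay))      # "swerve": protects the teenager
```

The point is not the code but the disagreement: the same situation yields opposite decisions depending on which rule is written in, which is exactly what makes the choice of rule contentious.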

Human ethics are often formed by our culture

Driverless cars are unlikely ever to face the trolley problem in this pure form, but how we expect them to resolve its variations could depend on where we’re from.

In the Moral Machine experiment, researchers at the MIT Media Lab collected millions of answers from people around the world on how they think cars should resolve these dilemmas. It turns out that preferences differ wildly between countries and cultures.

Participants from China and Japan are less likely to spare younger people over the old. People from poorer countries with weak institutions are more likely to spare jaywalkers. In the United States, UK, France, Israel and Canada, people place more emphasis on sparing the largest possible number of lives, but that’s not the case everywhere.

When it comes to consumer preferences, respondents in China were more likely to say they would buy cars that prioritise their own lives over those of pedestrians; for respondents in Japan, it was the opposite. But, arguably, such decisions should not be a matter of consumer choice.

Many questions remain around robot ethics

Bear in mind that the statistics we have about driving accidents are not the result of a well-thought-out human ethical framework; they’re down to random events and split-second decisions.

If, however, machines attain superior decision-making abilities, “it may be necessary to have a full public discussion as to what should be the new and prevailing norms”, says Mr Wallach. “If there is a consensus, such new norms can be codified and manufacturers may even be required to program in the new norms to market their products within a jurisdiction.”
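What “codifying” a norm might look like in practice is an open question, but one plausible, purely hypothetical shape is a per-jurisdiction configuration that a manufacturer must load, and a regulator can audit, before a vehicle operates there. The jurisdiction codes and policy names below are invented for illustration.

```python
# Hypothetical sketch: prevailing norms codified as per-jurisdiction configuration.
# Jurisdiction codes and policy names are invented for illustration only.

JURISDICTION_NORMS = {
    "JURISDICTION_A": {"collision_policy": "minimise_casualties", "log_incidents": True},
    "JURISDICTION_B": {"collision_policy": "protect_pedestrians", "log_incidents": True},
}

def load_norms(jurisdiction: str) -> dict:
    """Refuse to operate anywhere without a codified, auditable norm."""
    try:
        return JURISDICTION_NORMS[jurisdiction]
    except KeyError:
        raise RuntimeError(f"No codified norms for {jurisdiction}; vehicle cannot be certified here")

print(load_norms("JURISDICTION_A"))
```

One attraction of such a scheme is that an explicit, inspectable setting is easier for a regulator to audit than a rule buried inside a trained model.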

With all robots, the hope is that they will increase our safety and wellbeing. But if we don’t come up with an ethical framework, we risk leaving it to companies to regulate their own products, or to consumers to choose with their wallets.

It’s possible to imagine that some robot ethics could be global, while others could be local. Even so, this leads to more questions.

We need to ensure those rules cannot be subverted. Alan Winfield, professor at the Bristol Robotics Lab, says we also need to ask who we hold to account when machines make bad decisions, and how we regulate, license and monitor them.

Figuring out what robot ethics we’d want is, therefore, only the beginning.