AI ‘victims’ of sexism answer back

Last year saw conversational artificial intelligence agents facilitating personal and business interactions, and providing real-time customer service and business support. Most of these “assistants” default to a female voice, although users sometimes have a choice. You can change the setting on Apple’s Siri, while Google Assistant has no human name and is positioned as gender neutral, yet comes with a default female voice. Microsoft describes Cortana as gender neutral, but the name comes from the AI in the game Halo, which is represented as a female hologram. Amazon’s Alexa describes herself as “female in character”.

This gender bias is not down to sexism by predominantly male developers. Rather, it is a reflection of outdated social norms and the gender imbalance in the workforce that many big tech players are working to redress. It also mirrors ingrained human perceptions.

As AI agents combine natural language processing (NLP) with machine learning, they need to be monitored and guided to ensure they learn from positive examples and do not take on prejudices that may be inherent in the data they use. This was illustrated by Microsoft’s ill-fated Tay experiment, which “learnt” racist and other inflammatory statements, and it is particularly important when the outcome involves a human decision, such as in recruitment.
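
As a rough illustration of the kind of guardrail Tay lacked, the sketch below screens incoming user messages against a blocklist before they are allowed into a learning corpus. It is a minimal sketch only: the blocklist terms, function names and data structures are hypothetical, not drawn from any real system.

```python
# Minimal sketch: screen user messages before a learning agent trains on them.
# The blocklist terms and names here are illustrative placeholders only.

BLOCKLIST = {"slur_example", "inflammatory_example"}  # placeholder terms


def is_safe_for_training(message: str) -> bool:
    """Return True only if the message contains no blocklisted terms."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return BLOCKLIST.isdisjoint(words)


def collect_training_data(messages: list[str]) -> list[str]:
    """Keep only messages that pass the safety screen."""
    return [m for m in messages if is_safe_for_training(m)]


if __name__ == "__main__":
    incoming = ["What's the weather like?", "some inflammatory_example text"]
    print(collect_training_data(incoming))  # only the first message survives
```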

Ben Taylor, founder and chief executive of Rainbird, an AI platform that models cognitive reasoning processes, identifies unconscious bias as the main challenge around AI and prejudice. He says: “Rainbird works with expertise, so its analyses are transparent and auditable, but when AI applications analyse big data and machine-learning adjusts the algorithm on which decisions are made, it is impossible to know what features it is basing its decisions on. And we have an unequal society.”

As voice becomes the default interface, tech giants and brands are refining NLP to make AI agents sound more human. Tacotron, a text-to-speech synthesis model, changes its intonation in response to punctuation, emphasising capitalised words and lifting the pitch if there is a question mark at the end of a sentence. But how human can the user interface get before it reaches the “uncanny valley”, the point at which people feel uncomfortable with things that appear nearly human? And to engage us and provide a natural, effortless interface, do AI agents need a gender?
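
To make those prosody cues concrete, here is a toy, rule-based sketch of the effects described above. It is not how Tacotron works internally, since Tacotron learns prosody from data rather than from hand-written rules, and every name in it is hypothetical.

```python
# Toy illustration of punctuation-driven prosody cues, not Tacotron itself.
# Tacotron learns these effects from data; this rule-based sketch only mimics them.

def prosody_hints(sentence: str) -> dict:
    """Derive simple pitch and emphasis hints from punctuation and capitalisation."""
    return {
        # Lift the pitch at the end if the sentence is a question.
        "final_pitch": "rising" if sentence.rstrip().endswith("?") else "falling",
        # Emphasise fully capitalised words.
        "emphasised_words": [w for w in sentence.split() if w.isalpha() and w.isupper()],
    }


if __name__ == "__main__":
    print(prosody_hints("Are you REALLY sure about that?"))
    # {'final_pitch': 'rising', 'emphasised_words': ['REALLY']}
```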

AI with personality

IPSoft’s cognitive knowledge worker Amelia, which handles customer service and internal business support, appears online as a blonde, female avatar. However, as Mr Taylor observes, some people find gender stereotyping off-putting.

In March, Capital One launched Eno, a gender-neutral chatbot that responds to natural-language text messages, showing customers their account balance or paying credit card bills. Eno describes its gender as “binary”, a nod to ones and zeros rather than to male or female, but the banking bot still has “personality”. Its favourite colour is green.

Dennis Mortensen, founder and chief executive of x.ai, whose AI assistants organise meetings via e-mail, disagrees. “Some people prefer working with a female assistant, while others prefer a male,” he says. Consequently, x.ai users can choose Amy Ingram or Andrew Ingram, who have different identities but identical personalities.


Although gender makes Amy and Andrew more engaging, and therefore more efficient as their human qualities encourage people to respond quickly to their e-mails, Mr Mortensen stresses that it is important not to take anthropomorphisation too far. The fact that an AI agent is given a personality and a gender doesn’t make it human, but helps it fulfil its primary purpose.

Because x.ai operates in a controlled environment, Amy and Andrew respond to specific input data – people, times, locations – and the output data is a date and time for a meeting. If you ask Amy or Andrew anything else – and they get some bizarre requests – they bring the conversation back to the task in hand.
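
As a rough sketch of that kind of task-bounded design, a scheduling agent might extract only the slots it cares about and steer anything else back to the meeting. The field names, parsing rules and redirect message below are hypothetical, not x.ai’s actual schema.

```python
# Hedged sketch of a task-bounded scheduling agent: the slot names, parsing
# rules and redirect message are illustrative, not x.ai's actual design.
from dataclasses import dataclass, field


@dataclass
class MeetingRequest:
    people: list = field(default_factory=list)
    times: list = field(default_factory=list)
    locations: list = field(default_factory=list)


KNOWN_LOCATIONS = {"office", "cafe", "zoom"}       # placeholder vocabulary
KNOWN_TIMES = {"monday", "tuesday", "2pm", "3pm"}  # placeholder vocabulary


def handle_message(text: str) -> str:
    """Fill meeting slots from the message, or steer the conversation back on task."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    request = MeetingRequest(
        people=[w for w in words if w.startswith("@")],
        times=[w for w in words if w in KNOWN_TIMES],
        locations=[w for w in words if w in KNOWN_LOCATIONS],
    )
    if request.people or request.times or request.locations:
        return f"Proposing a meeting: {request}"
    return "I can only help schedule meetings. When would suit you?"


if __name__ == "__main__":
    print(handle_message("Can we meet Monday at 2pm in the office?"))
    print(handle_message("Tell me a joke"))  # off-topic: redirected back to scheduling
```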

The default voice of Apple’s Siri virtual assistant is female, but users are able to change their settings to male

Gender bias

According to mobile analyst Benedict Evans, partner and consultant at Andreessen Horowitz, giving assistants with a broader remit a “personality”, which may include gender, can conceal limitations while maintaining engagement. For example, Siri may tell a joke instead of saying, “I do not understand the question”.

Personality also establishes the agent as an independent application. “Google’s voice product has no human name because it is positioned as a universal aspect of Google, not a separate siloed product,” says Mr Evans. However, this raises the question of how to address a voice-only interface. “You need a keyword to invoke it – so that it responds to the next thing you say,” he says. It needs a name, and this raises the gender issue again.
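
A minimal, text-based sketch of that invocation pattern is below. The wake word, handler and responses are hypothetical, and real assistants do this with always-on audio keyword spotting rather than typed input.

```python
# Minimal sketch of keyword invocation: the assistant only acts on input that
# follows its wake word. The name "aria" and the responses are hypothetical.
WAKE_WORD = "aria"


def handle_utterance(utterance: str) -> str:
    """Placeholder for whatever the assistant does with an invoked request."""
    return f"Working on: {utterance!r}"


def listen(lines: list[str]) -> list[str]:
    """Respond only to lines that start with the wake word; ignore everything else."""
    responses = []
    for line in lines:
        words = line.strip().split(maxsplit=1)
        if words and words[0].lower() == WAKE_WORD and len(words) > 1:
            responses.append(handle_utterance(words[1]))
    return responses


if __name__ == "__main__":
    print(listen(["aria what's the weather?", "just talking to myself"]))
```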

Aaron Miller, director of solutions engineering at Agent.ai, a startup focused on customer support chatbots, agrees with Mr Mortensen. “Ideally, users should be free to choose the gender they feel most comfortable with. But the message is more important than the medium. The words that Stephen Hawking expresses via his synthesised voice far outweigh their mode of delivery,” he says. “As gender roles shift, our attitudes towards AI and one another will evolve because gender doesn’t have a traditional role in this new world order.”

Jason Alan Snyder, chief technology officer at Momentum Worldwide, says: “A brand is a metaphor for a story and the chatbot’s personality, and potentially its gender, are part of its story and therefore its brand identity. But we don’t need to make AI conform to the gender binary of humanity in order to like it.

“It is human nature that we anthropomorphise AI. We’ve been talking to objects for years, and we give them names and genders, but now that they are talking back and taking decisions about us and on our behalf, we have a moral duty to take a humanistic and responsible approach to AI.”