Q&A: Can computers have ‘conversations’ with humans?

Computers have become a lot smarter; we can take that as read. What matters now is the way computers interact with us. Speech recognition and other forms of artificial intelligence are no longer simply about machines being able to “hear” us or react to our commands: we now expect computers to actually understand us and to develop contextual awareness of what we are talking about. It’s as if we expect to have “conversations” with our devices. At natural language interaction specialist Artificial Solutions, we have built a new bridge in the conversational conduit between humans and machines.

Why is computer speech recognition such a big ask in the first place?

Human languages are riddled with colloquial nuances, changeable dialects and a multiplicity of accents. Then there is the challenge of homonyms – words spoken the same way but with different meanings – so it’s tough for a machine to know the difference between site, sight and cite, for example. Cricket is a game, but it’s also an insect, and so on. We have built a level of contextual analytics into speech recognition so that machines can reason that cricketers seldom talk about insects – unless they are in Asia, perhaps, but then we can program that location awareness in too. Think about how many ways we humans say yes: yep, yeah, exactly, OK, affirmative, righto and ten-four. This is tough IT engineering.
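
To make the idea concrete, here is a minimal, purely illustrative Python sketch of context-biased disambiguation and yes-variant normalisation. The data, senses and function names are invented for this example; this is not our production analytics:

```python
# Minimal illustrative sketch of context-biased disambiguation.
# Each candidate sense of an ambiguous word carries a weight per
# conversation domain (e.g. sport vs. nature); the active domain
# decides which sense wins.

SENSES = {
    "cricket": [
        {"sense": "the sport", "domains": {"sport": 0.9, "nature": 0.1}},
        {"sense": "the insect", "domains": {"sport": 0.1, "nature": 0.9}},
    ],
}

# Many surface forms of "yes" normalised onto one canonical intent.
AFFIRMATIVES = {"yes", "yep", "yeah", "exactly", "ok", "affirmative", "righto", "ten-four"}

def disambiguate(word: str, active_domain: str) -> str:
    """Pick the sense of `word` most consistent with the current domain."""
    candidates = SENSES.get(word.lower(), [])
    if not candidates:
        return word  # no known ambiguity
    best = max(candidates, key=lambda c: c["domains"].get(active_domain, 0.0))
    return f"{word} ({best['sense']})"

def is_affirmative(utterance: str) -> bool:
    """Map the many colloquial ways of saying 'yes' to a single intent."""
    return utterance.strip().lower().rstrip(".!") in AFFIRMATIVES

print(disambiguate("cricket", "sport"))  # -> cricket (the sport)
print(is_affirmative("Righto!"))         # -> True
```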

Why is it so difficult to give computers conversation power?

As clever as it is, plain old automated speech recognition technology is becoming commoditised and, in some cases, free. What we are doing is breaking sentences up into blocks and providing contextual conversation memory so that, for example, a virtual assistant can thread one request to the user’s previous question or comment. We have built our Indigo virtual assistant to showcase this power. If I ask “tell me how to get to Liverpool”, Indigo will offer map directions. If I then follow up with “and now Bristol”, Indigo knows I am still having a conversation about directions and offers more route options; it does not treat this as a new topic and offer Wikipedia pages on Bristol. It is at this crucial point that we start to give computers conversation power.
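
A rough sketch of that threading behaviour, with invented names and deliberately simplified logic (this is not Indigo’s actual implementation), might look like this in Python:

```python
# Illustrative sketch of contextual conversation memory: an elliptical
# follow-up like "and now Bristol" inherits the intent of the previous
# turn instead of being treated as a brand-new topic.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConversationMemory:
    last_intent: Optional[str] = None
    slots: dict = field(default_factory=dict)

def handle(utterance: str, memory: ConversationMemory) -> str:
    text = utterance.lower()
    if "how to get to" in text:
        destination = text.split("how to get to")[-1].strip()
        memory.last_intent = "directions"
        memory.slots["destination"] = destination
        return f"Directions to {destination.title()}"
    if text.startswith("and now") and memory.last_intent == "directions":
        destination = text.removeprefix("and now").strip()
        memory.slots["destination"] = destination
        return f"Directions to {destination.title()}"  # threaded, not a new topic
    memory.last_intent = "lookup"
    return f"Encyclopedia entry for {utterance}"

memory = ConversationMemory()
print(handle("Tell me how to get to Liverpool", memory))  # Directions to Liverpool
print(handle("and now Bristol", memory))                  # Directions to Bristol
```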

So will Artificial Solutions make computer conversations more human-like?

It’s not just a question of them being more human-like, although we have created a new level of informal realism that is more tangible and can even be chatty if you want. What we are actually building with Teneo is a computer brain that’s smarter – one that can automatically tailor communication to each unique interaction, one that can track historical interactions much as human memory does, and one that has “meta-level” awareness of the rest of the world, albeit awareness drawn from the internet. The free-format, unstructured content of most human conversations makes it hard for computers to understand a user’s true intent. To get at that intent, we use a hybrid combination of machine learning and a rules-based software engine. Machine learning is all about software being able to crunch through a corpus of information, so it provides massive breadth to the machine brain. Rules-based engines allow us to be much more specific about decisions based on defined intelligence, so that gives us precision. When you mix breadth of knowledge with precision intelligence, you get smart people or smart machines.
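
As a hedged illustration of that hybrid approach (not Teneo’s actual engine; the intents, patterns and threshold are invented), the following Python sketch shows hand-written rules providing precision on top of a stand-in statistical classifier providing breadth:

```python
# Hybrid intent classification sketch: a statistical classifier guesses an
# intent with a confidence score (breadth), while hand-written rules
# override it on patterns the business must get right (precision).

import re

def ml_classify(utterance: str) -> tuple:
    """Stand-in for a trained model: returns (intent, confidence)."""
    if "bill" in utterance.lower():
        return ("billing_question", 0.72)
    return ("small_talk", 0.40)

RULES = [
    # (pattern, intent): defined intelligence, checked before trusting the model.
    (re.compile(r"\bcancel\b.*\baccount\b", re.IGNORECASE), "cancellation"),
    (re.compile(r"\brefund\b", re.IGNORECASE), "refund_request"),
]

def classify(utterance: str) -> str:
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent            # precision: a matching rule wins outright
    intent, confidence = ml_classify(utterance)
    if confidence >= 0.6:
        return intent                # breadth: trust the model when it is confident
    return "handover_to_human"       # neither source is sure

print(classify("I want to cancel my account"))  # cancellation
print(classify("Question about my bill"))       # billing_question
```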

Who uses this kind of conversational intelligence technology?

The implementation of this kind of technology works particularly well where clients have large customer bases and need to automate conversations between customers and companies – think telecoms, financial firms and the modern web-connected retail business. It also works effectively in travel and leisure, and in utilities. These are the types of firm that can benefit from automated intelligence to handle customer, and often employee, requests at a more sophisticated level. Essentially, they tend to be enterprises that need solutions in multiple languages. This is why we have built Teneo with a specific integration element, so firms can use our software in a white-label or “vanilla” format: they can use it to drive the front end of their own user interface, whatever the industry or use case. We think natural language technologies will be as fundamental to company business by 2020 as a firm’s website is today.
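
That white-label pattern is easier to see in code. The sketch below is hypothetical – the endpoint URL and function names are invented for illustration – but it shows the shape of the integration: the client firm keeps its own branded front end and hands the understanding work to a shared language backend:

```python
# Hypothetical white-label integration sketch: the firm's own front end
# owns the look and feel, while a shared natural-language backend
# (placeholder endpoint, invented for this example) does the understanding.

import json
import urllib.request

NLU_ENDPOINT = "https://nlu.example.com/v1/understand"  # placeholder URL

def ask_backend(utterance: str, language: str = "en") -> dict:
    """Send the user's utterance to the language backend; the UI stays ours."""
    payload = json.dumps({"text": utterance, "language": language}).encode()
    request = urllib.request.Request(
        NLU_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# The retailer, bank or utility then renders the answer in its own branded UI:
# reply = ask_backend("Where is my order?", language="de")
# render_in_company_chat_widget(reply["answer"])
```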

Are humans ready to welcome the idea of talking to machines?

A recent global research study suggested that 68 per cent of people already use a voice assistant service such as Apple Siri, Microsoft Cortana, Amazon Alexa, Google Now or our own Indigo. At this stage of acceptance, 90 per cent of us say we want to know whether we are speaking to a virtual assistant or a human: disclosure is vital. In the next five to ten years, we anticipate, people actually won’t mind.

Are we in danger of our computers becoming self-aware and taking over the world?

We are already building computer intelligence with the worst-case scenarios in mind. Firms that use these systems don’t want their conversation engines suddenly recommending a competitor’s product. So while we keep the doorway open for the computer brain to learn, we also define the parameters of knowledge within which it is allowed to educate itself. And while we may allow an element of humour, we can program against any suggestion of sexism, racism or other inappropriate behaviour. We, as humans, like to think we’re the smartest things on the planet. For now, I think we’re all happy to keep it that way.
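
A toy sketch of those guardrails, with topic and label names invented for illustration, might look like this: the brain may keep learning, but only inside a defined boundary, and certain behaviours are vetoed outright regardless of what it has learned.

```python
# Illustrative guardrail sketch: learning stays inside defined parameters,
# and certain behaviours are blocked outright.

ALLOWED_TOPICS = {"billing", "orders", "returns", "product_info"}     # learning boundary
BLOCKED_LABELS = {"competitor_recommendation", "sexism", "racism"}    # hard vetoes

def within_parameters(topic: str) -> bool:
    """Only let the brain educate itself inside the defined knowledge boundary."""
    return topic in ALLOWED_TOPICS

def safe_to_say(candidate_reply_labels: set) -> bool:
    """Veto any reply flagged with a blocked behaviour."""
    return not candidate_reply_labels & BLOCKED_LABELS

print(within_parameters("orders"))                      # True
print(safe_to_say({"humour"}))                          # True
print(safe_to_say({"competitor_recommendation"}))       # False
```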

For more information please visit www.artificial-solutions.com