When a robot says “my bad”….

David Wright
4 min read · Jul 16, 2017


Just now, I was sending an email from Outlook and mentioned the word attachment in the copy: “he has a strange attachment to his Echo”. I sent it, but first Outlook prompted me: “hey, you didn’t attach anything?”. These little prompts also appear in Gmail, LinkedIn and Facebook Messenger. From the millions of messages they see every day, these services know how people usually respond and where a sender slips up (“sorry guys, here’s the attachment I promised”).

So they offer tappable quick-reply suggestions: [I’m busy right now] [I’m not interested] [Ok, I’m interested, I’ll get back to you later] [Sounds good, I’ll meet you there]. This subtle AI is everywhere; it is the AI Iceberg, the auto-complete of everything.
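Under the hood this is presumably learned from those millions of messages, but a toy version of the attachment prompt can be sketched as a keyword heuristic. Everything below, including the function name and the word list, is my illustrative assumption, not how Outlook actually does it:

```python
import re

# Minimal sketch (assumption, not Outlook's real logic): scan the draft for
# words that usually accompany a file, and warn if nothing is attached.
ATTACHMENT_HINTS = re.compile(r"\b(attach(ed|ment)?|enclosed)\b", re.IGNORECASE)

def should_prompt(body, attachments):
    """True if the copy mentions an attachment but none is present."""
    return bool(ATTACHMENT_HINTS.search(body)) and not attachments

# The article's own example is exactly the false positive a keyword
# heuristic produces: no file was ever meant to be attached here.
print(should_prompt("he has a strange attachment to his Echo", []))  # True
```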

The non-subtle face of AI is handled by digital assistants, where people interact with software via chat or voice as if they were talking to a helpful friend. This is the current cutting edge, and it is a brutal place to work.

Here is a classic example from a colleague trying a research chatbot.

Where I find ‘talking’ to bots frustrating is when the bot and I are clearly at cross purposes. For example, I said, “WhatsApp was something I liked — now there is such a deluge of messages every day, I’m getting a bit fed up.” The bot asks, “What’s behind that?” I thought the idea of a deluge of messages was crystal clear, so I said, “I don’t understand what you want to know.” And the bot replies, “That’s really helpful XXX.” — and then I really don’t feel like continuing the conversation.

The chatbot can identify keywords and maybe some sentiment, but it has no real understanding of, or context for, what was just said. The best you can do is come up with a formula that calculates the richness of the response by counting the number of keywords triggered (a classifier), the number of words and unique words, the sentiment and so on.
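As a sketch, such a formula might look like the following. The keyword list, the weights and the stubbed sentiment() helper are all assumptions for illustration, not anyone’s real scoring function:

```python
KEYWORDS = {"deluge", "messages", "fed up"}   # illustrative classifier triggers

def sentiment(text):
    """Stub standing in for a real sentiment classifier (-1.0 to 1.0)."""
    return -0.4

def richness(text):
    lowered = text.lower()
    words = lowered.split()
    hits = sum(1 for k in KEYWORDS if k in lowered)  # keywords triggered
    return (2.0 * hits                  # classifier hits weigh most
            + 0.1 * len(words)          # response length
            + 0.2 * len(set(words))     # unique vocabulary
            + abs(sentiment(text)))     # strength of feeling, either way

print(richness("there is such a deluge of messages every day, I'm fed up"))
```

Note that this scores how much material the user gave the bot, not whether the bot understood any of it, which is exactly the limitation described above.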

So companies like Amazon (Echo/Alexa) try to solve this by asking the developer to identify all the likely things that someone will respond with for a given question. This is categorized as trying to understand the user’s “intent”. However, a typical Amazon-style “intent” is really about a user trying to achieve something specific, like buying something, searching for something or issuing an order to an IoT device (“turn the lights on!”).
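Loosely modeled on the Alexa skill interaction model (the names and exact fields here are illustrative; Amazon’s docs define the real schema), an intent declaration enumerates the goal and the phrasings the developer predicts up front:

```python
# An Alexa-style intent, expressed as a Python dict for illustration only.
# The developer enumerates the specific goal and likely phrasings in advance.
turn_lights_on_intent = {
    "name": "TurnLightsOnIntent",                      # the specific goal
    "slots": [{"name": "Room", "type": "ROOM_TYPE"}],  # variable pieces
    "samples": [                                       # predicted utterances
        "turn the lights on",
        "turn on the {Room} lights",
        "lights on in the {Room}",
    ],
}
```

This works precisely because the space of things a user might mean is small and enumerable in advance.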

The intent framework does not work when a bot is trying to work out what is troubling someone.

So intelligent “conversational probing” only exists in the future.

The more immediate and pressing problem is consumer expectation of AI versus AI’s actual ability. If you create a conversational interface, then people will want to converse with it. The benchmark is, “it needs to be like talking to a friend or a shop assistant”. The user quickly realizes “ok, nope”. So how close can you get to talking naturally? Again, “ok, nope”. Sadly, when your conversational interface improves, all that does is immediately raise the user’s expectations to another level, which you again fail to meet.

Consumer Expectation versus AI Ability

However, what I think will happen (as in the graph above) is that people will get used to talking to AIs in a sort of “sign language” (respecting the limitations of the AI), and expectations will drop off. In the meantime, AI will improve exponentially. So I think we will be in that AI Iceberg situation again, as the conversational interfaces we interact with get far smarter than we realize (Google Search being a great example of this).

So when will a robot be able to say “my bad”….

The “my bad” scenario, in chatbot talk, is where the bot insists on pursuing a conversational path even though the user/customer is getting frustrated that this is off-topic and not meeting their needs: “Ok Google, stop! No! No! You aren’t understanding me!!!”.

So “my bad” is the bot realizing that the route it was taking the customer down was the wrong one, wryly apologizing and resetting the situation.
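A hypothetical version of that reset could watch for consecutive pushbacks and bail out. The cue list, the two-strike threshold and the helper function are all invented for illustration:

```python
FRUSTRATION_CUES = ("no", "stop", "that's not", "you aren't understanding")

def next_step_on_current_path(state):
    """Stub for whatever the bot would say next on its current route."""
    return "Tell me more about that."

def respond(state, user_text):
    lowered = user_text.lower()
    if any(cue in lowered for cue in FRUSTRATION_CUES):
        state["strikes"] = state.get("strikes", 0) + 1
    else:
        state["strikes"] = 0
    if state["strikes"] >= 2:        # two pushbacks in a row: wrong path
        state.clear()                # drop the conversational route
        return "My bad, I went off track. What did you actually want?"
    return next_step_on_current_path(state)

state = {}
print(respond(state, "Ok Google stop no! no! you aren't understanding me!!!"))
print(respond(state, "no, stop"))   # second strike triggers the reset
```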

There will be a real marketplace for such services — the “my bad” understanding service, the “small talk” detection service and so on.

In fact, “small talk” detection already exists (https://api.ai/docs/reference/small-talk) and it is very helpful. Ok, so you want to talk about the weather? Let’s do that for a bit before I try to sell you some more shoes!
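From memory of the 2017-era api.ai v1 query API (verify against the docs linked above; the “smalltalk.” action prefix in particular is how I recall matches being labeled), a bot could branch on small talk like this:

```python
import requests

def is_small_talk(text, client_token, session_id):
    """Ask api.ai to classify an utterance; True if it matched small talk."""
    resp = requests.post(
        "https://api.api.ai/v1/query",
        params={"v": "20150910"},                     # API version, as I recall
        headers={"Authorization": "Bearer " + client_token},
        json={"query": text, "lang": "en", "sessionId": session_id},
    )
    action = resp.json().get("result", {}).get("action", "")
    return action.startswith("smalltalk.")

# if is_small_talk(user_text, TOKEN, SESSION):
#     chat_about_the_weather()   # hypothetical detour before selling shoes
```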

David Wright is a Conversational AI Researcher at Hello Ara (http://helloara.io)
