A new technology has now captured the imagination of CX directors everywhere – chatbots.
Their promise is attractive: they can automate the way customers interact with brands, avoiding the huge costs of human labour. Of course, the movement of customer service to online channels is nothing new. A 2015 McKinsey survey estimated that digital-care channels (e.g. web chat, social media, and email) accounted for 30 percent of customer-care interactions, and projected that this would grow to 48 percent by 2020.
The difference with chatbots is the way they use AI and machine learning techniques to provide meaningful responses at scale.
China provides good use cases for this, with 600m people using the messaging app WeChat for a variety of tasks, from booking doctor’s appointments to paying utility bills. And in 2016, KLM was one of the first to launch a chatbot for Facebook Messenger. The service enables KLM flyers to automatically receive their itinerary, flight updates, check-in notifications and boarding passes, and even rebook flights, all from one thread within Facebook Messenger.
These sorts of examples support heady projections, with Gartner estimating that more than 85% of customer interactions will be managed without a human by 2020. But perhaps the real excitement of chatbots, versus other forms of digital care, is their seeming potential to offer an almost human-like form of engagement.
Chatbots are, well, chatty. Their responses can be crafted to offer a personable style of communicating. Sometimes, perhaps a little too personable, as some brands have found to their cost. Coca-Cola’s automated #MakeItHappy campaign in 2015 was suspended after it was tricked into tweeting lines from Adolf Hitler’s “Mein Kampf”. Similarly, in 2016 Microsoft’s AI chatbot “Tay” was taken offline within hours of launching for posting racist and genocidal tweets.
Nevertheless, our desire to engage with machines in a human way has long been understood by psychologists and has been dubbed ‘the Eliza effect’ after a computer programme named Eliza, developed by MIT computer scientist Joseph Weizenbaum. The programme was designed to mimic a psychotherapist, largely by rephrasing the patient’s replies as questions.
Weizenbaum was famously surprised by the enthusiasm of his secretary for interacting with Eliza, despite her knowing it to be a computer programme. He considered that this reflected a ‘powerful delusional thinking in quite normal people’.
Of course, this apparent tendency to treat computers in a human-like way has considerable potential benefits for companies, as it suggests the promise of being able to offer emotional engagement as well as efficient, low-cost servicing.
But is it this simple?
Writer and commentator Sherry Turkle has explored the way that people engage with technology such as Eliza and concludes that users are well aware of its limitations. As was the case with Tay, some people embark on an all-out effort to trick such programmes and expose them as ‘mere machines’. Others will happily collude in the delusion of the machine as lifelike, going out of their way to ensure that their questions do not ‘confuse it’.
We therefore need to be careful with the idea that chatbots can offer emotional engagement. An onlooker may assume that the way a user communicates with a machine is emotionally engaging – but the user themselves may not feel that way. The reality is a more complex set of interactions that may still have value, just not the value that we think.
Another difficulty comes when customers don’t know whether the text they are seeing is coming from a machine or a human. The expectations and norms that we all bring to a dialogue with a computer are likely very different from the ones we bring to other humans. And for many customer service interactions, there is no guarantee that we are dealing with a bot even when we think we are. Microsoft researchers Mary L. Gray and Siddharth Suri recently pointed out that “Much of the crowd-work done on contract today covers for AI when it can’t do something on its own… real live human beings clean up much of the web, behind the scenes.”
The problem is that this lack of clarity could bring unintended consequences for brands relating to the notion of ‘uncanny valley’. This was a term first used in 1970 by Japanese roboticist Masahiro Mori, who noted that although we tend to warm to robots that have some human features, we tend to be disconcerted by them if they start becoming too realistic.
The uncanny valley effect has been blamed for the failure of a number of films whose CGI characters were very human-like while the audience remained aware that they were in fact animations. The movie ‘Polar Express’ is often cited as an example where the effect left it with lacklustre box-office sales. Research I have conducted indicates that this uncanny feeling is bad news for brands, as it creates a sense of distance – the exact opposite of what was intended. Chatbots are in danger of exacerbating this very issue.
A final complexity for brands and their relationship with customers is the way in which customer data is increasingly used to personalise the customer experience. Brands are developing psychological profiles of consumers based on their online data, whether social media or transaction data. And while chatbots offer the promise of delivering this personalisation at scale, it’s hard to know how to respond to a bot that appears to know something about your personality.
As commentator Sarah Watson points out, ‘We don’t often get to ask our machines, “What makes you think that about me?”.’ The desire to create intimacy and close relationships can work if done carefully and openly but done badly can feel creepy and intrusive.
The technology of customer experience is currently racing ahead of our understanding of the consumer response. As we each know from our daily lives, it is all too easy to look at an interaction where superficially all is well but still have an uncomfortable feeling that something is not quite right. Companies recognise the importance of emotional connection to drive long-term relationships. But quite whether chatbots can help deliver this remains to be seen.