Hey Siri, Alexa, and OK Google have all become common commands in the emerging age of intelligent assistants. In less than five years, asking a chatbot to tell you the weather forecast or buy your groceries has transformed from something nobody thought possible to an everyday occurrence.
Chatbot advancements over the years aren’t limited to Google Home being able to turn your lights on and off (although that has certainly given The Clapper a run for its money). They’ve become smarter, faster, and more useful. Done right, they hold the potential for more natural customer experiences and greater organizational efficiency, handling and resolving more queries without increasing your operational costs.
Unfortunately, with all the hype around artificial intelligence as the be-all, end-all solution to your customer service pains, many organizations rushed in without thinking about how to properly implement it. Poor execution keeps them from taking advantage of the benefits available here and now, and leaves much to be desired in some chatbots’ present-day performance.
Here are 10 examples of chatbot fails and how they can be set straight.
One of the most frustrating experiences is calling an organization and having its IVR system run you around in circles. This is the chatbot equivalent. It’s especially frustrating because, unlike an IVR where you can normally press “0” to skip the canned greetings and reach a representative, chatbots adhere to a predefined script that doesn’t let you veer off course as easily. While chatbots are extremely useful for many use cases, they will only annoy customers who want to speak to a human but aren’t given the option.
How to avoid this:
No matter what type of chatbot you have, you should always give visitors the option to transfer to an agent at any point in the conversation. This can be achieved in many ways, such as building an intent that offers to escalate when a visitor types “speak to an agent/human/person”. You can also bypass the chatbot altogether if you identify a VIP customer.
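To make this concrete, here’s a minimal sketch of what such an escalation check might look like. The keyword pattern, the visitor profile fields, and the routing labels are hypothetical stand-ins for whatever your chatbot platform actually provides, not any specific vendor’s API.

```python
import re

# Hypothetical escalation phrases a visitor might type at any point in the script.
ESCALATION_PATTERN = re.compile(
    r"\b(speak|talk|connect)\b.*\b(agent|human|person|representative)\b",
    re.IGNORECASE,
)

def wants_human(message: str) -> bool:
    """Return True if the visitor is asking for a live agent."""
    return bool(ESCALATION_PATTERN.search(message))

def route_message(message: str, visitor: dict) -> str:
    """Decide whether the bot or a live agent should handle this message."""
    # VIP customers skip the bot entirely (hypothetical flag on the visitor profile).
    if visitor.get("is_vip"):
        return "transfer_to_agent"
    # Any request for a human escalates, no matter where we are in the script.
    if wants_human(message):
        return "transfer_to_agent"
    return "continue_with_bot"

if __name__ == "__main__":
    print(route_message("Can I speak to a real person please?", {"is_vip": False}))  # transfer_to_agent
    print(route_message("What are your opening hours?", {"is_vip": False}))          # continue_with_bot
```

The important design choice is that the check runs on every message, so a customer can ask for a human at any point in the conversation, not just at a designated step in the flow.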
Poncho was a weather bot that was supposed to let users look up the weather based on their location data. Unfortunately, it fell a little short (the bot is no longer active). Poncho could only respond to a very specific subset of keywords: rather than recognizing “weekend”, it only recognized days of the week like “Saturday” and “Sunday”. This is a classic case of a chatbot being programmed for one scenario without thinking about how people actually ask questions in conversation.
How to avoid this:
This is where a chatbot’s Natural Language Processing (NLP) capabilities really matter. Implementing a bot with NLP mitigates this situation, as it will understand users no matter how a question is phrased. If you choose a non-NLP bot, then stick to buttons and remove the ability to type free text to avoid these situations.
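To illustrate the gap, here’s a small sketch contrasting strict keyword matching with a thin normalization layer that maps everyday phrases like “weekend” onto the keywords the bot knows. This isn’t how Poncho actually worked, and a real NLP engine would use trained intent and entity models rather than a hand-built map; the names below are illustrative only.

```python
# Days the strict, keyword-only bot understands.
KNOWN_DAYS = {"monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"}

# A small normalization layer mapping everyday phrases onto those keywords.
# A trained NLP model would handle far more variation than this hand-built map.
PHRASE_TO_DAYS = {
    "weekend": ["saturday", "sunday"],
    "weekdays": ["monday", "tuesday", "wednesday", "thursday", "friday"],
}

def extract_days(message: str) -> list[str]:
    """Return the day(s) the user is asking about, or an empty list if nothing matched."""
    words = message.lower().replace("?", "").split()
    days = [w for w in words if w in KNOWN_DAYS]
    for phrase, mapped in PHRASE_TO_DAYS.items():
        if phrase in words:
            days.extend(mapped)
    return days

if __name__ == "__main__":
    # The keyword-only path understands this...
    print(extract_days("What's the weather on Saturday?"))   # ['saturday']
    # ...but without the normalization layer, this would come back empty.
    print(extract_days("What's the weather this weekend?"))  # ['saturday', 'sunday']
```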
You may remember Tay, Microsoft’s disastrous machine learning bot from 2016. Tay was deployed on Twitter and was claimed to be able to “learn” conversational speech by interacting with users’ tweets. She was taken down 16 hours later after going on a raging racist, homophobic rampage, the result of Twitter trolls purposely “teaching” her inappropriate phrases and ideologies.
How to avoid this:
For some reason, when we learn that we’re talking to a chatbot, our first instinct is to have some fun with it. This could have been mitigated if Microsoft had applied a filter that directed her to ignore inappropriate interactions. Bottom line? Don’t release a bot into the wild before it’s ready: think through the scenarios it will face and create a risk mitigation plan.
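One simple guardrail along these lines is a moderation gate that checks every incoming message before the bot replies to it or learns from it. The blocklist below is a deliberately minimal sketch with placeholder terms; a production bot would rely on a proper moderation model or service rather than a hand-maintained word list.

```python
# Minimal, illustrative blocklist; a real deployment would use a dedicated moderation model/service.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2", "hateful_phrase"}

def is_inappropriate(message: str) -> bool:
    """Flag messages containing blocked terms so the bot neither answers nor learns from them."""
    lowered = message.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def handle_message(message: str, training_buffer: list[str]) -> str:
    if is_inappropriate(message):
        # Ignore the interaction entirely: no reply to amplify it, no addition to training data.
        return "I'm not able to help with that."
    training_buffer.append(message)  # only clean interactions feed future learning
    return "Thanks! Let me look into that for you."

if __name__ == "__main__":
    buffer: list[str] = []
    print(handle_message("What's the weather like today?", buffer))
    print(handle_message("hateful_phrase about some group", buffer))
    print(buffer)  # only the clean message was kept for learning
```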
At first glance, this chatbot seems perfectly normal. But look a little closer and you’ll notice that all the options offered are external links directing you away from the chatbot. When the greeting message deflects the user from using the chatbot, the opportunity to engage the customer is lost. It becomes nothing more than a navigation tool, instead of the friendly personalized helper it can be.
How to avoid this:
Only include external links in responses where a user explicitly asks for them. If you want users to actually interact with your chatbot, don’t create pathways away from it.
Customers want service in the channels they prefer, on their own terms. As such, channel pivoting – which is when a customer contacts you and is then directed to another channel – is a sure-fire way to undermine customer satisfaction and loyalty. Customers contact you via chat because they want answers to their questions right away, not because they want to be spun around in circles.
How to avoid this:
When it comes to customer service, you need to go all in. While you do need to think about the resource implications for your business, any customer-centric company knows that the quality of service customers receive matters more than the quantity of channels on offer. Either offer the option to transfer to a human agent, or initiate the channel transfer for the customer through a form instead.
Without asking me any qualifying questions or confirming my interest, this chatbot sent me “tips” on Facebook Messenger marketing automation. Proactive chat, just like chatbots, has the potential to help the right customers at the right time, when they need assistance. Unfortunately, when chatbots send these clearly unpersonalized, unfriendly, and unhelpful chat invitations, they quickly become an annoyance. In this case, my conversation ended when I asked to talk to an agent. I had left the chat, but it seems this chatbot decided it wasn’t done with me.
How to avoid this:
Your chatbot’s proactive chat invitation criteria should be clearly defined, and invitations should only be sent to visitors who meet them. Create specific use cases where proactive outreach makes sense (like a visitor spending more than one minute on a page) and customize your invitation wording to those scenarios.
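As a sketch of what “clearly defined criteria” can look like in practice, the rules below only fire an invitation when a visitor has spent long enough on a page that matches a defined use case, and the wording is customized per scenario. The rule structure, page paths, and field names are hypothetical, not any particular product’s configuration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InvitationRule:
    page_prefix: str          # which pages this use case applies to
    min_seconds_on_page: int  # dwell-time threshold before we interrupt anyone
    message: str              # wording customized to the scenario

# Hypothetical rules: one per well-defined use case, nothing generic.
RULES = [
    InvitationRule("/pricing", 60, "Questions about plans or pricing? I can help you compare options."),
    InvitationRule("/docs/setup", 90, "Stuck on setup? I can walk you through the first steps."),
]

def proactive_invitation(page: str, seconds_on_page: int) -> Optional[str]:
    """Return an invitation message only when a visitor meets a rule's criteria."""
    for rule in RULES:
        if page.startswith(rule.page_prefix) and seconds_on_page >= rule.min_seconds_on_page:
            return rule.message
    return None  # no rule matched: stay quiet instead of spamming the visitor

if __name__ == "__main__":
    print(proactive_invitation("/pricing/enterprise", 75))   # matching rule -> customized invitation
    print(proactive_invitation("/blog/chatbot-fails", 300))  # no rule -> None, no invitation sent
```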
When done correctly, chatbots offer personalized responses to specific customer queries. The mistake this chatbot makes is trying to do too much at one time, bombarding the customer with information. This can not only confuse the customer but also cause them to get frustrated trying to sort through all the responses that pop up.
How to avoid this:
Always define your intents as narrowly as possible so your chatbot responds with the appropriate answer. The rule is one answer for one query. While this may take more upfront effort, it will result in a significantly better customer experience.
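As a rough illustration of the “one answer for one query” rule, here’s what splitting a broad intent into narrow ones might look like. The intent names, answers, and URL are made up for the example.

```python
# Too broad: one "shipping" intent trying to answer everything at once would force the bot
# to dump cost, timing, and tracking info into a single overwhelming reply.
BROAD_INTENT = {
    "shipping": [
        "Standard shipping is $5; orders over $50 ship free.",
        "Orders ship within 2 business days.",
        "You can track your order at /orders/track.",
    ],
}

# Narrow: each intent answers exactly one question, so the reply matches what was asked.
NARROW_INTENTS = {
    "shipping_cost": "Standard shipping is $5; orders over $50 ship free.",
    "shipping_time": "Orders ship within 2 business days.",
    "order_tracking": "You can track your order at /orders/track.",
}

def answer(intent_name: str, intents: dict = NARROW_INTENTS) -> str:
    """Return the single answer configured for a recognized intent."""
    return intents.get(intent_name, "Sorry, I didn't catch that. Could you rephrase?")

if __name__ == "__main__":
    print(answer("shipping_cost"))   # one focused reply instead of a wall of text
    print(answer("refund_policy"))   # unknown intent -> graceful fallback
```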
While some of Siri’s responses are meant to be funny (ask her what 0 divided by 0 is), other times she can be downright frustrating. I tried this interaction several more times and no matter how I phrased it, there was no way to tell her to “exit” Apple’s note command.
How to avoid this:
Commands that execute certain actions should only be activated when a customer explicitly asks for them, and should be just as easy to deactivate. In this instance, unless the customer says “open notes” or “note this down for me”, the note command should not have been activated. Chatbots should also recognize a deactivation intent through contextual NLP, such as “that’s not what I meant”, “cancel this note”, or “exit Notes”.
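Here’s a sketch of that activate/deactivate pairing, modeled as a tiny state machine. This is not how Siri is implemented; the trigger phrases and class name are illustrative assumptions.

```python
# Hypothetical trigger phrases for entering and leaving a note-taking mode.
ACTIVATE_PHRASES = ("open notes", "note this down", "take a note")
DEACTIVATE_PHRASES = ("that's not what i meant", "cancel this note", "exit notes")

class NoteMode:
    """Tiny state machine: note-taking only runs between an explicit activation and deactivation."""

    def __init__(self) -> None:
        self.active = False
        self.notes: list[str] = []

    def handle(self, message: str) -> str:
        text = message.lower().strip()
        if any(phrase in text for phrase in DEACTIVATE_PHRASES):
            self.active = False
            return "Okay, I've closed Notes."
        if any(phrase in text for phrase in ACTIVATE_PHRASES):
            self.active = True
            return "Sure, what should the note say?"
        if self.active:
            self.notes.append(message)
            return f"Noted: {message}"
        return "I can help with that."  # normal conversation; note mode never hijacks it

if __name__ == "__main__":
    bot = NoteMode()
    print(bot.handle("What's on my calendar today?"))  # note mode stays off
    print(bot.handle("Note this down for me"))         # explicit activation
    print(bot.handle("Buy milk"))                      # captured as a note
    print(bot.handle("Exit Notes"))                    # explicit deactivation
```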
While the catchy slogan “set it and forget it” may work for slow cookers, instant pots, and PVRs, it does not apply to chatbots. Activating a chatbot without building a functioning interaction workflow sends customers into dead-ends, creating frustration and confusion.
In this case, we can see that an out-of-the-box Facebook Messenger bot was deployed with the default “Get Started” button, but no workflow was created behind it: there was no quick reply, no auto reply, and no additional prompt.
How to avoid this:
Don’t activate a bot thinking it will just take care of itself. Have a fully fleshed-out use case and purpose for the bot, and set a regular cadence to review and revisit it for improvements, or “tuning”.
Finally, we come to the biggest fail of them all. With technology, sometimes for one reason or another it just doesn’t work. Whether it’s a back-end issue, a software issue, or otherwise, all a frustrated customer sees is a non-functional chatbot.
The difference between #9 and #10 is that while #9 has no interaction workflow at all, this one has commands that don’t work. Unlike #9, which uses the out-of-the-box Facebook Messenger “Get Started” button, this Messenger bot has a custom-programmed instruction prompt, and as shown above, that custom feature doesn’t function.
How to avoid this:
Test it, then test it again, and then again. Don’t roll out your chatbot unless you’re confident in its ability to interface with users predictably and consistently. Test it internally first, then bring in somebody outside your department for a field test, then deploy it publicly in a controlled, limited-use environment. You will discover bugs and improvement ideas every step along the way, and you will minimize the risk of poor real-world interactions.
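Alongside manual and field testing, it helps to automate the basics so every change to the bot gets re-checked before release. Below is a minimal pytest-style sketch; the `get_response` function here is a stand-in stub you would replace with a call to your actual bot.

```python
# test_chatbot_smoke.py -- run with `pytest` on every change to the bot.
import pytest

def get_response(message: str) -> str:
    """Stand-in for your real bot's reply function; swap in the actual call when wiring this up."""
    text = message.lower()
    if "hour" in text:
        return "We're open 9am-5pm, Monday to Friday."
    if "shipping" in text:
        return "Standard shipping is $5; orders over $50 ship free."
    if "human" in text or "agent" in text or "person" in text:
        return "Sure, transferring you to a live agent now."
    return "Sorry, I didn't catch that."

# Each case: a realistic customer phrasing and a fragment the reply must contain.
CASES = [
    ("What are your opening hours?", "9am"),
    ("how much is shipping", "shipping"),
    ("Can I talk to a human?", "agent"),
    ("Speak to a person please", "agent"),
]

@pytest.mark.parametrize("message, expected_fragment", CASES)
def test_bot_answers_common_questions(message: str, expected_fragment: str) -> None:
    reply = get_response(message)
    assert expected_fragment.lower() in reply.lower()
```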
The bottom line: chatbots are only as useful as they’re programmed and trained to be. Bots have the potential to be a powerful customer service tool, but only when they’re done correctly. Keep these tips in mind and make sure you have a fully fleshed out deployment and maintenance plan. Chatbots can be easy when you have a vendor partner that makes sure you don’t go it alone.