27 March 2018

Hello, is this a Bot?

Perhaps you know this phenomenon: you call someone and, instead of the ringing tone, you immediately hear a voice answering.

While “please leave a message” used to make it obvious that we had reached an answering machine, telling human and machine apart is becoming increasingly difficult.

Many companies, large and small, offer their customers a chat option, either on their own websites or, more often than not, on social media. Usually the chat starts with a predefined greeting as soon as the chat window is opened. This is where the confusion begins: are you about to have a conversation with a real person, or are you already in the middle of conversing with one of those chatbots you have heard about seemingly everywhere? The question is not as trivial as it seems: depending on whether users think they are talking to a real person or to a machine, the eventual revelation can have very different consequences.

That’s a human being! Or is it?

Many chatbots rely on keywords. They pick out a specific word in the user's input and return a prefabricated answer that best fits that keyword. Since these chatbots hardly generate any text themselves, their responses, typed in advance by humans, are usually grammatically correct. Even if an answer misses the topic slightly, the user does not become suspicious right away. We know this style of broad, somewhat imprecise communication from years of interacting with human employees, which is why we have built up a certain tolerance for it. But by the third very general, sometimes completely off-topic answer at the latest, the chat starts to seem suspicious. Frustration and indignation arise when it turns out the user has been talking to a chatbot while expecting a real human being. In itself this is nothing terrible, provided it is clearly announced that the conversation is being held with a chatbot; some customers may even find the surprise amusing. Depending on the company's profile, however, that may not be the reaction it wants from potential customers.
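The keyword mechanism described above can be sketched in a few lines. This is a minimal illustration, not any particular product's implementation; the keywords and canned replies are invented for the example:

```python
# Minimal sketch of a keyword-based chatbot: scan the input for known
# keywords and return the matching prefabricated answer.
# Keywords and replies below are invented for illustration.
RESPONSES = {
    "delivery": "Your order usually arrives within 3-5 business days.",
    "refund": "Refunds are processed within 14 days of receiving the return.",
    "opening": "Our support team is available Monday to Friday, 9 am to 5 pm.",
}

FALLBACK = "Thank you for your message! Could you tell me a bit more?"

def reply(user_input: str) -> str:
    """Return the canned answer for the first keyword found, else a fallback."""
    text = user_input.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    # No keyword matched. This generic fallback is exactly the kind of
    # vague, off-topic reply that eventually makes users suspicious.
    return FALLBACK

print(reply("When does my delivery arrive?"))
print(reply("What is the meaning of life?"))  # falls back to the generic reply
```

Because every reply was written by a human in advance, each one is grammatically flawless; the giveaway is the fallback branch, which surfaces as soon as the input contains no known keyword.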

That’s a bot! Or is it?

Prefabricated answers are also used by human customer service. It is an art in itself to address customers with the same enthusiasm all day long, and when support staff talk to users via chat, producing standard greetings and answers at the push of a button is very convenient. If users notice that the texts are ready-made, for example because of the unusually fast response time, they may assume they are talking to a chatbot. From that point on their inhibition threshold drops considerably, which makes the discovery that a person was on the other end all the more unpleasant. Sooner or later the canned answers will no longer suffice and the agent will have to type messages manually. If it then gradually becomes clear that a person has been reading along the entire time, users may perceive this as deception. The feeling of having been “secretly” observed by someone is not something you want to expose your customers to.

Honesty is the best policy

Although the experience is far more negative when users have been talking to a person while assuming it was a bot, the reverse surprise is not desirable either. The takeaway: users must always be made aware of who, or what, they are talking to.

