Customer communication is evolving faster than ever. Where paper-based communications sent through the mail were once the norm, almost everything is now done electronically across a range of channels. Two technologies that are set to have a particularly big impact going forward are chatbots and machine learning.
Chatbots are, simply put, pieces of software designed to mimic human interaction. Over the past few years, they’ve become increasingly sophisticated, moving beyond their early use cases.
Not that long ago, you might have been able to ask for a recipe or get basic customer support and that was about it.
Today, chatbots can do incredibly innovative things across a number of fields, including financial services.
Bank of America’s Erica, for example, can send notifications to customers, provide balance information, suggest how to save money, provide credit report updates, pay bills and help customers with simple transactions.
Locally, Absa now allows its customers to get their bank balance, buy electricity, purchase airtime and data, and also pay beneficiaries via WhatsApp.
While some of these functions are strictly transactional, others fall firmly within the sphere of customer communications.
This only becomes more apparent when you consider the potential use of chatbots for marketing and informing customers about product updates.
Unlike traditional forms of customer communication, chatbots allow customers to receive communications when they want them, and allow organisations to send them out immediately.
When integrated with technologies like machine learning, chatbots can also be used to ensure that customers only receive communications which are relevant to them, thereby avoiding communication fatigue.
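The relevance filtering described above can be as simple as scoring each outbound message against a customer's known interests and suppressing anything below a threshold. The sketch below illustrates the idea in Python; it is a minimal, hypothetical example (all names, topics and the threshold are invented for illustration), not a production recommender or any particular bank's implementation.

```python
# Hypothetical sketch: suppress outbound chatbot messages that are not
# relevant to a given customer, to reduce communication fatigue.
# All names, topics and thresholds here are illustrative assumptions.

def relevance_score(message_topics, customer_interests):
    """Fraction of the message's topics that match the customer's interests."""
    if not message_topics:
        return 0.0
    matches = sum(1 for topic in message_topics if topic in customer_interests)
    return matches / len(message_topics)

def filter_messages(messages, customer_interests, threshold=0.5):
    """Keep only messages whose relevance meets the threshold."""
    return [
        m for m in messages
        if relevance_score(m["topics"], customer_interests) >= threshold
    ]

# Example: a customer interested in savings and credit products.
customer_interests = {"savings", "credit"}
outbox = [
    {"text": "New savings account rates", "topics": ["savings"]},
    {"text": "Forex travel card offer", "topics": ["forex", "travel"]},
]

for message in filter_messages(outbox, customer_interests):
    print(message["text"])  # only the savings message is sent
```

In practice the score would come from a trained model rather than a keyword match, but the gating logic (score, then threshold) is the same.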
As promising as chatbots are, it’s imperative that organisations deploying them do so safely.
If an organisation develops its chatbot in-house, at the very least it needs to comply with the same security standards as communications sent out via email and mobile. While this is the more difficult route, it may be the best solution for organisations that are confident in their development capabilities.
The simpler route is to build on third-party platforms like Facebook Messenger and WhatsApp. Not only are chatbots simpler to build on these platforms, the organisations behind them also have massive resources to put into security. It's worth bearing in mind, however, that they need those resources precisely because they're so big, and are therefore serious targets for malicious actors.
Ultimately, organisations need to assess their own capabilities and make the choice most likely to be in the best interests of their customers.
Education is still critical
While chatbots and their attendant technologies might seem new and exciting, the general safety protocols around them remain the same.
No matter what platform they’re delivered on, electronic communications have potentially serious security risks that must be mitigated on both the business and consumer side.
While the AI component of chatbots may make breaches resulting from employee error less of a worry, the emerging nature of the technology makes consumer education especially important.
Organisations need to make concerted efforts to ensure that customers know what information they can safely share with a chatbot, what their official chatbots look like, and what they’ll never ask for.
As with phishing scams, it’s also imperative for organisations to inform customers about any chatbot scams that crop up.
Balancing opportunity and risk
There is no doubt that chatbots and the technologies enabling them will play an increasingly important role in the future of customer communications.
It’s important, however, to balance this opportunity with the attendant risks and put as much effort as possible into ensuring that customers can use them safely.
This article was first published by Bizcommunity on 20 September 2018.