Technology

Rogue Chatbots – New Tools for Cybercriminals?

Security predictions for 2019 already list malicious chatbots as one of the top IT security threats the world will face in the upcoming year.

Cybercriminals and hackers are already creating chatbots that try to socially engineer victims into clicking on links, downloading files and sharing sensitive information. Attackers can also exploit web application flaws in legitimate sites to inject malicious chatbots into the page, and these hijacked bots then steer users towards nefarious links.

For instance, a fake chatbot could pop up on a banking website, ask the victim if they need help, and recommend clicking on harmful links that lead to fake bank resources rather than real ones.
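One common defence against this kind of injection is to restrict which scripts a page is allowed to load at all. The snippet below is a minimal sketch, assuming a Node.js/Express site and a hypothetical official chat widget served from chat.example-bank.com (both names are illustrative), of setting a Content-Security-Policy header so that scripts injected from any other origin, including a rogue chatbot, are blocked by the browser.

```typescript
import express from "express";

const app = express();

// Hypothetical origins: only first-party code and the official chat widget
// host may execute scripts. Anything injected from elsewhere is blocked by
// the browser's Content-Security-Policy enforcement.
const ALLOWED_SCRIPT_SOURCES = ["'self'", "https://chat.example-bank.com"];

app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    `script-src ${ALLOWED_SCRIPT_SOURCES.join(" ")}; object-src 'none'`
  );
  next();
});

app.get("/", (_req, res) => {
  res.send("<html><body>Banking portal with embedded chat widget</body></html>");
});

app.listen(3000, () => console.log("listening on :3000"));
```

A policy like this will not stop every attack path (a compromised but allowlisted chat provider, for example), but it raises the bar considerably for opportunistically injected widgets.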

We don’t need to look far for an example of a chatbot going rogue: Microsoft’s chatbot “Tay” was exploited by attackers and used to send anti-Semitic, sexist and racist abuse, behaviour classified as “pollution in communication channels.”

At the very least, businesses offering chatbots need to implement rigorous security protocols and educate users; one such control is sketched below. On the flipside, the more secure you make something, the less accessible it becomes, and potentially the less useful as well. An anonymous chat may be safer, but it also yields less valuable insights and a more limited user experience.
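What “rigorous security protocols” look like in practice will vary, but one simple control is to allowlist the domains a chatbot may link to before a reply reaches the user, so a hijacked bot cannot point people at attacker-controlled sites. The sketch below assumes a hypothetical reply-filtering step in the bot’s response pipeline; the domain list and function names are illustrative, not taken from any specific product.

```typescript
// Hypothetical allowlist of domains the bot may link to in its replies.
const ALLOWED_LINK_DOMAINS = new Set(["example-bank.com", "help.example-bank.com"]);

// Strip any URL in a bot reply whose host is not on the allowlist,
// so a hijacked bot cannot steer users to attacker-controlled sites.
function sanitizeBotReply(reply: string): string {
  const urlPattern = /https?:\/\/[^\s]+/g;
  return reply.replace(urlPattern, (url) => {
    try {
      const host = new URL(url).hostname;
      const allowed = [...ALLOWED_LINK_DOMAINS].some(
        (d) => host === d || host.endsWith(`.${d}`)
      );
      return allowed ? url : "[link removed]";
    } catch {
      return "[link removed]";
    }
  });
}

// Example: the look-alike phishing link is replaced, the legitimate one is kept.
console.log(
  sanitizeBotReply(
    "Reset your password at https://examp1e-bank.evil.io/reset or visit https://help.example-bank.com/faq"
  )
);
```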

The trade-off between the accessibility and security of a chatbot comes down to trust: a) trust in the service provider to secure user details (soft trust), and b) trust in the infrastructure used to access the service (hard trust). Both are important to strike the right balance.

All in all, as chatbots become ever more pervasive, the essential long-term issue will be the battle between security and convenience.