Chatbots Are All the Rage—and Something of a Risk

While much of the hype around artificial intelligence (AI) is based on possibilities for a decade or so from now, chatbots are a tangible manifestation of AI that’s right here, right now.

Today, many top brands—from 1-800 Flowers and Whole Foods to Starbucks—are employing chatbots. A recent Oracle survey also found that 80 percent of businesses say they plan to use chatbots by 2020. Such businesses are driven by the promise of improving the customer experience and using automation to cut costs.

Chatbots are also easy to create. A company called Dexter, for instance, offers a WordPress-like interface that lets anyone create a chatbot in a few minutes. Such bots can be published to Facebook Messenger and Slack.

Unfortunately, chatbots have a downside, too: They can present an enticing target for hackers. As consumers get more used to interacting with chatbots, there will be new opportunities for phishing, hacking, and general mischief.

Three types of attacks

Security experts have identified three types of potential chatbot attacks. So far, these attacks are mostly theoretical; no major chatbot hacks have been reported yet.

1. “Man-in-the-middle” attacks. In this case, a chatbot is made to look like it’s from a reputable organization when it is not, so users feel comfortable providing sensitive information. “These types of corporate chatbots might tell clients to take certain actions, including offering up URLs to install a program to try to fix a problem,” said Paul Hill, a senior consultant with Systems Experts. “So, there are definitely concerns about a man-in-the-middle attack causing someone to install malware.” Such fake chatbots could take phishing to a whole new level. Randy Abrams, a senior security analyst at Webroot, said such attacks are likely to become so prevalent that the industry will soon be citing “chishing” as a new threat on par with SMiShing.

2. Pollution in the communications channel. In 2016, Microsoft released Tay, a chatbot designed to mimic and converse with users in real time. Shortly after its release, Internet trolls got hold of Tay and soon had it spewing racist, anti-Semitic, and generally awful invective. Hill said there’s a danger that a chatbot could be influenced to give out wrong answers or, as in the case of Tay, say incredibly obnoxious things. “Anyone could launch a chatbot, introduce it into a group discussion, and have it disseminate false information,” said Hill. “Certainly, one example is so-called ‘fake news.’” Hill said that chatbots normally parse assertions made in the communications channel; if enough users make the same statement, the bot assumes it’s true. “So, if 100 accounts wrote, ‘The sky is green,’ and someone asked, ‘What color is the sky?’ the bot would reply, ‘The sky is green,’” Hill suggested (a mechanism sketched in code after this list). Even worse, a chatbot could be trained to point users to URLs for malware sites.

3. The open computer problem. Simon Bain, CEO of BOHH Labs, said there’s a danger that an employee could lose a phone or computer, or simply leave it unlocked. If a chatbot window were open, a hacker could then ask it for sensitive information. “So, if your phone or your computer is stolen, or you leave your computer unlocked while you go down to Starbucks, then there’s not a lot we can do about that,” he said.
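To make the poisoning mechanism Hill describes concrete, here is a minimal, hypothetical sketch (it is not based on Tay or any real chatbot framework) of a bot that learns its answers simply by counting assertions it sees in a channel:

```python
from collections import Counter, defaultdict

class NaiveLearningBot:
    """A deliberately naive bot that treats repetition as truth."""

    def __init__(self):
        # Maps a topic (e.g., "sky color") to a tally of claims seen in the channel.
        self.claims = defaultdict(Counter)

    def observe(self, topic, claim):
        """Record an assertion made by any account in the channel."""
        self.claims[topic][claim] += 1

    def answer(self, topic):
        """Reply with whichever claim has been repeated most often."""
        if not self.claims[topic]:
            return "I don't know."
        top_claim, _count = self.claims[topic].most_common(1)[0]
        return top_claim

bot = NaiveLearningBot()
bot.observe("sky color", "The sky is blue")   # one honest user
for _ in range(100):                          # 100 coordinated troll accounts
    bot.observe("sky color", "The sky is green")

print(bot.answer("sky color"))  # prints "The sky is green" -- repetition wins
```

Without source weighting, rate limits, or moderation, the 100 coordinated accounts simply outvote the one honest user, which is exactly the failure mode in Hill's example.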

Methods of protection

Bain said encryption is the best protection against man-in-the-middle attacks. An encrypted message is scrambled and broken into parts that travel across multiple networks. “In theory, it’s very, very difficult for someone to grab all the individual parts of the message,” Bain explained. “Each one of those bits, each one of those blocks, is linked to the endpoints concerned, and each one has its own unique key.” With encryption, only the two parties involved in the messaging can read it, even if it’s intercepted.
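As a rough illustration of the block-wise scheme Bain describes (a simplified sketch, not BOHH Labs' actual implementation), the following example splits a message into blocks and encrypts each block under its own key using Python's cryptography package:

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

def encrypt_in_blocks(message: bytes, block_size: int = 16):
    """Split a message into blocks and encrypt each block under its own key.

    Returns a list of (key, ciphertext) pairs. In a real system the keys would
    be bound to the two endpoints via a key exchange, never shipped alongside
    the ciphertext as they are here.
    """
    blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)]
    encrypted = []
    for block in blocks:
        key = Fernet.generate_key()                 # a unique key per block
        encrypted.append((key, Fernet(key).encrypt(block)))
    return encrypted

def decrypt_blocks(encrypted):
    """Reassemble the original message from its (key, ciphertext) pairs."""
    return b"".join(Fernet(key).decrypt(token) for key, token in encrypted)

parts = encrypt_in_blocks(b"My account number is 12345678")
print(decrypt_blocks(parts))  # b'My account number is 12345678'
```

An attacker who intercepts a single block on one network path sees only ciphertext for that fragment; without every block and every key, the full message cannot be reconstructed.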

There are other safeguards, as well, including user-identity authentication, two-factor authentication, authentication timeouts, self-destructing messages, and biometric identification. Another common-sense approach is merely to limit the type of information that a chatbot can send.
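A minimal sketch of that last safeguard, using hypothetical field names, might look like the following: the bot may only return fields on an explicit allow list, so even a hijacked session cannot pull sensitive data through it.

```python
# Hypothetical customer record; the field names are illustrative only.
CUSTOMER_RECORD = {
    "first_name": "Dana",
    "order_status": "shipped",
    "card_number": "4111 1111 1111 1111",
    "home_address": "221B Baker St.",
}

# The only fields the chatbot is ever allowed to send over the chat channel.
ALLOWED_FIELDS = {"first_name", "order_status"}

def chatbot_reply(requested_field: str) -> str:
    """Answer a lookup request, refusing anything outside the allow list."""
    if requested_field not in ALLOWED_FIELDS:
        return "Sorry, I can't share that here. Please contact support directly."
    return str(CUSTOMER_RECORD.get(requested_field, "unknown"))

print(chatbot_reply("order_status"))  # "shipped"
print(chatbot_reply("card_number"))   # refused; the bot never exposes it
```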

As chatbot proponents point out, the technology on which chatbots are based isn’t new: they rely on the same HTTPS protocol used everywhere else on the web. The real chatbot security vulnerability, as is often the case, lies with the human users.

“The standard phishing attack warnings apply,” said Abrams. “An attacker can start profiling a chatbot to find out what data can be retrieved from it and then determine how social engineering can be used via the chatbot.” For instance, if an employee’s account is hacked, the hacker could ask a coworker to grab information from the chatbot on that employee’s behalf.

Abrams said that training workers to be suspicious of chatbot communications, much the same as is done with email, will take some time.

“We haven’t done a great job of fine-tuning social engineering education,” said Abrams. “We have to start doing human patching.”