In 2016, Microsoft released an online chatbot called Tay, built with a mix of artificial intelligence and content crafted by writers including improv comedians. The bot was designed to engage with the 18-to-24-year-old audience that already connects with friends through online chat platforms. But within a day, the company had pulled poor Tay, which had begun to regurgitate racist comments and questionable material it picked up from online trolls.
"We take full responsibility for not seeing this possibility ahead of time," Microsoft VP Peter Lee wrote in a blog post.
News of Tay's demise came during a well-publicized push by Microsoft and its social media rivals into automated chatting assistants. Robotic conversation partners from major brands were brought into the world, integrated into chat platforms such as Facebook Messenger, Skype and Slack, to offer customer service, provide shopping assistance and take pizza orders. One UBS analyst even warned the robots could pose an "existential threat" to Apple's smartphone dominance: If bots proved popular, smartphone users could leave the App Store behind when it came to things like ordering food, and simply interact with online services through text.