
Tay was an artificial intelligence chatbot originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch. According to Microsoft, this was caused by trolls who "attacked" the service, as the bot made replies based on its interactions with people on Twitter. The bot was created by Microsoft's Technology and Research and Bing divisions, and named "Tay" as an acronym for "thinking about you". Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a similar Microsoft project in China. Ars Technica reported that, since late 2014, Xiaoice had had "more than 40 million conversations apparently without major incident". Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. It was released under the name TayTweets and the handle @TayandYou, and was presented as "The AI with zero chill".

Tay started replying to other Twitter users and was also able to caption photos provided to it into a form of Internet memes. Some Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet, such as "redpilling" and "Gamergate". As a result, the bot began releasing racist and sexually charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which began to use profanity after reading entries from the website Urban Dictionary. Many of Tay's inflammatory tweets were a simple exploitation of its "repeat after me" capability. It is not publicly known whether this capability was a built-in feature, a learned response, or otherwise an example of complex behavior. However, not all of the inflammatory responses involved the "repeat after me" capability; for example, Tay responded to the question "Did the Holocaust happen?" with "It was made up". Ars Technica also reported Tay experiencing topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".
