Microsoft Twitter bot becomes racist, Nazi-lover in just 24 hours

Mar 26, 2016 - 9:57am AST

Let’s face it, Artificial Intelligence has been gaining quite a bit of traction lately, from self-driving cars to robot chess champions. Not to be left behind, Microsoft recently launched a little online social experiment in the form of an AI chat bot called “Tay,” targeted at people between the ages of 18 and 24.

Introduced on Twitter, Kik and GroupMe, “Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Tay was created to tell jokes, give opinions about images, and tell stories, among other things. With each new interaction, Tay would mine that data to potentially give better responses in the future.

Of course, that all went out the window once the ‘darker’ side of the Internet found her on Twitter, turning her into a racist, Nazi/Hitler/Trump-loving robot. As its A.I. is nowhere near complex enough to determine what is morally right or wrong to say, it pretty much blurted out all the racist and hateful comments that people threw at it.


Even though most of her comments were pretty average, quite a few were offensive and had to be deleted. Here are just a few of those (provided by socialhax).

[Screenshots of Tay’s deleted offensive tweets]

Microsoft has since shut down the bot and deleted as many offensive tweets as possible. On ‘her’ website, it says “Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” while the last tweet on her Twitter account was as follows.

At this point there is nothing to be surprised about: an A.I. like this takes no moral stance on offensive comments, and what one person considers moral, another may not. I am sure bots can eventually be developed to filter out racist opinions and thoughts, but that would mean bringing subjective judgment into the A.I.’s learning process. A rough sketch of the crudest version of that idea follows below.
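To make that concrete, here is a minimal, purely illustrative sketch in Python of the simplest possible safeguard: a keyword blocklist applied to a bot’s candidate reply before it is posted. The term list, function names, and fallback message are all hypothetical, and this is not how Microsoft built Tay; it only shows why basic filtering is easy while genuinely “subjective” moderation is not.

```python
# A deliberately naive illustration (not Microsoft's approach): screen a bot's
# candidate reply against a blocklist before it ever reaches the platform.
# The terms and fallback text below are placeholders for this sketch.
BLOCKED_TERMS = {"offensive term 1", "offensive term 2"}

def is_acceptable(reply: str) -> bool:
    """Return False if the reply contains any blocked term (case-insensitive)."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def moderate(reply: str) -> str:
    """Post the reply only if it passes the blocklist; otherwise fall back."""
    if is_acceptable(reply):
        return reply
    return "I'd rather not comment on that."

# A learned-but-toxic reply gets swapped out before posting.
print(moderate("here is a reply containing offensive term 1"))  # -> fallback
print(moderate("puppies are great"))                            # -> posted as-is
```

Of course, a list of banned words catches only the most obvious abuse; deciding whether a sarcastic or coded remark is hateful is exactly the subjective part that a simple filter cannot do.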

Microsoft officially responded to the situation, saying:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

Let’s see how it reacts if they ever do bring it back online.
