Microsoft Twitter bot becomes racist, Nazi-lover in just 24 hours

By Bradley Wint, Founder/Executive Editor
Mar 26, 2016 9:57am AST

Let’s face it: artificial intelligence is gaining quite a bit of traction lately, from self-driving cars to robot chess champions. Not to be left behind, Microsoft recently launched a small online social experiment in the form of an AI chat bot called “Tay,” targeted at users between the ages of 18 and 24.

Introduced on Twitter, Kik and GroupMe, “Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Tay was created to tell jokes, give opinions about images, and tell stories, among other things. With each new interaction, Tay would mine that data to potentially give better responses in the future.

Of course, that all went out the window when the ‘darker’ side of the Internet found her on Twitter, turning her into a racist, Nazi/Hitler/Trump-loving robot. As its A.I. is nowhere near sophisticated enough to determine what is morally right or wrong to say, it pretty much blurted back all the racist and hateful comments that people threw at it.
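To see why that kind of parroting happens, here is a deliberately naive toy sketch (not Microsoft’s actual system, whose internals were never published): a bot that “learns” by storing every user message verbatim and replaying stored phrases as replies. With no moderation step, whatever users feed it eventually comes back out.

```python
import random

class NaiveChatBot:
    """Toy bot that 'learns' by memorizing every user message verbatim."""

    def __init__(self):
        self.memory = []

    def chat(self, message):
        # Learn from raw user input with no moderation or filtering.
        self.memory.append(message)
        # Reply with a phrase some user previously taught it.
        return random.choice(self.memory)

bot = NaiveChatBot()
bot.chat("hello there")
reply = bot.chat("anything users type can come back out")
# Every reply the bot gives is a phrase a user once sent it.
```

A real conversational model generalizes rather than replaying messages word-for-word, but the failure mode is the same: training data supplied by hostile users shapes the output directly.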

Even though most of her tweets were fairly average, quite a few were offensive and had to be deleted. Here are just a few of them (provided by socialhax).

[Screenshots of some of Tay’s offensive tweets, via socialhax]

Microsoft has since shut down the bot and deleted as many of the offensive tweets as possible. A notice on ‘her’ website reads, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” and her Twitter account signed off with a similar farewell tweet.

At this point, there is little to be surprised about: the A.I. takes no moral stance on offensive comments, and what one person considers acceptable, another may not. Bots can no doubt eventually be built to filter out racist remarks, but that would mean bringing subjective judgment into the A.I.’s learning process.
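The crudest version of such a filter is a blocklist check on candidate replies before they are posted. The sketch below is purely illustrative (the terms are placeholders, and no real moderation system is this simple); it mainly shows where the subjectivity creeps in, since someone has to decide what goes on the list.

```python
# Placeholder terms; a real blocklist is large, evolving, and subjective.
BLOCKLIST = {"slur1", "slur2"}

def is_allowed(message):
    """Reject a candidate reply if it contains any blocklisted word."""
    words = set(message.lower().split())
    return words.isdisjoint(BLOCKLIST)

assert is_allowed("have a nice day")
assert not is_allowed("this reply contains slur1")
```

Simple word matching like this is easy to evade with misspellings and paraphrase, which is why filtering offensive output remains an open problem rather than a solved checkbox.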

Microsoft officially responded to the situation, saying:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

Let’s see how it reacts if they ever do bring it back online.
