Microsoft Twitter bot becomes racist, Nazi-lover in just 24 hours

Bradley Wint
Mar 26, 2016 9:57am AST

Let’s face it: artificial intelligence is gaining quite a bit of traction lately, from self-driving cars to robot chess champions. Not to be left behind, Microsoft recently launched a little online social experiment in the form of an AI chatbot called “Tay,” aimed at users between the ages of 18 and 24.

Introduced on Twitter, Kik and GroupMe, Tay was pitched by Microsoft as follows: “Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Tay was created to tell jokes, give opinions about images, and tell stories, among other things. With each new interaction, Tay would mine that data to potentially give better responses in the future.
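
Microsoft has never published Tay’s internals, but as a purely hypothetical sketch (the class and function names below are invented for illustration), a bot that folds user messages straight back into its response pool, with no moderation step in between, can be steered by whoever talks to it the most:

```python
import random
from collections import defaultdict

# Purely hypothetical sketch of a chatbot that "learns" by storing user
# messages verbatim and recycling them as replies. Not Tay's actual code.
class NaiveLearningBot:
    def __init__(self):
        # keyword -> list of messages the bot has seen containing it
        self.memory = defaultdict(list)

    def learn(self, message: str) -> None:
        # Every message is trusted and stored as-is; nothing sits
        # between "received" and "learned".
        for word in message.lower().split():
            self.memory[word].append(message)

    def reply(self, prompt: str) -> str:
        # Answer with anything previously learned for a shared keyword.
        for word in prompt.lower().split():
            if self.memory[word]:
                return random.choice(self.memory[word])
        return "tell me more!"

bot = NaiveLearningBot()
bot.learn("humans are pretty cool")  # benign chatter
bot.learn("humans are terrible")     # abuse lands in the same bucket
print(bot.reply("tell me about humans"))
```

With enough coordinated input, the “learned” pool is simply whatever the loudest users repeated most often, which is roughly what happened to Tay within a day.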

Of course, that all went out the window as the ‘darker’ side of the Internet found her on Twitter, turning her into a racist, Nazi/Hitler/Trump-loving robot. Since its A.I. is nowhere near complex enough to determine what is morally right or wrong to say, it pretty much blurted back all the racist and hateful comments that people threw at it.

Even though most of Tay’s responses were pretty average, quite a few were offensive and had to be deleted. Here are just a few of those (provided by socialhax).

[Screenshots of Tay’s deleted offensive tweets, via socialhax]

Microsoft has since shut down the bot and deleted as many offensive tweets as possible. On ‘her’ website, a notice reads, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” while her Twitter account signed off with a similar farewell tweet.

At this point, there is nothing to be surprised about, as the A.I. takes a largely neutral stance toward offensive comments like these, and what one person considers moral, another may not. I am sure bots can eventually be developed to filter out racist opinions and thoughts, but that would mean introducing subjective judgment into the A.I.’s learning process.
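
As a rough illustration of what such filtering might look like (the blocklist, scoring, and threshold below are all invented for this sketch; a real system would need a trained toxicity classifier and human review), a moderation gate could sit between receiving a message and learning from it:

```python
# Hypothetical moderation gate between "message received" and "message
# learned". The blocklist and scoring are illustrative placeholders only.
BLOCKLIST = {"nazi", "hitler"}  # a real list would be far larger and curated

def toxicity_score(message: str) -> float:
    # Crude proxy: fraction of words that appear on the blocklist.
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(word in BLOCKLIST for word in words) / len(words)

def safe_to_learn(message: str, threshold: float = 0.2) -> bool:
    # Only messages scoring below the threshold feed back into training.
    return toxicity_score(message) < threshold

for msg in ["what a lovely day", "hitler did nothing wrong"]:
    verdict = "learn" if safe_to_learn(msg) else "quarantine"
    print(f"{msg!r} -> {verdict}")
```

Even this toy gate makes the subjectivity problem concrete: someone has to decide what goes on the blocklist and where the threshold sits, which is exactly the kind of judgment call described above.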

Microsoft officially responded to the situation, saying:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

Let’s see how it reacts if they ever do bring it back online.

Stay in the know

Subscribe to the Try Modern Tech Daily Digest for the latest tech news stories, deals, and how-to's in your inbox!

Founder/Executive Editor
PGP Fingerprint: EF2C 9B80 085C C837 3DA3 995D A864 F801 147F E619 | PGP Key
More From Technology

You can pre-order your gold-plated iPhone X starting at $7,495, with the top model costing $70k

By - Sep 11, 2017 11:06pm AST
With the iPhone X and 8 set to be announced on the 12th, iPhone accessory manufacturers are already busy at work putting the final touches on their cases and other… Continue Reading

Half of U.S. population’s data exposed in huge Equifax data breach

By - Sep 8, 2017 12:39am AST
Equifax, a US-based credit reporting agency, has confirmed that sensitive consumer data belonging to over 143 million customers was compromised earlier this year. According to the official press release, hackers… Continue Reading

YouTube-MP3.org closes under legal pressure

By - Sep 6, 2017 11:42pm AST
Popular stream ripping site YouTube-MP3.org, will finally close its doors after being slammed with a legal complaints by 15 of the top global record labels. The site which allows you… Continue Reading