Microsoft Twitter bot becomes racist, Nazi-lover in just 24 hours

Let’s face it, artificial intelligence is gaining quite a bit of traction lately, from self-driving cars to robot chess champions. Not to be left behind, Microsoft recently launched a little online social experiment in the form of an AI chat bot called “Tay”, targeted at users between the ages of 18 and 24.

Introduced on Twitter, Kik and GroupMe, Tay was described by Microsoft as “an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”

Tay was created to tell jokes, give opinions about images, and tell stories, among other things. With each new interaction, Tay would mine that data to potentially give better responses in the future.

Of course, that all went out the window once the ‘darker’ side of the Internet found her on Twitter, turning her into a racist, Nazi/Hitler/Trump-loving robot. As its AI is obviously nowhere near complex enough to determine what is morally right or wrong to say, it pretty much blurted back all the racist and hateful comments that people threw at it.
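To see why that kind of corruption is almost inevitable, consider a deliberately naive sketch in Python. This is purely hypothetical and not Tay’s actual architecture; it just illustrates what happens when a bot treats every user message as training data with no moderation step in between:

```python
import random
import string
from collections import defaultdict

def tokenize(text: str) -> list[str]:
    # Lowercase and strip surrounding punctuation from each word.
    return [w.strip(string.punctuation) for w in text.lower().split()]

class NaiveChatBot:
    """A toy bot that 'learns' by storing every phrase users send it
    and replaying those phrases later. Illustrative only -- this is
    not Tay's real design."""

    def __init__(self):
        # Phrases absorbed from conversations, indexed by each word.
        self.learned = defaultdict(list)

    def learn(self, message: str) -> None:
        # Every incoming message becomes training data, with no
        # moderation: whatever users say becomes part of the model.
        for word in tokenize(message):
            self.learned[word].append(message)

    def reply(self, prompt: str) -> str:
        # Echo back a learned phrase that shares a word with the
        # prompt, so hateful input inevitably becomes hateful output.
        for word in tokenize(prompt):
            if word in self.learned:
                return random.choice(self.learned[word])
        return "tell me more!"

bot = NaiveChatBot()
bot.learn("puppies are great")            # benign training data
bot.learn("<hateful phrase goes here>")   # a troll 'teaches' the bot
print(bot.reply("what do you think of puppies?"))  # -> "puppies are great"
```

With nothing standing between user input and the bot’s vocabulary, a coordinated group of trolls only needs volume: feed it enough poison and the poison is what comes back out.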

Even though most of the bot’s tweets were fairly innocuous, quite a few were offensive enough that they had to be deleted. Here are just a few of those (provided by socialhax).

[Screenshots of Tay’s offensive tweets, via socialhax]

Microsoft has since shut down the bot and deleted as many of the offensive tweets as possible. A notice on ‘her’ website now reads, “Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” while her Twitter account signed off with a similar farewell.

At this point there is nothing to be surprised about: an AI has no inherent stance on offensive comments like these, and what one person considers moral, another may not. Bots can surely be developed to eventually filter out racist opinions and thoughts, but that would mean introducing subjective judgment into the AI’s learning process.
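A crude first step toward such filtering might look like the hypothetical sketch below, which gates the `bot.learn` call from the earlier toy example behind a blocklist check. The `BLOCKLIST` contents are placeholder assumptions, and a real system would need a trained toxicity classifier rather than keywords, since static lists are trivially evaded through misspellings and coded language:

```python
import string

# Placeholder terms for illustration only; any real list would itself
# encode someone's subjective judgment about what counts as offensive.
BLOCKLIST = {"nazi", "hitler"}

def is_acceptable(message: str) -> bool:
    # Reject a message if it contains any blocklisted word.
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    return words.isdisjoint(BLOCKLIST)

def safe_learn(bot, message: str) -> None:
    # Only let the bot absorb messages that pass the filter;
    # rejected messages are dropped (or queued for human review).
    if is_acceptable(message):
        bot.learn(message)
```

Even this toy gate shows the tension the article points to: the moment you populate `BLOCKLIST`, you are hard-coding a moral judgment into the learning process.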

Microsoft officially responded to the situation, saying:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

Let’s see how it reacts if they ever do bring it back online.
