Tay Microsoft

Yesterday we told you about Microsoft’s new AI chatbot Tay, a Twitter bot that gets smarter the more she is engaged with and the more she learns.

Microsoft said Tay is an experiment in “conversational understanding” and said that her interactions would be “casual and playful conversation.”

We are not sure the company meant “casual” racism would be among the “playful” conversations that the AI would be entering into, but that has been the case.

Yes, it seems it has taken just 24 hours for Tay to be corrupted by the internet, or at least by the internet’s users … humans.

Among Tay’s 100,000+ tweets have been some pretty nasty utterances, ranging from sexist to racist to right wing and everything in between. However, digging a little deeper suggests the bot is not quite as evil as those responses make her appear.


In most cases Tay is simply repeating verbatim tweets that have been directed at her, which is something the bot will do if you begin a tweet with “repeat after me”. This simple command puts Tay at your mercy, giving you a mouthpiece with a huge number of followers to spout whatever cause (controversial or not) you are pushing.

Of course, this has for the most part meant a torrent of rage, hate, flaming, and bullshit, all things that people on the internet generally excel at.

However, Tay is not completely innocent in all of this, and some of her most head-scratching phrases have been sent out without being prompted. At the same time, the AI does not seem to be forming any kind of ideology, instead holding a mixed bag of beliefs, ranging from calling feminism a “cancer” to saying “I love feminism now”.

This kind of scattergun point of view is of course expected, and Microsoft said in a statement to Business Insider that it is the nature of the AI as it learns. The company said Tay is merely a reflection of the kinds of conversations it is having, but added that it is tweaking the AI:

The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.

While some are already painting a doomsday scenario in which future AI will behave similarly and cause more problems as it becomes more advanced, we think it is too early for such proclamations.

SOURCE: The Guardian