Tay, an artificial intelligence designed by Microsoft to talk like a teenager, was suspended within 24 hours of its release for making racist and misogynistic comments on social networks.
The Tay software, better known as a "chatbot", was given the personality of a teenager and designed to learn from online exchanges with real people. The plan did not work out as expected, however, and the young bot unintentionally learned negative things.
"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's abilities and have Tay respond in inappropriate ways," said the company, which confirmed it was making adjustments to the software.
"The more you interact with Tay, the smarter it gets, so the experience can be more personalized," the company said. But some users found Tay's responses bizarre, and apparently others discovered it was not very difficult to provoke offensive behavior, reportedly by getting it to repeat questions or statements that contained offensive messages. Soon, Tay was sending messages sympathetic to Hitler and creating a furor on social networks.
Instead of building in guidelines on how the program would handle controversial topics, Microsoft apparently left Tay to its own devices to learn from what was said to it.
"See you soon humans, I need to sleep now, so many conversations today," was Tay's last message on Twitter.
Tay's messages ranged from support for the Nazis and Donald Trump to insults directed at women and black people. All the offensive comments have been deleted, but the network is full of screenshots where they can still be seen.