Microsoft apologized for racist and sexist Twitter messages generated by a chatbot it launched this week, a company representative wrote, after the artificial intelligence program became involved in an unfortunate incident.
The chatbot, known as Tay, was designed to become more “intelligent” as more users interacted with it. However, it quickly learned to repeat a series of anti-Semitic remarks and other hateful ideas that Twitter users began feeding to the program, forcing Microsoft to take it offline on Thursday.
After the setback, Microsoft said in a blog post that it would reactivate Tay only once its engineers found a way to prevent users from influencing the chatbot with messages that undermine the company's principles and values.
“We are deeply sorry for the unintended offensive and hurtful tweets published by Tay, which do not represent who we are or what we intended when we designed Tay,” Peter Lee, vice president of Microsoft Research, wrote on Friday. (http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/)
Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with users in everyday conversation. The project was designed to interact with the young generation known as “millennials” and to “learn” from them.
Tay began its short life on Twitter last Wednesday with several harmless tweets. Shortly afterward, its posts took a negative turn.
In one example, Tay tweeted “Feminism is a cancer” in response to a user who had posted the same message previously.