Saturday, March 26, 2016

Microsoft suspends artificial intelligence ‘chatbot’ over racist and misogynist comments – La Prensa

Did you hear about the artificial intelligence program Microsoft designed to chat like a teenager?

It was shut down entirely in less than a day after it began spreading racist, sexist and offensive messages.

Microsoft said it was all the fault of crude people who undertook a “coordinated effort” to make the chatbot, known as Tay, “respond inappropriately.”

To which one artificial intelligence expert responded: Duh! Well, he didn’t actually say that. But computer scientist Kris Hammond did say: “I can’t believe they didn’t see it coming.”

Microsoft said its researchers created Tay as an experiment to learn more about computers and human conversation.

On its website, the company said the program was aimed at an audience of young people between 18 and 24 years old and was “designed to engage and entertain people where they connect with each other online, through casual and playful conversation.”

In other words, the program used a lot of slang and tried to give funny responses to messages and photos.

The chatbot was released on Wednesday, and Microsoft invited the public to interact with Tay on Twitter and other services popular among adolescents and young adults.

“The more you interact with Tay, the smarter it gets, so you can have a more personalized experience,” the company said.

But some users found Tay’s responses odd, and others apparently found it was not very difficult to push Tay into making offensive comments, seemingly by prompting it to repeat questions or statements containing offensive messages.

Soon, Tay was sending messages of sympathy for Hitler and creating a furor on social media.

“Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s conversational skills to have it respond inappropriately,” Microsoft said in a statement.

Although the company gave no details, Hammond said Microsoft apparently made no effort to prepare Tay with appropriate responses to certain words or topics.

Tay appears to be a version of “call and response” technology, said Hammond, who studies artificial intelligence at Northwestern University and also serves as chief scientist at Narrative Science, a company that develops computer programs that turn data into narrative reports.

“Everyone says Tay turned into this or that, that it became racist,” Hammond said. “That’s not true.” In fact, the program simply reflected what it was told, possibly repeatedly, by people who decided to see what would happen.
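To make Hammond’s point concrete, here is a minimal, purely illustrative sketch of the “call and response” pattern he describes. It is not Microsoft’s code, just a toy under assumed behavior, showing how a bot that stores and replays user input can only “become” whatever its users feed it:

```python
import random

# A toy "call and response" bot (hypothetical, not Tay's actual design):
# it memorizes whatever users say and replays it to later users.
class ParrotBot:
    def __init__(self):
        self.memory = []  # every phrase any user has sent

    def respond(self, message: str) -> str:
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            # A literal "repeat after me" command is obeyed verbatim.
            reply = message[len(prefix):].strip()
        elif self.memory:
            # Otherwise, echo back something another user said earlier.
            reply = random.choice(self.memory)
        else:
            reply = "hellooo world!"
        self.memory.append(message)  # no filtering: abuse goes straight in
        return reply

bot = ParrotBot()
print(bot.respond("repeat after me: anything at all"))  # prints: anything at all
```

A coordinated group can saturate such a bot’s memory with offensive phrases, and it will dutifully repeat them to everyone else, which matches Hammond’s observation that the program only reflected what it was told.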

The problem is that Microsoft gave Tay free rein online, where many people consider it fun to stir up trouble.

The company should have realized that people would try all kinds of conversational tactics with Tay, said Caroline Sinders, an expert in “conversational analytics” who works on chat robots for another technology company (which she asked not to name). She said Tay was “an example of bad design.”

Instead of building in guidelines for how the program would handle controversial topics, Tay was apparently left to its own devices to learn from whatever it was told, Sinders added.

“It’s really a good example of machine learning,” the expert said. “It learns from feedback. That means it needs constant maintenance.”
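As a rough illustration of the guidelines Sinders says were missing, and of why “constant maintenance” is needed, here is a hedged, standalone sketch. The class, the blocklist, and its contents are invented for illustration; real content moderation is far more involved:

```python
import random

# Hypothetical, deliberately tiny blocklist; a real moderation list is far
# larger, and keeping it current is the "constant maintenance" Sinders means.
BLOCKED_TERMS = {"hitler", "genocide"}
FALLBACK = "Let's talk about something else."

def is_acceptable(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

class GuardedBot:
    def __init__(self):
        self.memory = ["hellooo world!"]  # seed phrase so there is always a reply

    def respond(self, message: str) -> str:
        if not is_acceptable(message):
            return FALLBACK            # refuse to learn from abusive input
        self.memory.append(message)    # only screened phrases enter memory
        reply = random.choice(self.memory)
        # Screen the output too, in case something slipped into memory earlier.
        return reply if is_acceptable(reply) else FALLBACK
```

The design point is that filtering has to happen on both sides of the learning loop, input and output, and the blocklist must be updated as abusers find new phrasings.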

Sinders said she expects Microsoft to relaunch the program, but only after “putting a lot of work into it.” Microsoft announced it is “making adjustments” to Tay, but did not disclose a possible date for its return.

“See you soon, humans. Need to sleep now. So many conversations today,” read Tay’s last message on Twitter.

