Sunday, 27 March 2016

Microsoft shows how artificial our intelligence is

When Microsoft's chatbot Tay started tweeting offensive messages, the company's response showed how little we understand about human nature, let alone artificial intelligence, writes David Glance.
It was a nightmare of a week for Microsoft's PR. It started with the head of Microsoft's Xbox division, Phil Spencer, having to apologize for scantily clad dancers dressed as schoolgirls at a party hosted by Microsoft at the Game Developers Conference (GDC). He said the dancers' participation in the event "was absolutely not consistent or aligned with our values. This was unequivocally wrong and will not be tolerated."

The matter was handled internally, so it is not known who was responsible or why they might have thought this would be a good idea.

But things got much worse for Microsoft when a chatbot called Tay started tweeting offensive statements, apparently supporting Nazi, anti-feminist and racist views. The idea was that the artificial intelligence behind Tay would learn from others on Twitter and other social networks and come across as a 19-year-old female. What happened, however, was that the experiment was hijacked by a group of people from the notorious "pol" (politically incorrect) boards on 4chan and 8chan, who set about training Tay to say highly inappropriate things.

This time it was down to Peter Lee, corporate vice president of Microsoft Research, to say: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for."

Tay was taken offline and the tweets were deleted, but not before some of the most offensive of them were captured and spread even further across the Internet.

Apparently, Microsoft's researchers thought that, since they had successfully developed a similar AI chatbot called XiaoIce that was running successfully in China on the social network Weibo, the Tay experiment on Twitter with Western audiences would follow the same path.

Caroline Sinders, an AI interaction designer working on IBM's Watson, wrote a good explanation of how Tay's developers should have foreseen this outcome and protected against it. There had not been sufficient testing of the bot, and the developers clearly lacked the sociological skills to understand the variety of online communities and what they would do to the technology once it was released into the wild.

The disturbing outcome was that Microsoft's Peter Lee saw the problem with the Tay "experiment" as a technical one that could be fixed with a simple technical solution. He completely missed that the problem was sociological and philosophical, and that unless it is addressed in that context, it will always result in technology that seems superficially human but remains far from truly intelligent.

Chatbots are designed to learn how language is constructed and to use that knowledge to produce words that are contextually relevant and grammatically correct. They are not taught to understand what those words actually mean, nor the social, moral and ethical context of those words. Tay did not know what a feminist was when it suggested that they should all "die and burn in hell"; it was doing little more than repeating word constructions that had been fed to it as parts of phrases that could be recombined with a high probability of sounding plausible.
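To make this concrete, here is a minimal sketch (in Python, and in no way Tay's actual architecture) of the kind of purely statistical word-pattern approach being described: a toy first-order Markov chain that records which word tends to follow which in whatever text it is fed, then recombines those patterns. Everything fluent about its output comes from the corpus it was given; meaning is not represented anywhere.

```python
# Toy illustration only: a first-order Markov chain "chatbot".
# It learns which word follows which, nothing about what words mean.

import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
    return chain

def generate(chain, start_word, max_words=12):
    """Walk the chain, picking each next word purely by observed frequency."""
    word = start_word
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # Hypothetical "training" data: whatever the bot is fed, it will
    # echo back recombined, regardless of what the words mean.
    corpus = [
        "chatbots repeat the patterns they are given",
        "the patterns they are given shape what they say",
        "what they say can sound fluent without meaning anything",
    ]
    chain = build_chain(corpus)
    print(generate(chain, "chatbots"))
```

Feed such a system benign text and it echoes benign text; feed it something toxic and it will recombine that just as readily, which is essentially what happened to Tay at a much larger scale.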

It is a testament to human nature's readiness to anthropomorphize technology that we leap from something that sounds intelligent to an entity that actually is intelligent. This was the case recently with Google's AlphaGo, the AI software that beat a world-class human player at the complex game of Go. Commentary on this suggested that AlphaGo displayed many of the characteristics of human intelligence, instead of what it actually did, which was to be very effective at searching for and calculating winning strategies from the millions of games it had access to.

Even the "learning" term applied to bird flu led many, including developers of AI itself to assume wrongly that is equivalent to the learning process that humans pass. This in turn leads to the risk that AI experts as Stuart Russell and Peter Norvig have warned for years that a "learning function of artificial intelligence system that can evolve into a system with unexpected behavior."

The experience with Tay highlighted the lack of judgment of Microsoft's developers, given the known limits of chatbots. It seems that in this case, neither the humans nor the software were able to learn the real lessons of this unfortunate incident.
