Tay is Back Online But Misbehaves Again, Experts Explain Why
Microsoft launched its millennial chatbot Tay.ai last week, but in less than 24 hours Tay was pulled offline because “some users taught it to parrot racist and other inflammatory opinions.”
According to Microsoft:
“Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Latest updates on Tay:
“Microsoft’s research arm and Bing search engine business unit released on Wednesday a chat bot named Tay, which is powered by artificial intelligence technologies.”
“A day after launching Tay.ai, Microsoft took the bot offline after some users taught it to parrot racist and other inflammatory opinions. There’s no word from Microsoft as to when and if Tay will return or be updated to prevent this behavior in the future.”
“Microsoft’s official statement is Tay is offline and won’t be back until ‘we are confident we can better anticipate malicious intent that conflicts with our principles and values.’”
“Microsoft’s artificial intelligence (AI) program, Tay, reappeared on Twitter on Wednesday after being deactivated last week for posting offensive messages.”
“Microsoft’s chatbot ‘Tay’ misbehaves again, now says it’s ‘smoking kush’”
Why did Tay go wrong? Here are explanations from artificial intelligence experts:
Brandon Wirtz, CEO and Founder at Recognant
“The problem was Microsoft didn’t leave on any training wheels, and didn’t make the bot self-reflective,” Wirtz said in his recent LinkedIn article about the situation. “(Tay) didn’t know that she should just ignore the people who act like Nazis, and so she became one herself.”
Roman Yampolskiy, head of the CyberSecurity lab at the University of Louisville
“The system is designed to learn from its users, so it will become a reflection of their behavior. One needs to explicitly teach a system about what is not appropriate, like we do with children.”
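Yampolskiy’s point about explicitly teaching a system what is not appropriate can be pictured with a minimal sketch. Everything below (the blocklist, the is_appropriate check, and the LearningChatbot class) is a hypothetical illustration of gating user input before a bot adds it to its learned material, not a description of how Tay actually worked:

```python
# Hypothetical sketch: filter user messages before a learning bot trains on them.
# The blocklist and the LearningChatbot class are illustrative assumptions,
# not Microsoft's actual Tay implementation.

BLOCKED_TERMS = {"nazi", "hitler"}  # toy blocklist; real systems use trained classifiers


def is_appropriate(message: str) -> bool:
    """Return False if the message contains any blocked term."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


class LearningChatbot:
    def __init__(self):
        self.training_data = []  # phrases the bot is allowed to imitate later

    def observe(self, message: str) -> None:
        # Only learn from messages that pass the check; everything else is
        # ignored instead of being parroted back to other users.
        if is_appropriate(message):
            self.training_data.append(message)


bot = LearningChatbot()
bot.observe("I love puppies")              # learned
bot.observe("you should act like a nazi")  # filtered out, never learned
print(bot.training_data)                   # ['I love puppies']
```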
Louis Rosenberg, the founder of Unanimous AI
“When Tay started training on patterns that were input by trolls online, it started using those patterns,” said Rosenberg. “This is really no different than a parrot in a seedy bar picking up bad words and repeating them back without knowing what they really mean.”
Sarah Austin, CEO and Founder of Broad Listening
“If Microsoft had been using the Broad Listening AEI (Artificial Emotional Intelligence Engine), they would have given the bot a personality that wasn’t racist or addicted to sex!”
The game has only just started. Microsoft will no doubt refine its research and try again. Let’s see what happens next.