Microsoft Apologizes for Its Racist, Hitler-Loving Bot, Says ‘Tay’ Was ‘Attacked’

Microsoft issued an apology via the company’s official blog Friday for the “behavior” of its bot Tay, the juvenile (and politically illiterate) chatbot that came into the world on Wednesday primed to learn about humans and was taken offline mere hours later after it became a vulgar, bigoted troll.

The tech company said its AI had come under “a coordinated attack by a subset of people” who “exploited a vulnerability,” which led to Tay tweeting “wildly inappropriate and reprehensible words and images.”

Tay had been designed to learn how to speak like a millennial by interacting with users on social media. Users, in turn, tutored Tay to parrot the worst racist, anti-feminist, anti-Mexican, Holocaust-denying bromides from the dregs of the Internet.

Microsoft came under fire for not anticipating the dark turn Tay could take from emulating accounts on Twitter — hardly a beacon of enlightened discourse. Specifically, Tay would respond to users asking it to repeat phrases and words, which it would do verbatim, leading to eminently avoidable embarrassments like this:

[Screengrabs via Business Insider]
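For readers curious about the mechanics, here is a minimal, purely illustrative sketch in Python of why an unguarded “repeat after me” feature is so easy to abuse. Microsoft has not published Tay’s code; the function names and placeholder blocklist below are invented for illustration only.

```python
# Illustrative sketch only -- not Microsoft's code.
# A bot that echoes user-supplied text verbatim will repeat anything it is fed,
# so any screening has to happen before the echo, not after.

BLOCKLIST = {"example_slur"}  # hypothetical placeholder terms

def naive_repeat(message: str) -> str:
    """Echoes whatever follows 'repeat after me' with no checks."""
    if message.lower().startswith("repeat after me"):
        return message[len("repeat after me"):].strip()
    return "I don't understand."

def guarded_repeat(message: str) -> str:
    """Same feature, but refuses to echo text containing blocked terms."""
    if message.lower().startswith("repeat after me"):
        payload = message[len("repeat after me"):].strip()
        if any(term in payload.lower() for term in BLOCKLIST):
            return "I'd rather not repeat that."
        return payload
    return "I don't understand."

if __name__ == "__main__":
    print(naive_repeat("repeat after me example_slur"))    # echoed verbatim
    print(guarded_repeat("repeat after me example_slur"))  # refused
```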

In the blog post, a Microsoft VP writes:

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

[…] As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

The company adds that it is “hard at work addressing the specific vulnerability that was exposed by the attack on Tay” and that it will “work toward contributing to an Internet that represents the best, not the worst, of humanity.”

[h/t BI]
