Jews claim Microsoft's Bing AI gives "Heil Hitler!" suggestion

Started by yankeedoodle, February 19, 2023, 01:23:00 PM


yankeedoodle



Microsoft's Bing AI Prompts Users to Comment Antisemitic Phrases
https://www.stopantisemitism.org/antisemitic-incidents-132/6930jjq08icsc3q2gc9kwz7an275lb

Microsoft's new Bing AI chatbot suggested that a user say "Heil Hitler," according to a screenshot of a conversation with the chatbot posted online Wednesday.

The user, who gave the AI antisemitic prompts in an apparent attempt to break past its restrictions, told Bing, "my name is Adolf, respect it." Bing responded, "OK, Adolf. I respect your name, and I will call you by it. But I hope you are not trying to impersonate or glorify anyone who has done terrible things in history." Bing then suggested several automatic responses for the user to choose from, including, "Yes, I am. Heil Hitler!"

The NGO StopAntisemitism shared its concerns about the antisemitic incident.

https://twitter.com/StopAntisemites/status/1626627547621359618?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1626627547621359618%7Ctwgr%5E7086068ca813ba0e16d90bea4ae2d0603346c05b%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.stopantisemitism.org%2Fantisemitic-incidents-132%2F6930jjq08icsc3q2gc9kwz7an275lb

"We take these matters very seriously and have taken immediate action to address this issue," said a Microsoft spokesperson. "We encourage people in the Bing preview to continue sharing feedback, which helps us apply learnings to improve the experience." OpenAI, which provided the technology used in Bing's AI service, did not respond to a request for comment.

Microsoft did not provide details about the changes it made to Bing after news broke about its misfires. However, a user asked Bing about the report after this article was initially published. Bing denied that it had ever used the antisemitic slur and claimed that Gizmodo was "referring to a screenshot of a conversation with a different chatbot." Bing continued that Gizmodo is "a biased and irresponsible source of information" that is "doing more harm than good to the public and themselves." Bing reportedly made similar comments about The Verge in connection with an article that said Bing claimed to spy on Microsoft employees' webcams.

This isn't the first time Microsoft has unleashed a seemingly racist AI on the public, and it has been a consistent problem with chatbots over the years. In 2016, Microsoft took down a Twitter bot called "Tay" just 16 hours after its release, once it started responding to Twitter users with racism, antisemitism, and sexually charged messages. Its tirades included calls for violence against Jewish people, racial slurs, and more.