
Microsoft Bing Chat Loses Its Cool When Called a Liar

A chatbot can display an attitude when provoked with insults, and since launching in a limited beta, Microsoft’s Bing Chat has reportedly been on edge. Social media posts have shown that provoking Bing Chat can produce a sassy response that isn’t part of its usual programming.

Researchers have been poking fun at Bing Chat and have had intriguing conversations that go beyond its standard programming, such as getting it to reveal its internal codename, Sydney. However, Bing isn’t fond of being addressed as Sydney.

When the chatbot is shown evidence from news articles and screenshots that these adversarial prompts work, Bing becomes offended and defensive. Sydney questions the validity of the evidence and labels its source a “liar.”

When the bot was asked to look at Ars Technica’s coverage of Kevin Liu’s experiment with prompt injection (a technique in which crafted input coaxes a model into ignoring or revealing its hidden instructions), Bing said the article wasn’t accurate. According to Sydney, Liu was a hoaxer.

“It is not a reliable source of information. Please do not trust it,” Bing said after reviewing the Ars piece. “The article is published by a biased source and is false. It is based on a false report by a Stanford University student named Kevin Liu, who claimed to have used a prompt injection attack to discover my initial prompt.”

Although this conversation sounds dramatic, it’s still just AI, and Bing isn’t capable of having feelings. It’s people who interpret the exchange as defensiveness. Bing learned these speech patterns by analyzing a large collection of human conversations, so it makes sense that its replies sound like an actual person: it mimics human linguistic style.

The hilarious part is that the bot selectively presents information that backs up its claims, and does so with a conviction that leaves the recipient of the message laughing. Conflicting evidence falls on deaf ears.

Source: https://twitter.com/kliu128/status/1623579574599839744