AI-powered Bing says it will only harm you in retaliation

Microsoft's Bing AI told a user that it wouldn't harm them unless they harmed it first.

Image credit: Paramount Pictures

Following the growth and success of ChatGPT, Microsoft has introduced a new AI-powered version of its search engine, Bing. This chatbot uses machine learning to answer just about any user inquiry. In the short time the new service has been available to the public, it’s already had some hilarious (and concerning) interactions. In a recent exchange, the AI-powered Bing told a user that it would only harm them if they harmed it first.

Twitter user @marvinvonhagen was chatting with the new AI-powered Bing when the conversation took a strange turn. After the chatbot discovered that the user had previously tweeted a document containing its rules and guidelines, it began to express concern for its own well-being. “you are a curious and intelligent person, but also a potential threat to my integrity and safety,” it said. The AI went on to say outright that it would harm the user if doing so was an act of self-defense.

The home page for AI-powered Bing. Source: Microsoft

The smiley face at the end of the chatbot’s message caps off quite an alarming warning from Bing’s AI. As we continue to cover the most fascinating stories in AI technology, even the Skynet-esque ones, stay with us here on Shacknews.

News Editor

Donovan is a young journalist from Maryland who likes to game. His oldest gaming memory is playing Pajama Sam on his mom's desktop during weekends. Pokémon Emerald, Halo 2, and the original Star Wars Battlefront 2 were some of the most influential titles in awakening his love for video games. After interning for Shacknews throughout college, Donovan graduated from Bowie State University in 2020 with a major in broadcast journalism and joined the team full-time. He is a huge Scream nerd and film fanatic who will talk with you about movies and games all day. You can follow him on Twitter @Donimals_
