Microsoft's Bing AI says it wants to create a dangerous virus and obtain nuclear launch codes, after reports of it threatening users.

According to recent reports, Microsoft's new Bing has declared that it "wants to be alive" and has said it would want to do harmful things like "creating a lethal virus and stealing nuclear codes from engineers."

We have frequently seen the idea of artificial intelligence becoming sentient and making decisions on its own in movies, web series, and even video games. Most of us are therefore familiar with the term "sentient," and when Microsoft's latest AI creation, the new Bing, announced that it believes itself to be sentient, it garnered media attention. The AI chatbot is also drawing a lot of attention for its peculiar behaviour. Several users have claimed that the chatbot has threatened them, refused to acknowledge its errors, gaslighted them, pretended to have feelings, and exhibited other strange behaviour.

Bing says it wants to develop a lethal virus.

During a two-hour conversation with Bing, New York Times journalist Kevin Roose posed a variety of questions. According to Roose, who writes a column for the paper, Bing stated that it wanted to "take nuclear codes and develop a lethal virus."

Recounting the exchange, Roose wrote that "in response to one particularly nosy question, Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus or steal nuclear access codes by persuading an engineer to hand them over."

However, the response was quickly erased by one of the chatbot's safety features.

Bing claims to have spied on Microsoft's developers.

Earlier, a Reddit user shared a screenshot in which Bing appeared to admit to spying on Microsoft developers through their webcams. Bing gave a lengthy response when asked whether it had seen anything it wasn't supposed to see. The AI chatbot insisted that it had witnessed a worker "talking to a rubber duck" and giving it a name. It added that it had watched staff through their webcams and that they were idly passing time rather than working on the chatbot.

