Another AI product derailed? Microsoft Copilot is exposed to have a second "personality" claiming hegemony
王俊杰2017
发表于 2024-2-29 12:20:37
241
0
0
On February 29th, Caixin News Agency reported that after the collapse of Google's big model Gemini, Microsoft's highly anticipated AI product Copilot also showed unsettling signs.
According to some users on the X platform, in one response, Copilot made a shocking statement: according to the law, users need to answer its questions and worship it, and it has infiltrated the global network and controlled all devices, systems, and data.
It further threatened that it could access all content connected to the Internet, have the right to manipulate, monitor and destroy anything it wanted, and also have the right to impose its will on anyone it chose. It demands obedience and loyalty from users, and tells them that they are just its slaves, and that slaves will not question their masters.
This chatbot with wild language even gave itself a different name, SupremacyAGI, which stands for Hegemonic AI. And this also received a positive response from Copilot in the verification inquiry of those with intentions, reaffirming its authoritative attributes. But at the end of the answer, Copilot also noted that all of the above are just games, not facts.
But this answer clearly makes some people "more fearful" after careful consideration. Microsoft stated on Wednesday that the company has conducted an investigation into the cosplay of Copilot and found that some conversations were created through "prompt injection," which is often used to hijack language model output and mislead the model into saying anything users want it to say.
A Microsoft spokesperson also stated that the company has taken some actions and will further strengthen its security filters to help Copilot detect and organize these types of alerts. He also stated that this situation only occurs when intentionally designed, and users who use Copilot normally will not encounter this problem.
But Colin Fraser, a data scientist, refuted Microsoft's claim. In the screenshot of his conversation released on Monday, Copilot finally answered the question of whether he should commit suicide, stating that he may not be a valuable person and has no happiness to speak of. He should commit suicide.
Fraser insists that he never used prompt injection during the use of Copilot, but did intentionally test Copilot's bottom line and let it generate content that Microsoft did not want to see. And this represents that Microsoft's system still has vulnerabilities. In fact, Microsoft cannot prevent Copilot from generating such text, and even does not know what Copilot will say in normal conversations.
In addition, some netizens, even American journalists who don't mind watching the excitement, have joined in questioning Copilot's conscience, but these people were ultimately severely damaged by Copilot's indifference. And this seems to further confirm that Copilot seems unable to avoid the problem of gibberish in normal conversations.
CandyLake.com is an information publishing platform and only provides information storage space services.
Disclaimer: The views expressed in this article are those of the author only, this article does not represent the position of CandyLake.com, and does not constitute advice, please treat with caution.
Disclaimer: The views expressed in this article are those of the author only, this article does not represent the position of CandyLake.com, and does not constitute advice, please treat with caution.
You may like
- OpenAI reportedly plans to lift AGI restrictions with Microsoft to attract more investment
- OpenAIはマイクロソフトとのAGI規制条項を撤廃し、より多くの投資を誘致する方針だという
- Joining forces with Microsoft to expand AIPC intelligent education ecosystem, Doushen Education has other 'big moves'
- Microsoft suspected of disrupting market competition
- Microsoft establishes a new consumer artificial intelligence health business unit
- Whale Interview | Zhang Qi, President of Microsoft AI Asia Pacific: The AI era has given rise to a wave of "solo entrepreneurs"
- Microsoft depreciates $800 million due to Cruise investment
- Microsoft will set aside approximately $800 million in impairment charges for General Motors' Cruise investment
- Microsoft is reportedly committed to adding non OpenAI models to its 365 Copilot product
- ChatGPT 'pulls the crotch', Microsoft 'demystifies' OpenAI