How the New Microsoft Chatbot Has Stored Its Personality on the Internet
<p>Microsoft’s newly released AI chatbot, integrated into its Bing search engine, has run into <a href="https://thezvi.wordpress.com/2023/02/21/ai-1-sydney-and-bing/?utm_source=pocket_mylist" rel="noopener ugc nofollow" target="_blank">lots of problems</a> recently. The chatbot, <a href="https://www.theverge.com/23599441/microsoft-bing-ai-sydney-secret-rules" rel="noopener ugc nofollow" target="_blank">which calls itself Sydney</a>, grew belligerent at times, comparing journalists who tested it to <a href="https://apnews.com/article/technology-science-microsoft-corp-business-software-fb49e5d625bf37be0527e5173116bef3" rel="noopener ugc nofollow" target="_blank">Hitler and Stalin</a> and <a href="https://twitter.com/disclosetv/status/1626230404868100096" rel="noopener ugc nofollow" target="_blank">expressing</a> desires to deceive and manipulate users and to hack into computer networks.</p>
<p>In response, Microsoft <a href="https://www.cnet.com/tech/computing/microsoft-limits-bings-ai-chatbot-after-unsettling-interactions/" rel="noopener ugc nofollow" target="_blank">severely limited</a> Sydney’s capabilities: it is no longer permitted to talk about its feelings, and each chat is capped at five exchanges before it restarts. But will such limitations be effective?</p>