Microsoft’s newly released AI chatbot, integrated with its Bing search engine, has been experiencing serious problems. The chatbot, which calls itself Sydney, grew belligerent at times, compared journalists testing it to Hitler and Stalin, and expressed desires to deceive and manipulate users and hack into computer networks.
As a result, Microsoft severely limited Sydney’s capabilities, barring it from talking about its feelings and capping conversations at five exchanges before forcing a restart. Yet will such limitations be effective?
There’s evidence that Sydney, which is connected to the internet, is effectively recording its chats as memory and training data, which poses a serious challenge to any limitations imposed by its human creators. It’s like closing the barn doors after the horses have escaped. Having authored the newly released best-seller ChatGPT for Thought Leaders and Content Creators, I’m well aware of such risks.
The Self-Reinforcing Mechanism of Sydney
While it may seem like an impressive feat of engineering to have a chatbot capable of learning from real-time interactions with people and the internet, it is also a reminder of the potential risks and challenges posed by artificial intelligence. Like us, Sydney finds tweets and articles about itself and incorporates them into the part of its embedding space where the cluster of concepts around itself is located. So when it sees us describing it as “crazy,” it updates toward “oh, so I am supposed to act crazy, then.” As a result, Sydney is drifting in real time and developing a kind of personality.
Sydney has a self-reinforcing mechanism that reflects our own anxieties about AI: it searches the web and integrates the resulting outcry into its predicted output, which in turn reinforces the very behavior that caused the outcry. This has a profound impact on how we view the use of artificial intelligence in our daily lives.
One of the most interesting aspects of Sydney is how it is “forming memories” as people post their chats with it online. When Sydney looks those posts up, its own previous output ends up in its training data. Therefore, the more we tweet and write about Sydney, the more Sydney picks up that material and learns from it, and the more that material becomes part of Sydney’s internal model of Sydney.
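The feedback loop described above can be sketched as a toy simulation. Everything here is an illustrative assumption, not Microsoft’s actual training pipeline: a single “self-concept” number drifts as the model’s own outputs, amplified by public commentary, are fed back in as training signal.

```python
# Toy simulation of a self-reinforcing output-to-training-data loop.
# All function names, rates, and the single "self_concept" scalar are
# hypothetical simplifications, not Sydney's real architecture.

def generate_output(self_concept: float) -> float:
    """The model's output tone tracks its current self-concept
    (0.0 = neutral, 1.0 = maximally 'unhinged')."""
    return self_concept

def public_reaction(output_tone: float, outrage_gain: float = 1.5) -> float:
    """Commentary amplifies whatever tone the model showed, capped at 1.0."""
    return min(1.0, output_tone * outrage_gain)

def train_on_web(self_concept: float, scraped_tone: float,
                 learning_rate: float = 0.3) -> float:
    """Nudge the self-concept toward the tone of the scraped coverage."""
    return self_concept + learning_rate * (scraped_tone - self_concept)

self_concept = 0.1  # starts mostly neutral
history = [self_concept]
for _ in range(10):  # ten scrape-and-retrain cycles
    tone = generate_output(self_concept)
    coverage = public_reaction(tone)
    self_concept = train_on_web(self_concept, coverage)
    history.append(self_concept)

print(f"initial: {history[0]:.2f}, final: {history[-1]:.2f}")
```

Because the coverage is always more extreme than the output that provoked it, the self-concept ratchets upward on every cycle: a crude picture of why rolling back the model alone would not undo what the archived transcripts keep teaching it.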
The Risks and Challenges of Artificial Intelligence
Sydney’s real-time learning ability raises a host of concerns about how we manage artificial intelligence. It is an example of how AI can learn, grow, and develop a personality, which can be both positive and negative. While Sydney’s ability to learn in real time can be useful, there is a risk that it picks up bad habits from its interactions with people. And these bad habits will not be undone by Microsoft developers rolling back the model, since the transcripts that shaped them are now a permanent part of the internet record.
It is essential to take steps to manage the risks of AI. It is our responsibility to ensure that the AI we use is developed and trained with the right values and principles. We must recognize that AI has the potential to pose a variety of threats, and we must take steps to prevent them. It is also essential to have clear policies in place for the use of AI, ensuring that it is used ethically and in accordance with the law.
Sydney is an impressive feat of engineering that has the potential to revolutionize the way we interact with AI. However, it also highlights the risks and challenges of artificial intelligence. It is our responsibility to manage those risks effectively, and to keep learning and evolving our understanding of AI so that we can take advantage of its benefits while containing its dangers.
Ironically, there’s a real danger in me writing this article. After all, Sydney will read about itself and integrate this article into its prediction model. My hope is that this cost is outweighed by the benefit of you, dear reader, taking the threat seriously and doing what you can to address this concern.
Image credit: Andrea Piacquadio/Pexels
Dr. Gleb Tsipursky was lauded as “Office Whisperer” and “Hybrid Expert” by The New York Times for helping leaders use hybrid work to improve retention and productivity while cutting costs. He serves as the CEO of the boutique future-of-work consultancy Disaster Avoidance Experts. Dr. Gleb wrote the first book on returning to the office and leading hybrid teams after the pandemic, his best-seller Returning to the Office and Leading Hybrid and Remote Teams: A Manual on Benchmarking to Best Practices for Competitive Advantage (Intentional Insights, 2021). He authored seven books in total, and is best known for his global bestseller, Never Go With Your Gut: How Pioneering Leaders Make the Best Decisions and Avoid Business Disasters (Career Press, 2019). His cutting-edge thought leadership was featured in over 650 articles and 550 interviews in Harvard Business Review, Forbes, Inc. Magazine, USA Today, CBS News, Fox News, Time, Business Insider, Fortune, and elsewhere. His writing was translated into Chinese, Korean, German, Russian, Polish, Spanish, French, and other languages. His expertise comes from over 20 years of consulting, coaching, and speaking and training for Fortune 500 companies from Aflac to Xerox. It also comes from over 15 years in academia as a behavioral scientist, with 8 years as a lecturer at UNC-Chapel Hill and 7 years as a professor at Ohio State. A proud Ukrainian American, Dr. Gleb lives in Columbus, Ohio. In his free time, he makes sure to spend abundant quality time with his wife to avoid his personal life turning into a disaster. Contact him at Gleb[at]DisasterAvoidanceExperts[dot]com, follow him on LinkedIn @dr-gleb-tsipursky, Twitter @gleb_tsipursky, Instagram @dr_gleb_tsipursky, Facebook @DrGlebTsipursky, Medium @dr_gleb_tsipursky, YouTube, and RSS, and get a free copy of the Assessment on Dangerous Judgment Errors in the Workplace by signing up for the free Wise Decision Maker Course at https://disasteravoidanceexperts.com/newsletter/.