Bing’s ChatGPT brain is behaving so oddly that Microsoft may rein it in

White cyborg finger about to touch human finger on city background 3D rendering
(Image credit: Sdecoret via Shutterstock)

Microsoft launched its new Bing search engine last week and introduced an AI-powered chatbot to millions of people, creating long waiting lists of users looking to test it out, and a whole lot of existential dread among sceptics. 

The company probably expected some of the chatbot's responses to be a little inaccurate the first time it met the public, and had put measures in place to stop users who tried to push the chatbot into saying or doing strange, racist or harmful things. These precautions haven't stopped users from jailbreaking the chatbot anyway, and getting it to use slurs or respond inappropriately. 

While it had these measures in place, Microsoft wasn't quite ready for the very strange, bordering on unsettling, experiences some users were having after trying to hold more informal, personal conversations with the chatbot. These included the chatbot making things up, throwing tantrums when called out on a mistake, or just having a full-on existential crisis.

In light of the bizarre responses, Microsoft is considering new safeguards and tweaks to curtail this strange, sometimes too-human behaviour. This could mean letting users restart conversations or giving them more control over tone. 

Microsoft's chief technology officer told The New York Times that the company is also considering cutting down the length of conversations users can have with the chatbot before things enter odd territory. Microsoft has already admitted that long conversations can confuse the chatbot, and that it can pick up on users' tone, which is where things might start going sour. 

In a blog post, Microsoft admitted that its new technology was being used in ways it “didn’t fully envision”. The tech industry seems to be in a mad dash to get in on the artificial intelligence hype, and perhaps that excitement has clouded judgement and put speed over caution. 


Analysis: The bot is out of the bag now

Releasing a technology this unpredictable and imperfect was a risky move by Microsoft, which incorporated AI into Bing in an attempt to revitalise interest in its search engine. It may have set out to create a helpful chatbot that won’t do more than it’s designed to do, such as pull up recipes, help people with puzzling equations, or dig into certain topics, but it’s clear it did not anticipate how determined and successful people can be when they want to provoke a specific response from the chatbot. 

New technology, particularly something like AI, can make people feel the need to push it as far as it can go, especially with something as responsive as a chatbot. We saw similar attempts when Siri was introduced, with users trying their hardest to make the virtual assistant angry, make it laugh, or even get it to date them. Microsoft may not have expected people to give the chatbot such strange or inappropriate prompts, so it wouldn’t have been able to predict how bad the responses could be.

Hopefully the new precautions will curb any further strangeness from the AI-powered chatbot and take away the uncomfortable feeling of it seeming a little too human. 

It’s always interesting to read about ChatGPT, particularly when the bot spirals towards insanity after a few clever prompts, but with a technology this new and untested, nipping problems in the bud is the best thing to do. 

There’s no telling whether the measures Microsoft plans to put in place will actually make a difference, but since the chatbot is already out there, there’s no taking it back. We just have to get used to patching up problems as they come, and hope anything potentially harmful or offensive is caught in time. AI's growing pains may only just have begun.

Muskaan Saxena
Computing Staff Writer

Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison. Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and recently anything AI related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place. Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).