OpenAI is worried that GPT-4o users are developing feelings for the chatbot
What happens when you forget an AI isn’t a real person?
The introduction of GPT-4o has been seen as a major step up in the abilities of OpenAI’s ChatGPT chatbot, as it's now able to produce more lifelike responses and work with a wider range of inputs. However, there may be a downside to this increased sophistication: OpenAI itself is warning that GPT-4o’s capabilities seem to be causing some users to become increasingly attached to the chatbot, with potentially worrying consequences.
Writing in a recent 'system card' blog post for GPT-4o, OpenAI outlined many of the risks associated with the new chatbot model. One of them is “anthropomorphization and emotional reliance,” which “involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models.”
When it comes to GPT-4o, OpenAI says that “During early testing … we observed users using language that might indicate forming connections with the model. For example, this includes language expressing shared bonds, such as ‘This is our last day together’.”
As the blog post explained, such behavior may seem innocent on the surface, but it has the potential to lead to something more problematic, both for individuals and for society at large. To skeptics, it will come as further evidence of the dangers of AI and of the rapid, unregulated development of the technology.
Falling in love with AI
As OpenAI’s blog post admits, forming attachments to an AI might reduce a person’s need for human-to-human interactions, which in turn may affect healthy relationships. As well as that, OpenAI states that ChatGPT is “deferential,” allowing users to interrupt and take over conversations. That kind of behavior is seen as normal with AIs, but it’s rude when done with other humans. If it becomes more normalized, OpenAI believes it could impact regular human interactions.
Emotional attachment isn’t the only risk OpenAI flagged in the post. The company also noted that GPT-4o can sometimes “unintentionally generate an output emulating the user’s voice” – in other words, it could be used to impersonate someone, giving everyone from criminals to malicious ex-partners opportunities to engage in nefarious activities.
Yet while OpenAI says it has enacted measures to mitigate this and other risks, when it comes to users becoming emotionally attached to ChatGPT it doesn’t appear that OpenAI has any specific measures in place yet. The company merely said that “We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior.”
Considering the clear risks of people becoming overly dependent on an artificial intelligence, and the potential wider ramifications if this happens on a large scale, one would hope that OpenAI has a plan that it's able to deploy sooner rather than later. Otherwise, we could be looking at another example of an insufficiently regulated new technology having worrying unintended consequences for individuals, and for society as a whole.