ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future

A robot head on fire with the ChatGPT logo on one side. (Image credit: OpenAI, Shutterstock)

Is ChatGPT old news already? It seems impossible, with the explosion in AI’s popularity seeping into every aspect of our lives - whether that’s forging digital masterpieces with the best AI art generators or helping us with our online shopping.

But despite being the leader in the AI arms race - and powering Microsoft’s Bing AI - it looks like ChatGPT might be losing momentum. According to SimilarWeb, traffic to OpenAI’s ChatGPT site dropped by almost 10% compared to the previous month, while Sensor Tower metrics show that downloads of the iOS app are also in decline.

As reported by Insider, paying users of the more powerful GPT-4 model (access to which is included in ChatGPT Plus) have been complaining on social media and OpenAI’s own forums about a dip in output quality from the chatbot.

The general consensus was that GPT-4 was generating outputs faster, but at a lower level of quality. Peter Yang, a product lead for Roblox, took to Twitter to decry the bot’s recent work, claiming that “the quality seems worse”. One forum user said the recent GPT-4 experience felt “like driving a Ferrari for a month then suddenly it turns into a beaten up old pickup”.

Why is GPT-4 suddenly struggling?

Some users were even harsher, calling the bot “dumber” and “lazier” than before, with a lengthy thread on OpenAI’s forums filled with all manner of complaints. One user, ‘bitbytebit’, described it as “totally horrible now” and “braindead vs. before”.

According to users, there was a point a few weeks ago where GPT-4 became massively faster - but at the cost of output quality. The AI community has speculated that this could be down to a shift in OpenAI’s design approach for the more powerful machine learning model - namely, breaking it up into multiple smaller models trained in specific areas, which can act in tandem to deliver the same end result while being cheaper for OpenAI to run.

OpenAI has yet to officially confirm this is the case, and has made no mention of any such major change to the way GPT-4 works. But it’s a credible explanation according to industry experts like Sharon Zhou, CEO of AI-building company Lamini, who described the multi-model idea as the “natural next step” in developing GPT-4.
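For the curious, here’s a rough, purely illustrative Python sketch of how a ‘multiple smaller models’ setup (often called a mixture-of-experts design) might hang together. To be clear, this isn’t based on anything OpenAI has confirmed - the router, the experts, and the keyword check below are all invented for illustration.

```python
# A minimal, hypothetical sketch of the 'multiple smaller models' idea.
# Nothing here reflects OpenAI's actual implementation - the experts,
# the router, and its keyword check are invented purely for illustration.

def code_expert(prompt: str) -> str:
    return f"[code expert] draft answer for: {prompt}"

def writing_expert(prompt: str) -> str:
    return f"[writing expert] draft answer for: {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general expert] draft answer for: {prompt}"

EXPERTS = {
    "code": code_expert,
    "writing": writing_expert,
    "general": general_expert,
}

def route(prompt: str) -> str:
    """Decide which smaller model should handle the prompt.

    A real system would use a learned router; this keyword check
    simply stands in for that decision.
    """
    lowered = prompt.lower()
    if "python" in lowered or "bug" in lowered:
        return "code"
    if "essay" in lowered or "poem" in lowered:
        return "writing"
    return "general"

def answer(prompt: str) -> str:
    # Only the chosen expert runs, which is what makes this kind of
    # design cheaper to operate than one giant model answering everything.
    expert = EXPERTS[route(prompt)]
    return expert(prompt)

if __name__ == "__main__":
    print(answer("Fix this Python bug for me"))
    print(answer("Write me a short poem about GPUs"))
```

The appeal of this approach is cost: each request only has to run through a smaller, specialised model rather than the full-sized one - but if the router picks a weaker specialist, the answer can feel noticeably worse, which is exactly what users say they’re experiencing.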

AIs eating AIs

However, there’s another pressing problem with ChatGPT that some users suspect could be the cause of the recent drop in performance - an issue that the AI industry seems largely unprepared to tackle.

If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard are trained on vast amounts of data scraped from the public internet, which they draw on when generating responses. In recent months, a veritable boom in AI-generated content online - including an unwanted torrent of AI-authored novels on Kindle Unlimited - means that LLMs are increasingly likely to scoop up material that was itself produced by an AI when hunting through the web for training data.

ChatGPT app downloads have slowed, indicating a decrease in overall public interest. (Image credit: Future)

This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available both to professionals and the wider public, the risk of AI cannibalism is becoming increasingly prevalent - especially since there’s yet to be any meaningful demonstration of how AI models might accurately differentiate between ‘real’ information and AI-generated content.
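To get a feel for why that feedback loop matters, here’s a deliberately crude, back-of-the-envelope simulation in Python. The numbers - how quickly AI-generated text piles up online, and how faithfully each new model generation preserves the quality of its training data - are assumptions made up for illustration, not measurements; the point is simply how quickly quality can erode once models start learning from each other’s output.

```python
# A toy simulation of the 'AI cannibalism' feedback loop described above.
# All constants are invented for illustration: we assume each training
# generation preserves only part of its corpus quality, and that an
# ever-larger share of that corpus is AI-generated.

HUMAN_QUALITY = 1.0      # quality score of purely human-written text (arbitrary units)
AI_SHARE_GROWTH = 0.15   # assumed growth in the share of AI-generated text online per generation
FIDELITY = 0.9           # assumed fraction of corpus quality a new model manages to capture

def simulate(generations: int) -> None:
    model_quality = HUMAN_QUALITY
    ai_share = 0.0  # fraction of the training corpus that is AI-generated

    for gen in range(1, generations + 1):
        # The training data is a blend of human text and earlier AI output.
        corpus_quality = (1 - ai_share) * HUMAN_QUALITY + ai_share * model_quality
        # The new model captures only part of its corpus quality.
        model_quality = FIDELITY * corpus_quality
        # More AI-generated text ends up on the web before the next scrape.
        ai_share = min(1.0, ai_share + AI_SHARE_GROWTH)
        print(f"generation {gen}: AI share {ai_share:.0%}, model quality {model_quality:.2f}")

simulate(6)
```

Run it and the quality score drifts steadily downwards generation after generation - a cartoonish version of the ‘gradual decline in output coherence and quality’ that researchers worry about.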

Discussions around AI have largely focused on the risks it poses to society - for example, Facebook owner Meta recently declined to open up its new speech-generating AI to the public after it was deemed ‘too dangerous’ to be released. But content cannibalism is more of a risk to the future of AI itself: it threatens to ruin the functionality of tools such as ChatGPT, which depend on original, human-made material in order to learn and generate content.

Do you use ChatGPT or GPT-4? If so, have you noticed a drop in quality recently, or have you simply lost interest in the chatbot? I’d love to hear from you on Twitter. And with so many competitors now springing up, is it possible that OpenAI’s dominance might be coming to an end?

Christian Guyton
Editor, Computing

Christian is TechRadar’s UK-based Computing Editor. He came to us from Maximum PC magazine, where he fell in love with computer hardware and building PCs. He was a regular fixture amongst our freelance review team before making the jump to TechRadar, and can usually be found drooling over the latest high-end graphics card or gaming laptop before looking at his bank account balance and crying.

Christian is a keen campaigner for LGBTQ+ rights and the owner of a charming rescue dog named Lucy, having adopted her after he beat cancer in 2021. She keeps him fit and healthy through a combination of face-licking and long walks, and only occasionally barks at him to demand treats when he’s trying to work from home.