The surprising reason ChatGPT and other AI tools make things up – and why it’s not just a glitch
Why do AI tools hallucinate?

Large language models (LLMs) like ChatGPT have wowed the world with their capabilities. But they’ve also made headlines for confidently spewing absolute nonsense.
This phenomenon, known as hallucination, ranges from fairly harmless mistakes – like getting the number of ‘r’s in strawberry wrong – to completely fabricated legal cases that have landed lawyers in serious trouble.
Sure, you could argue that everyone should rigorously fact-check anything AI suggests (and I’d agree). But as these tools become more ingrained in our work, research, and decision-making, we need to understand why hallucinations happen – and whether we can prevent them.
The ghost in the machine
To understand why AI hallucinates, we need a quick refresher on how LLMs actually work.
LLMs don’t retrieve facts like a search engine or a human looking something up in a database. Instead, they generate text by making predictions.
“LLMs are next-word predictors and daydreamers at their core,” says software engineer Maitreyi Chatterjee. “They generate text by predicting the statistically most likely word that occurs next.”
We often assume these models are thinking or reasoning, but they’re not. They’re sophisticated pattern predictors – and that process inevitably leads to errors.
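If you're curious what "predicting the next word" actually looks like, here's a minimal sketch in Python using the open-source Hugging Face transformers library. I've used the small GPT-2 model purely as a stand-in for a modern LLM, and the prompt is my own example – the point is simply that the model ranks likely continuations rather than looking anything up.

```python
# A minimal sketch of next-token prediction using the Hugging Face
# "transformers" library. GPT-2 stands in for a modern LLM here purely
# because it's small enough to run locally; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# Look only at the prediction for the token that comes *after* the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)

# The model doesn't look anything up; it just ranks likely continuations.
top = torch.topk(probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  {prob.item():.2%}")
```

Run it and you get a ranked list of candidate next tokens with probabilities – and nothing in that process checks whether the most likely continuation is actually true.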
This explains why LLMs struggle with seemingly simple things, like counting the ‘r’s in strawberry or solving basic math problems. They’re not sitting there working it out like we would – not really.
Another key reason is they don’t check what they’re pumping out. “LLMs lack an internal fact-checking mechanism, and because their goal is to predict the next token [unit of text], they sometimes prefer lucid-sounding token sequences over correct ones,” Chatterjee explains.
And when they don’t know the answer? They often make something up. “If the model’s training data has incomplete, conflicting, or insufficient information for a given query, it could generate plausible but incorrect information to ‘fill in’ the gaps,” Chatterjee tells me.
Rather than admitting uncertainty, many AI tools default to producing an answer – whether it’s right or not. Other times, they have the correct information but fail to retrieve or apply it properly. This can happen when a question is complex, or the model misinterprets context.
This is why prompts matter.
The hallucination-smashing power of prompts
Certain types of prompts can make hallucinations more likely. We’ve already covered our top tips for leveling up your AI prompts – not just for getting more useful results, but also for reducing the chances of AI going off the rails.
For example, ambiguous prompts can cause confusion, leading the model to mix up knowledge sources. Chatterjee says this is where you need to be careful: ask “Tell me about Paris” without context, and you might get a strange blend of facts about Paris, France; Paris Hilton; and Paris from Greek mythology.
But more detail isn’t always better. Overly long prompts can overwhelm the model, making it lose track of key details and start filling in gaps with fabrications. Similarly, when a model isn’t given enough time to process a question, it’s more likely to make errors. That’s why techniques like chain-of-thought prompting – where the model is encouraged to reason through a problem step by step – can lead to more accurate responses.
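To make that concrete, here's a rough sketch of chain-of-thought prompting using OpenAI's Python SDK. The model name and the maths question are my own placeholders – any chat-capable model works the same way, and the only real change is asking it to reason step by step before answering.

```python
# A rough sketch of chain-of-thought prompting with the OpenAI Python SDK.
# The model name and the question are placeholders; swap in whatever
# chat-capable model you actually use.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

question = "A jacket costs $60 after a 25% discount. What was the original price?"

# Prompt 1: ask for the answer directly.
direct = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Prompt 2: ask the model to reason through the problem step by step
# before committing to a final answer.
step_by_step = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + " Work through the problem step by step, then give the final answer.",
    }],
)

print("Direct:", direct.choices[0].message.content)
print("Step by step:", step_by_step.choices[0].message.content)
```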
Providing a reference is another effective way to keep AI on track. “You can sometimes solve this problem by giving the model a ‘pre-read’ or a knowledge source to refer to so it can cross-check its answer,” Chatterjee explains. Few-shot prompting, where the model is given a series of examples before answering, can also improve accuracy.
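Here's what that can look like in practice: a sketch that combines a "pre-read" reference passage with a couple of few-shot examples. The handbook passage and the questions are entirely made up for illustration – the idea is just to show the shape of a grounded, example-led prompt you'd send to whichever chatbot or API you're using.

```python
# A sketch of a grounded, few-shot prompt: the model gets a reference
# passage to cross-check against, plus worked examples of the answer
# format. The handbook text and questions are invented for illustration.
reference = (
    "Company handbook, section 4: Staff may work remotely up to three days "
    "per week. Remote days must be agreed with a line manager in advance."
)

few_shot_examples = (
    "Q: How many remote days are allowed per week?\n"
    "A: Up to three, according to section 4 of the handbook.\n\n"
    "Q: Who approves remote days?\n"
    "A: Your line manager, according to section 4 of the handbook.\n\n"
)

question = "Can I work remotely four days next week?"

prompt = (
    "Answer using only the reference below. If the reference doesn't cover "
    "the question, say you don't know.\n\n"
    f"Reference:\n{reference}\n\n"
    f"{few_shot_examples}"
    f"Q: {question}\nA:"
)

print(prompt)  # send this as the user message to your chat model of choice
```

Telling the model to say it doesn't know when the reference doesn't cover the question is doing a lot of work here: it gives the model an explicit alternative to guessing.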
Even with these techniques, hallucinations remain an inherent challenge for LLMs. As AI evolves, researchers are working on ways to make models more reliable. But for now, understanding why AI hallucinates, how to prevent it, and, most importantly, why you should fact-check everything remains essential.