I stopped saying thanks to ChatGPT – here's what happened


The first time ChatGPT responded to me, I instinctively typed, “Thank you, ChatGPT!” It just felt natural. But then I noticed a debate in which some people argued against using polite language with AI, claiming we should stick to direct, emotionless commands and avoid treating technology like a human.

So, feeling curious and a little naive, I experimented. I stripped my prompts down to just instructions – no pleases, no thank yous, just blunt directives. But something felt off. My requests felt unnatural, and oddly enough, the responses seemed less helpful, too.

That got me wondering: could politeness be more than just a social nicety? Could it actually influence AI’s responses – or even how we interact with technology in the long run? To find out, I asked the experts.

Why are people polite to AI?

It turns out that being polite to AI isn’t unusual – it’s the norm. A December 2024 study by Future (the owner of TechRadar) found that 71% of UK respondents and 67% of US respondents say they’re polite to AI – so reassuringly, I’m not alone.

It’s easy to see why. Politeness is ingrained in us from childhood, and when interacting with AI, that habit naturally kicks in. As chatbots like ChatGPT become more sophisticated and human-like, many of us unconsciously treat them as more than just machines – a tendency known as anthropomorphism.

But it’s not just habit. The way we talk to AI may reveal deeper social and ethical considerations. Does our tone with AI reflect how we treat people in everyday life? Could being respectful to chatbots reinforce positive communication habits overall?

Some people even have a cautious superstition about it. In Future’s AI study, 12% of US respondents said they’re polite to AI because they believe it will "remember" them if it ever reaches Skynet (the fictional AGI from Terminator) levels of sentience. While this might sound far-fetched, it highlights a growing unease about the expanding role of AI in our lives – and an over-reliance on sci-fi to understand it, but that's a topic for another day.

Ultimately, though, the big question is this: does politeness actually change AI’s responses – or just our perception of them?

How does politeness impact ChatGPT's responses?

Honestly, the answer to that question is complex and varies depending on who you speak to. But the short answer is: sort of.

“From a technical perspective, being polite generally doesn't impact the actual accuracy of AI responses,” says Maitreyi Chatterjee, a software engineer at LinkedIn. AI models process queries based on content, not tone. But there’s more to it.

“Software engineers like myself do usually train AI models to match the user’s communication style, and this can influence how we perceive the results,” she adds. In other words, AI mirrors our tone. If you phrase a question politely, the chatbot might respond in kind.

Devansh Agarwal, a machine learning engineer at AWS (Amazon), agrees but adds that the effect depends on the AI model itself. “It’s less about politeness directly affecting the response and more about understanding why this happens,” he explains.

For example, customer service chatbots are often designed to mirror user tone while avoiding conflict. “If a user is aggressive, the bot might try to de-escalate by keeping responses neutral and brief,” he says. “Conversely, in polite exchanges, the bot might offer more detailed answers, since there’s no risk of escalating tension.”

This mirroring effect can shape how we perceive AI’s reliability. “Research suggests that tone influences trust, even when factual accuracy remains unchanged,” says Chatterjee. In short: polite phrasing doesn’t make AI smarter, but it can make it feel more helpful.

It's all about context

Interestingly, politeness doesn’t just influence AI’s tone – it can actually improve response quality. That’s not because ChatGPT rewards you for being nice, but because by being nice you’re often giving it more context – maybe without even realizing it.

"Polite phrasing often leads to richer prompts, which in turn result in better responses," says Scott Valdez, the co-founder and CEO of Ari and Founder of VIDA Select. To illustrate his point, he asks me to compare these two prompts:

“Give me dating advice!” is a vague command that will likely get a generic response.

“Would you help me understand how to build dating confidence?” uses more polite phrasing and adds layers of detail, which in turn provides:

  • A focus area (dating vs. general confidence)
  • A starting point (learning vs. demanding advice)
  • A timeline (building confidence vs. quick-fixing a problem)
  • A preferred response style (explanation vs. directive command)

So it’s not just that the second version is more polite – it’s clearer. Valdez explains that this pattern snowballs over time. Friendly, well-structured queries create a positive feedback loop:

  • More context = better initial response.
  • Users engage more deeply with thoughtful replies.
  • Follow-up guidance becomes increasingly tailored.

Conversely, blunt prompts tend to create a downward spiral. Give ChatGPT a vague command and it'll give you a generic answer. You might get frustrated and give even shorter follow-ups, which leads to a decline in response quality.
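
If you’re curious to test this pattern yourself, here’s a minimal sketch that sends both of the prompts above to a chatbot and prints the replies side by side. It assumes the OpenAI Python SDK and an API key in your environment – the model name is just an illustrative choice, and the same idea works in any chatbot interface, including simply typing both prompts into ChatGPT.

```python
# Minimal sketch: compare a blunt prompt with a polite, context-rich one.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set
# in the environment; the model name below is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Give me dating advice!",  # blunt, low-context prompt
    "Would you help me understand how to build dating confidence?",  # polite, richer prompt
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content
    print(f"PROMPT: {prompt}")
    print(f"REPLY ({len(reply)} characters):\n{reply}")
    print("-" * 60)
```

The character count is only a rough proxy for detail – reading the two replies back to back is what really shows how much more tailored the second one tends to be.
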

Beyond individual interactions, politeness may also tap into how AI is trained.

“AI systems learn from millions of human exchanges, where polite, back-and-forth dialogue tends to produce richer, more nuanced conversations,” Valdez notes. “When users mirror this style, they unlock more sophisticated responses – like getting personalized advice instead of generic tips.”

What does the research say? Well, it's still early days. However, one 2024 study found that polite prompts did produce higher-quality responses from LLMs like ChatGPT. Conversely, impolite or aggressive prompts were associated with lower performance and even an increase in bias in AI-generated answers.

However, the really interesting part is that extreme politeness wasn’t necessarily beneficial either. The study found that "moderate politeness" led to the best results – suggesting that AI models, much like humans, respond best to balanced, clear communication.

Interestingly, the researchers also noted cultural differences in AI interactions. Since LLMs are trained on data from specific languages and regions, they may reflect the politeness norms of their training data.

How to get the best responses from AI

  • Use a moderate, natural level of politeness: Research suggests that balanced phrasing – not too abrupt or overly formal – produces the best results.
  • Don't overthink it: If you instinctively add “please” and “thank you,” there’s no harm in keeping them. But AI doesn’t require rigid politeness, so focus on clarity and context rather than etiquette.
  • Use politeness to reduce bias: Early research suggests that aggressive or loaded prompts can increase bias and factual errors in AI responses, while neutral, structured queries lead to more reliable outputs.
  • Be aware of global differences: Since LLMs are trained on different languages and cultures, politeness norms may vary across models.

Does politeness to AI really matter?

People have strong opinions about AI prompts. Some say they should be as direct as possible, while others prefer a human-like touch. Whether you should be polite or not is an ongoing debate with no clear yes or no answer – and it'll largely depend on the AI tool you're using.

However, if, like me, you instinctively add pleases and thank yous, research suggests that’s not just harmless – it might actually help. Polite, well-structured prompts often lead to better responses, and in some cases, they may even reduce bias. That’s not just a nice bonus – it matters for how reliable AI’s answers are.

As AI evolves, it will be fascinating to see whether politeness itself becomes a built-in feature. Could AI favor users who communicate respectfully? Will models be trained to respond differently based on etiquette?

For now, one thing is clear: how we interact with AI shapes how we interact with the world. Politeness isn’t just about getting better answers – it’s about reinforcing habits of clarity and respect in all our interactions.

