ChatGPT 4o just got better, although I’m yet to notice a difference

ChatGPT logo
(Image credit: ilgmyzin/Unsplash)

  • OpenAI announced a new update for ChatGPT-4o
  • The update has improved the model's capabilities, making it 'more intuitive, creative, and collaborative'
  • This rounds out a crazy week for ChatGPT following the launch of native 4o image generation

Another day, another ChatGPT update. This time, OpenAI has improved its GPT-4o model, making it 'more intuitive, creative, and collaborative.'

What does that mean exactly? The update's release notes highlight three major changes that may or may not prove important, depending on how you use ChatGPT.

The first improvement is in problem-solving, where 4o is now smarter when it comes to STEM and coding problems. OpenAI says GPT-4o now 'generates cleaner, simpler frontend code, more accurately thinks through existing code to identify necessary changes, and consistently produces coding outputs that successfully compile and run, streamlining your coding workflows.'

The second improvement is in GPT-4o's ability to follow instructions, as well as its accuracy when it comes to formatting. With this change, ChatGPT should be better at following prompts that contain multiple requests - think do X, Y, and Z, then complete C.

The third and final improvement is probably the most interesting one, as OpenAI says this update will make the model 'better understand the implied intent behind their prompts, especially when it comes to creative and collaborative tasks.'

OpenAI claims you'll see fewer emojis as 4o is now more concise and clear, creating responses that are 'easier to read, less cluttered, and more focused.'

ChatGPT's "fuzzy" improvements

In AI, "fuzzy logic" is the capability of a model to understand and handle ambiguity and the implied intent behind a prompt.

I tested the GPT-4o update with a few simple prompts, and while I've yet to see any clear improvement over the previous model, the benchmark results don't lie.

From initial testing, it looks like this update has indeed improved ChatGPT's ability to deal with ambiguity, with one user on X sharing a Creative Writing Benchmark that shows GPT-4o overtaking DeepSeek R1.

On lmarena.ai, another benchmark used to gauge an AI model's capabilities, GPT-4o now surpasses GPT-4.5, sitting in the number 2 spot behind Google's newly launched Gemini 2.5 Pro.

I think most people won't notice a difference here, but it's good to see that ChatGPT is continuing to improve, and with OpenAI adding native image generation to 4o earlier this week, this is just the cherry on top.

What a crazy week for AI, surely next week can't top it, right? Right?

John-Anthony Disotto
Senior Writer, AI

John-Anthony Disotto is TechRadar's Senior Writer, AI, bringing you the latest news on, and comprehensive coverage of, tech's biggest buzzword. An expert on all things Apple, he was previously iMore's How To Editor, and has a monthly column in MacFormat. He's based in Edinburgh, Scotland, where he worked for Apple as a technician focused on iOS and iPhone repairs at the Genius Bar. John-Anthony has used the Apple ecosystem for over a decade, and is an award-winning journalist with years of experience in editorial.
