ChatGPT's memory upgrade might just be the biggest AI improvement we see all year

[Image: An iPhone showing the ChatGPT logo on its screen (Image credit: ChatGPT)]

OpenAI just rolled out a major memory upgrade for ChatGPT. It's a subtle change, but I think it could mark a significant shift in how people engage with AI, especially over the long term.

Until now, ChatGPT's memory was limited to the current session unless the chatbot decided a detail was worth saving to long-term memory or you manually told it to remember something. Otherwise, every new conversation was a clean slate.

Now, ChatGPT can pull from your entire chat history across every session to respond to your latest query. It knows your vibe, it can track your projects, and it will remember things from your discussions even if you've forgotten them yourself.

It still has the user-saved memory that you deliberately ask it to store, but now, every little comment and question will also be part of how ChatGPT processes conversations with you, like a polite robot intern who’s secretly keeping a journal. If you want to find out what ChatGPT's image of you is, you can just ask it to "Describe me based on all our chats."

You might not think this is such a big change, but as someone who's become a regular user of ChatGPT, I can easily imagine how it will benefit me. When I ask for a recipe idea, ChatGPT will now recall previous recipes it's suggested, ask whether I liked the results, and come up with new meal ideas based on my opinion of the earlier ones.

The same goes for brainstorming bedtime story ideas. I almost never want it to write one outright, but I do get inspiration from the premises ChatGPT suggests, and now it will be better at riffing on the suggestions I've responded to before.

While new features and improvements to AI chatbots can sometimes feel like a lot of noise for something that isn't that big a deal, persistent memory feels like real progress just by being a feature built for the long term. Maintaining context across interactions makes it easier for the overall 'relationship' to feel more meaningful.

It also opens the door to new use cases. Imagine tutoring that adapts to your learning style across weeks. Or therapy journaling with an AI that remembers what you said three sessions ago. Or productivity planning that doesn’t need to be re-explained every Monday morning. You don’t need the AI to be sentient as long as it's consistent.

Memorable moves

ChatGPT's memory improvement isn't without complications, though. Having an AI remember you across time inevitably raises questions about privacy, autonomy, and, frankly, how much information you want your AI companion to have.

Yes, it’s helpful that it remembers you keep kosher and like a bit of spice in your dishes, but you don't want it to assume too much.

This one is pretty specific to me, but I run a lot of tests of ChatGPT and its features, and not every test reflects my real life. I'm not traveling to Japan next week; I just wanted to see how ChatGPT would do at devising an itinerary. Now I have to either delete that session or explain to the AI that it shouldn't draw on that question when answering others.

There’s also a philosophical element. The more AI mimics memory, the easier it becomes to anthropomorphize. If it remembers your favorite sports team, your pet’s name, or your dislike of semicolons, it starts to feel like a person, and it's vital not to ascribe self-awareness to an algorithm that is far from attaining it. It’s easy to trust a tool that remembers you. Maybe too easy in this case.

Nonetheless, for good or ill, I maintain that ChatGPT's comprehensive memory is one of the most consequential AI upgrades this year so far and will likely still be so when 2025 is over.

Memory is a potent trick, even if it doesn't let you make a Studio Ghibli version of yourself. Memory is what turns an inert tool into a long-term assistant. Even if your assistant is just a digital emulation of a brain floating in a cloud, it's nice that it will remember the little things.
