OpenAI is adding a recall capability to ChatGPT, allowing the artificial intelligence assistant to draw on users’ prior interactions across text, voice, and image generation to shape its current responses. The new memory function aims to increase personalization and conversational efficiency by remembering context users have previously shared.
Initially, this update is accessible to ChatGPT Pro and Plus subscribers, although users in the U.K., EU, and EFTA countries (Iceland, Liechtenstein, Norway, Switzerland) will experience a delay due to pending regulatory evaluations. This development follows Google’s addition of a similar memory feature to its Gemini assistant earlier this year.
User Control and Memory Mechanics Explained
ChatGPT’s memory operates in two distinct ways, as detailed in OpenAI’s official FAQ. “Saved Memories” let users explicitly tell ChatGPT facts to remember, such as preferences; the AI can also save details automatically when it identifies them as helpful during a chat. Users can view and delete these specific Saved Memories individually within settings.
Importantly, deleting a chat does not remove memories saved from it; they persist separately and require direct deletion. This system builds upon a previous capability introduced in 2024 that required users to explicitly direct memory actions.
The second mechanism involves ChatGPT implicitly referencing the user’s broader “Chat History” to inform its responses and improve conversations over time. While users can disable the referencing of chat history entirely (which OpenAI states leads to the deletion of this implicitly gathered information from its systems within 30 days), they cannot view or selectively delete the insights gleaned this way. Turning off “Saved Memories” also automatically disables the chat history referencing. Currently, free tier users only have access to the explicit “Saved Memories” function.
Starting today, memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.
— OpenAI (@OpenAI) April 10, 2025
For conversations users prefer to keep out of memory, a “Temporary Chat” option bypasses both memory and model training, though OpenAI reserves the right to retain copies for up to 30 days for safety checks. Custom GPTs will continue to feature their own distinct memory systems, separate from the user’s main ChatGPT memory, if enabled by the GPT builder.
Toward a More Personalized AI Experience
The core goal of the memory feature, according to OpenAI, is to eliminate the friction of repeating context, making interactions feel more continuous and natural. This addresses a common user complaint, particularly for complex, multi-session projects.
It integrates with ChatGPT’s existing modalities: for voice interactions, which saw web availability and flow improvements in late March 2025, memory allows the AI to maintain conversational threads. Similarly, for the integrated GPT-4o image generation feature, memory could potentially recall stylistic preferences or subjects.
OpenAI acknowledges the privacy implications. It states that efforts are underway to prevent the AI from proactively retaining sensitive information unless instructed, and it confirms that Enterprise and Team user data is excluded from model training. Framing the feature’s longer-term direction, CEO Sam Altman stated on X: “This is a surprisingly great feature imo, and it points at something we are excited about: ai systems that get to know you over your life, and become extremely useful and personalized.”
we have greatly improved memory in chatgpt–it can now reference all your past conversations!

this is a surprisingly great feature imo, and it points at something we are excited about: ai systems that get to know you over your life, and become extremely useful and personalized.

— Sam Altman (@sama) April 10, 2025
Rollout Context and Capacity Considerations
The initial deployment focuses on paying subscribers, potentially reflecting the resource demands of continuous memory referencing. While OpenAI cited regulatory reviews for the current regional limitations, Altman also acknowledged potential operational friction earlier in April, posting on X that users “should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges.”
This comes amid reports, including code discoveries in ChatGPT’s web interface noted April 10th, suggesting OpenAI may soon release new models like o3, o4-mini, and a potential GPT-4.1.
Altman confirmed on April 4th a “Change of plans,” delaying the anticipated GPT-5 launch “a few months” to prioritize releasing the o3 and o4-mini reasoning models first. This reversed a plan from February to merge o3 into GPT-5, with Altman citing a decision to “decouple reasoning models and chat/completion models.”