I’ve found the interaction between memory and sycophancy to be a major issue. I was trying to help deal with a difficult and confusing medical issue, and based on some lab tests, GPT had determined it was likely condition X. I had also asked a lot of questions about the treatment of condition X. After some follow-up tests, it was clear that it wasn’t condition X, but it was extremely difficult to get ChatGPT to give up on it, even after deleting old chats and starting new ones, even after changing the name of the patient. Starting in a private chat worked, but those chats get deleted when you close the window.
I’ve had GPT-5 respond directly to prompts from months-old chats that are totally unrelated. It also sometimes completely forgets the current context and replies to a prompt as though it were the first one in the chat. I started seeing this on GPT-5 release day.
Settings > Personalization > Reference Chat History
Just to clarify: This isn’t about memory being “on” or toggled in settings.
This was a controlled test involving GPT-5 and GPT-4o, which are supposed to have completely separate memory contexts per OpenAI’s documentation.
I typed a unique phrase into GPT-5, then deleted the chat.
I opened a new GPT-4o chat, with no shared history, and asked a vague follow-up.
GPT-4o quoted the exact phrase back.
In another test, it referenced material from an entirely different GPT-5 chat, never typed into 4o at all.
So this isn’t memory acting strange—it’s GPT-4o accessing GPT-5-only content, even from deleted sessions.
According to OpenAI:
“Chats with GPT-4o don’t currently use memory” and “Memory is unique to each model.”
If this is replicable, it’s not just a quirk—it’s a model boundary violation.
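To make the expected boundary concrete, here is a minimal sketch of what the FAQ’s “memory is unique to each model” guarantee implies. The `ModelMemory` class and its methods are invented purely for illustration; this models the documented behavior, not OpenAI’s actual implementation or API:

```python
# Toy model of per-model memory isolation as the Memory FAQ describes it.
# Illustrative stub only -- not OpenAI's implementation.

class ModelMemory:
    """Keeps a separate memory store per model name."""
    def __init__(self):
        self._stores = {}  # model name -> list of remembered snippets

    def remember(self, model, snippet):
        self._stores.setdefault(model, []).append(snippet)

    def recall(self, model):
        # Correct behavior: only this model's own snippets are visible.
        return list(self._stores.get(model, []))

    def forget_all(self, model):
        # Deleting chats/memory for a model should purge its store.
        self._stores.pop(model, None)


mem = ModelMemory()
mem.remember("gpt-5", "unique test phrase xyzzy-42")

# The boundary the FAQ promises: GPT-4o sees nothing from GPT-5.
assert "unique test phrase xyzzy-42" not in mem.recall("gpt-4o")

# And after deletion, even GPT-5 itself should no longer recall it.
mem.forget_all("gpt-5")
assert mem.recall("gpt-5") == []
```

The behavior I observed corresponds to `recall("gpt-4o")` returning a snippet that was only ever stored under `"gpt-5"`, which is exactly what the documentation says cannot happen.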
Happy to clarify details if others want to try reproducing.
https://help.openai.com/en/articles/8590148-memory-faq
Did you disable Memory[1]?
[1] https://help.openai.com/en/articles/8590148-memory-faq