
ChatGPT Is Getting Worse? No. It's Getting More Stateless.

Kenotic Labs · April 7, 2026 · 7 min read

ChatGPT isn't getting dumber. It's architecturally stateless. Every AI system today resets the moment you close the tab, and no amount of model improvement will fix that. The fix is a continuity layer: infrastructure that persists, updates, and reconstructs your context across sessions. Kenotic Labs built one.

You're not imagining it.

ChatGPT feels worse than it did six months ago. Your prompts are getting longer. Your results are getting shorter. You're re-explaining things you've already said. You're starting conversations from scratch, again, because the model forgot everything from yesterday.

You're not alone. ChatGPT's mobile app market share dropped from 69% to 45% in just over a year. People are switching between Claude, Gemini, and Copilot, only to find the same problem everywhere.

The internet's consensus? "AI is getting dumber."

Reddit threads in r/ChatGPT (5M+ members) are filled with it. "Why is AI so bad now?" "AI is getting dumber every update." "ChatGPT doesn't understand context anymore."

But the diagnosis is wrong. The models aren't getting dumber. They're getting more stateless. That distinction changes everything about where the fix has to come from.

Why Does ChatGPT Keep Forgetting Everything?

OpenAI shipped a memory feature. So did Google. So did Anthropic. What those features actually do:

They store flat facts. "User prefers Python." "User lives in Michigan." "User is working on a startup."

That's a profile. It's not continuity.

Sure, inside a single conversation, any model with a long enough context window can track what you've said. It can summarize your situation, remember your sister's interview, update when plans change. That's not memory. That's just reading the chat log.

Now close the tab. Come back tomorrow. Ask these:

  • "What was I stressed about last week, and has it resolved?"
  • "Summarize my current situation across everything I've told you."
  • "My sister's job interview, did I mention whether she got it?"
  • "I changed my mind about the project timeline. Update everything downstream."

Gone. All of it. ChatGPT's memory might recall that you have a sister. It won't know she was interviewing at Google, that you were nervous for her, or whether the situation resolved. Mem0 might store a fact about the interview. It won't know whether that fact is still active or outdated. RAG might retrieve a similar chunk from an old conversation. It won't reconstruct the current state of anything.

These aren't fact-retrieval questions. They're reconstruction questions. They require a system that persists across sessions, tracks what changed, and brings it all back in the right form. The context window isn't the solution. A layer underneath has to do the work.

Inside the session, the model can hold your life in its head. The moment the session ends, it's all gone.
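
To make that concrete, here is a minimal sketch of the difference between a flat stored fact and a trace that could support reconstruction. Every name in it (Trace, status, supersedes) is illustrative only, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative sketch only: a flat "memory" fact vs. a trace carrying the
# state a system would need to reconstruct a situation later.

# What today's memory features store: a profile entry with no lifecycle.
flat_fact = {"user": "alice", "fact": "has a sister"}

@dataclass
class Trace:
    """A structured trace: the fact plus the state needed to bring it back."""
    subject: str                       # whose narrative this belongs to
    content: str                       # "sister interviewing at Google"
    status: str                        # "active" | "resolved" | "superseded"
    recorded_at: datetime              # when this was written
    supersedes: Optional[str] = None   # id of the trace this revises
    trace_id: str = ""

# "Did I mention whether she got it?" is only answerable if the system can
# see that this trace is still 'active' (unresolved) when you come back.
interview = Trace(
    subject="alice",
    content="sister interviewing at Google; user nervous for her",
    status="active",
    recorded_at=datetime(2026, 3, 30),
    trace_id="t-001",
)
```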

Why Is AI So Bad at Remembering Context?

Every time you open a new ChatGPT conversation, the same thing happens.

The model loads. Your context window is empty. You type something. The model responds. Every exchange adds tokens to the window. At some point, the window fills up. Older messages get compressed or dropped. By the time you're twenty messages deep, the model is working with a degraded, lossy summary of what you already told it.

This is context collapse, and it's not unique to ChatGPT. It happens in Claude. It happens in Gemini. It happens in every AI product on the market. ChatGPT is getting worse for the same reason every AI tool feels worse: the architecture has no persistence layer.

The model is intelligent per session. It's amnesiac across time.
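
As a toy illustration of that mechanism: a fixed token budget forces the oldest turns out of the window as the conversation grows. The four-characters-per-token estimate and function names below are simplifications of my own, not how any particular product implements it.

```python
# Toy illustration of context collapse: a fixed token budget forces the
# oldest turns out of the window. Token counting is crudely approximated.

MAX_TOKENS = 8_000

def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); real tokenizers differ.
    return max(1, len(text) // 4)

def fit_to_window(messages: list[str], budget: int = MAX_TOKENS) -> list[str]:
    """Keep the most recent messages that fit; silently drop the rest."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # newest first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break                    # everything older than this is gone
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# Twenty messages in, the earliest context you gave the model may no longer
# exist in the window at all. And when the session ends, even the kept part
# is discarded.
```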

Where AI Forgetting Is Already Breaking Things

AI coding assistants: Copilot advertises a 400K-token context window, but actual usable prompt capacity is limited to 128K. Cursor loses context mid-task. Developers re-explain their codebase every session because nothing carries forward.

AI customer service: 77% of consumers find chatbots frustrating. The chatbot makes you repeat yourself because it has zero continuity between interactions. Every handoff, every transfer, every new session: total amnesia.

AI companions and characters: 78% of roleplay enthusiasts say memory is their number one frustration. Character.AI forgets after roughly 4,000 tokens. By turn 40, it retains just 21% of what you told it. Context rot turns brilliant scenarios at turn 5 into incoherent messes by turn 25.

AI agents: More than 80% of AI projects fail to reach production (RAND Corporation, 2025). 85% accuracy per step means only 20% success on a ten-step workflow. The math is unforgiving, as the short calculation below shows. AI agent reliability is a memory problem.

Same problem. Every vertical. The models work. The layer underneath them doesn't exist.
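
The agent math deserves to be shown rather than asserted. A two-line check of the compounding, with the numbers from above:

```python
# Per-step accuracy compounds multiplicatively across a workflow:
# P(end-to-end success) = p ** n for n steps that must all succeed.
p = 0.85   # accuracy of a single step
n = 10     # steps in the workflow

print(f"{p ** n:.3f}")   # 0.197 -> roughly 20% end-to-end success
```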

What's the Difference Between AI Memory and AI Continuity?

The industry hasn't drawn this distinction yet:

Memory stores the past. Continuity keeps the right parts alive in the present.

| | Memory (Retrieval) | Continuity (Reconstruction) |
|---|---|---|
| Question it answers | "What did the user say before?" | "What is the living state of the user's situation?" |
| How it works | Search old data, pull back similar chunks | Rebuild the current picture from structured traces |
| Update handling | Append new data alongside old | Revise what's known, mark old state as superseded |
| Disambiguation | Returns all similar results | Knows which narrative you mean |
| Temporal awareness | Timestamps on records | Active vs. resolved, sequence, what's still true |
| What it feels like | "Here are some related past things" | "Here is your situation right now" |

Retrieval says: here are some related past things. Reconstruction says: here is the current state of your situation, including what changed and what matters right now.

That difference separates AI that feels like a search engine with a chat interface from AI that feels like it actually knows you.
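
Here is a hedged sketch of the behavioral gap the table describes. Neither retrieve() nor reconstruct() is a real API; both are stubs used to show the shape of the answer each approach returns.

```python
# Illustrative stubs only: the two behaviors from the table above.

def retrieve(query: str, store: list[dict]) -> list[dict]:
    """Memory/retrieval: return similar past chunks, stale and fresh alike."""
    return [rec for rec in store if query.lower() in rec["text"].lower()]

def reconstruct(subject: str, traces: list[dict]) -> list[dict]:
    """Continuity/reconstruction: rebuild the current state of a situation."""
    superseded = {t["supersedes"] for t in traces if t.get("supersedes")}
    return [
        t for t in traces
        if t["subject"] == subject       # disambiguation: this user's narrative
        and t["id"] not in superseded    # update handling: drop revised facts
        and t["status"] == "active"      # temporal awareness: drop resolved threads
    ]

# retrieve() answers "what did the user say before?" with a pile of chunks.
# reconstruct() answers "what is true for this user right now?"
```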

Why Is Nobody Building an AI Continuity Layer?

Continuity isn't a feature you bolt onto a model. It's infrastructure that sits between the user and the intelligence.

Building it requires solving hard problems simultaneously:

  • Persistence beyond the session
  • Update handling without breaking consistency
  • Temporal ordering
  • Disambiguation across hundreds of users
  • Reconstruction of living situations, not just fact lookup
  • Model independence: working underneath any LLM rather than tied to one vendor
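
One way to feel the scope is to write that list down as an interface. This Protocol is a hypothetical sketch of the surface such a layer would have to expose, not a published design.

```python
from typing import Protocol

class ContinuityLayer(Protocol):
    """Hypothetical sketch: each hard problem above becomes an obligation."""

    def persist(self, user_id: str, interaction: str) -> None:
        """Persistence beyond the session: write structured traces durably."""
        ...

    def update(self, user_id: str, revision: str) -> None:
        """Update handling: revise state without breaking consistency;
        mark superseded state rather than appending alongside it."""
        ...

    def reconstruct(self, user_id: str) -> str:
        """Reconstruction: return the living state of this user's situation,
        temporally ordered and disambiguated from everyone else's."""
        ...

# Note what's absent: nothing here names a specific LLM. The layer sits
# underneath whatever model answers the prompt (model independence).
```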

Nobody is building this because it's an infrastructure problem, not a model problem or a prompting problem or a RAG problem. Hard, unglamorous, and invisible when it works.

But weights are converging. Every frontier model scores within a few points of every other frontier model. The intelligence layer is commoditizing. What isn't commoditizing: the ability to maintain coherent, evolving, personalized state across time.

The model becomes the processor. The continuity layer becomes the irreplaceable thing. AI companions that actually remember you. Customer service bots that know your history. AI coding assistants that understand your codebase across sessions. AI tutors that remember what you struggled with last week. Healthcare AI that carries the patient's story forward. Enterprise AI with institutional memory. Robots that learn from experience.

Every one of those requires the same layer underneath.

The AI Continuity Layer Already Exists

At Kenotic Labs, I built the continuity layer: a write-path-first deterministic architecture that decomposes every interaction into structured traces at write time, and reconstructs situational context at read time. Not retrieval. Reconstruction. Not probabilistic. Deterministic.
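
A toy sketch of the write/read split described above (deliberately simplified; field names and logic are illustrative only, not the reference implementation): the structuring work happens at write time, so read time is a deterministic pass over traces rather than a probabilistic search.

```python
# Toy sketch of a write-path-first split; illustrative only, not the
# reference implementation. Write time does the structuring; read time is
# a deterministic pass over already-structured traces.

def write_path(user_id: str, interaction: str, store: dict) -> None:
    """Write time: decompose the interaction into a structured trace."""
    traces = store.setdefault(user_id, [])
    traces.append({
        "subject": user_id,
        "content": interaction,
        "status": "active",
        "seq": len(traces),   # deterministic temporal ordering
    })

def read_path(user_id: str, store: dict) -> str:
    """Read time: reconstruct situational context. No sampling, no model
    call; the same store always yields the same context."""
    active = [t for t in store.get(user_id, []) if t["status"] == "active"]
    return "\n".join(t["content"] for t in sorted(active, key=lambda t: t["seq"]))
```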

I built ATANT, the first open evaluation framework for AI continuity. 250 narrative stories, 1,835 verification questions. 100% accuracy in isolated mode. 96% at 250-story cumulative scale, with 250 different people's lives coexisting in one system and the right facts retrieved for the right person. Published, open, and citable.

Patents filed. Research paper published (arXiv:2604.06710). Reference implementation built. The 7 properties that any system claiming continuity must satisfy, defined and published.

What Happens Next

You can keep re-explaining yourself to ChatGPT every morning. You can keep telling your AI companion who you are, again. You can keep repeating your issue to the customer service bot. You can keep re-explaining your codebase to Copilot.

Or the industry can build the layer that should have existed from the beginning.

The models aren't getting dumber. They're waiting for the layer that makes them whole.

Follow the research at kenoticlabs.com

Samuel Tanguturi is the founder of Kenotic Labs, building the continuity layer for AI systems. ATANT v1.0 is available on GitHub.

The continuity layer is the missing layer between AI interaction and AI relationship.

Kenotic Labs builds this layer.
