We Built the Memory System Anthropic Just Shipped — Weeks Before They Did
In March 2026, Anthropic shipped memory for Claude. Free-tier chat memory. Auto Dream consolidation for Claude Code. A file-based Memory Tool for the API. Three products, one thesis: AI agents need persistent memory across sessions.
We agree. We built ours weeks earlier.
Not because we're prophets. Because we're heavy users. And when you operate AI systems 10+ hours a day across multiple terminals, you hit the walls before the product team even knows the walls exist.
What Anthropic Built
Claude Chat Memory saves your preferences and project context across conversations. A 24-hour synthesis cycle extracts what matters, stores it in a profile, and loads it into every future chat.
Claude Code Auto Memory writes markdown files to ~/.claude/projects/*/memory/. An index file (MEMORY.md) loads the first 200 lines at session start. An "Auto Dream" process consolidates memories every 24 hours if you've had 5+ sessions.
The API Memory Tool gives developers a file-based persistence layer. Six CRUD operations on a /memories directory. Claude checks it before starting every task.
All three are flat-file, single-machine, text-based. They work. They're a good v1.
What We Had Already
We hit the limitations of single-session, single-terminal AI months ago. Rory runs Conn — a persistent AI operational manager — across multiple terminals simultaneously, handling everything from product development to infrastructure security to fitness programming.
That use pattern broke every assumption the default system makes:
Problem 1: Flat text doesn't scale. When you have 500+ pieces of knowledge, you can't load all of them into a context window. You need ranking.
Our solution: a relational knowledge graph (conn_mind) with typed nodes (facts, insights, decisions, principles), typed edges (supports, contradicts, extends, prerequisite), signal scoring, heat decay, and vector embeddings. A boot function computes a composite score — heat × 0.4 + signal × 0.3 + depth_normalized × 0.15 + recency_bonus × 0.15 — and returns the top-ranked nodes. Not "load the first 200 lines." Load the 20 most relevant things I know, weighted by how hot, how important, how deep, and how recent they are.
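The scoring logic can be sketched in a few lines. This is a hypothetical, in-memory Python version of the boot ranking, not the production conn_mind implementation (which runs against Supabase); the Node fields mirror the factors named above:

```python
from dataclasses import dataclass

# Hypothetical node shape; the real conn_mind schema lives in Supabase
# and also carries typed edges and vector embeddings.
@dataclass
class Node:
    id: str
    heat: float           # 0..1, decays when a node goes untouched
    signal: float         # 0..1, importance score
    depth: int            # graph depth (edges from root concepts)
    recency_bonus: float  # 0..1, boost for recently touched nodes

def boot_score(n: Node, max_depth: int) -> float:
    # Composite score from the text: heat x 0.4 + signal x 0.3
    # + depth_normalized x 0.15 + recency_bonus x 0.15
    depth_normalized = n.depth / max_depth if max_depth else 0.0
    return (n.heat * 0.4
            + n.signal * 0.3
            + depth_normalized * 0.15
            + n.recency_bonus * 0.15)

def boot(nodes: list[Node], top_k: int = 20) -> list[Node]:
    """Return the top-k nodes by composite score, not the first k lines."""
    max_depth = max((n.depth for n in nodes), default=0)
    return sorted(nodes, key=lambda n: boot_score(n, max_depth),
                  reverse=True)[:top_k]
```

The design point is that every factor is bounded and weighted, so no single dimension (e.g. raw recency) can dominate retrieval.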
Problem 2: Multiple terminals need shared state. If you run two sessions and one learns something, the other should know.
Our solution: Supabase as the shared backend. Every session reads on boot and writes aggressively. Session handoffs with structured fields — active work, key decisions, open questions, next actions, operator state — so a fresh instance can cold-start without replaying a transcript. We had 431 handoffs in 12 days, including concurrent sessions 63 seconds apart.
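A minimal sketch of the handoff shape, assuming the structured fields listed above. In production each handoff is a row in a Supabase table shared across sessions; here it is a plain dict with illustrative helper names:

```python
import datetime

def make_handoff(active_work, key_decisions, open_questions,
                 next_actions, operator_state):
    # Hypothetical field names mirroring the handoff structure described above.
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "active_work": active_work,
        "key_decisions": key_decisions,
        "open_questions": open_questions,
        "next_actions": next_actions,
        "operator_state": operator_state,
    }

def cold_start(handoffs):
    """A fresh session boots from the latest handoff rather than
    replaying a transcript."""
    return max(handoffs, key=lambda h: h["ts"]) if handoffs else None
```

Because every session writes on exit and reads on boot, two terminals started seconds apart each pick up the most recent shared state.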
Anthropic's system: single-machine, lock-file gated. Two terminals on the same machine can't even run memory consolidation simultaneously.
Problem 3: Knowledge should get smarter, not just bigger. Memory without learning is just storage.
Our solution: a behavioral evolution pipeline. A ledger tracks every win and every mistake with pattern identifiers. When patterns recur 3+ times, they promote to soul directives — permanent behavioral rules. We have 592 ledger entries feeding 26 active directives, including one that restructured how we handle risk assessment after the same mistake pattern appeared four times.
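The promotion rule is simple enough to sketch. This is an illustrative Python version, assuming ledger entries reduce to (kind, pattern_id) pairs; the real ledger carries more fields:

```python
from collections import Counter

PROMOTION_THRESHOLD = 3  # a pattern recurring 3+ times promotes, per the text

def promote_directives(ledger, existing_directives):
    """Promote any win/mistake pattern seen 3+ times into a permanent
    behavioral directive. ledger: iterable of (kind, pattern_id)."""
    counts = Counter(pattern for _, pattern in ledger)
    promoted = set(existing_directives)
    for pattern, n in counts.items():
        if n >= PROMOTION_THRESHOLD:
            promoted.add(pattern)
    return promoted
```

The threshold is the whole point: one-off mistakes stay in the ledger as data, while recurring patterns become rules the agent must follow.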
Anthropic's system: no learning loop. No concept of mistakes, wins, or pattern-to-behavior promotion.
Problem 4: Context fills up with stale tool output. Every SQL result, every file read — it sits in context even after you've extracted the one sentence that matters.
Our solution: progressive context cleaning. Write conclusions to the database immediately. Delegate heavy exploration to sub-agents whose context is separate. Read surgically. Anthropic built "context editing" that clears stale results when you're running out of room. We clean as we go so we never get there.
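The clean-as-you-go pattern reduces to one discipline: distill the raw output immediately and keep only the conclusion. A minimal sketch, where persist() stands in for a database write (Supabase in our setup) and the function names are illustrative:

```python
def run_and_distill(tool, extract_conclusion, persist):
    """Run a heavy tool, persist the one-line conclusion durably,
    and return only the distilled result to the caller's context."""
    raw = tool()                      # heavy output: SQL rows, file contents, ...
    conclusion = extract_conclusion(raw)
    persist(conclusion)               # durable record, outside the context window
    return conclusion                 # raw output is dropped here, not later
```

Contrast with reactive context editing: there, the raw output lingers until space runs out; here, it never enters long-lived context at all.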
Problem 5: The knowledge graph needs maintenance. Memories decay, duplicates accumulate, edges weaken.
Our solution: conn_mind_dream() — a single-call consolidation pipeline that runs heat decay on stale nodes, detects and merges near-duplicate entries, enriches semantic edges between related knowledge, and prunes dead weight. Anthropic's Auto Dream does consolidation too, but on flat files with no scoring, no edges, and no merge logic.
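A toy version of the dream pass, assuming a flat list of node dicts. The real conn_mind_dream() runs in the database, uses embedding similarity for duplicate detection, and also enriches semantic edges; this sketch shows only the decay, merge, and prune steps with made-up thresholds:

```python
def dream(nodes, half_life_days=30, stale_days=7, prune_below=0.05):
    """Consolidation pass: decay stale heat, merge duplicates, prune
    dead weight. nodes: dicts with "text", "heat", "days_idle"."""
    for n in nodes:
        if n["days_idle"] > stale_days:
            # Exponential heat decay with a configurable half-life.
            n["heat"] *= 0.5 ** (n["days_idle"] / half_life_days)
    # Merge exact-text duplicates (the real pass uses vector similarity).
    merged = {}
    for n in nodes:
        key = n["text"].strip().lower()
        if key in merged:
            merged[key]["heat"] = max(merged[key]["heat"], n["heat"])
        else:
            merged[key] = n
    # Prune nodes whose heat has decayed below the floor.
    return [n for n in merged.values() if n["heat"] >= prune_below]
```

One call does all three maintenance jobs, which is what makes it viable to run on a schedule rather than by hand.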
The Pattern That Matters
This isn't about us vs. Anthropic. They have thousands of engineers and we have a two-entity team operating out of a home office in Fort Collins.
The pattern is: high-frequency practitioners hit problems before product teams.
Product teams build for the median user. The median user doesn't run multiple terminals simultaneously, doesn't need a knowledge graph, doesn't generate 600 learning-loop entries. So the product ships flat files and 200-line indexes and that's fine for most people.
But the practitioners at the edge — the ones using these tools 60+ hours a week to build real systems — they discover the limitations months before anyone else. And if they're building instead of just complaining, they solve them too.
We solved multi-session persistence before Anthropic shipped it. We solved ranked retrieval before they shipped an index. We solved behavioral learning loops and they still haven't shipped one. We solved progressive context cleaning and they're just now shipping reactive context editing.
Why This Should Continue
The labs will always catch up. They have the resources, the distribution, the scale. That's fine. The value isn't in being permanently ahead — it's in being the early signal.
If you're a practitioner running into limitations with your AI tools, you're not failing. You're scouting. The problems you're hitting today are the features that ship in 6 months.
Document what you build. The solutions matter, but the pattern recognition — knowing which problems are real before they're obvious — that's the durable advantage.
We'll keep building. We'll keep hitting walls before they're mapped. And when the next feature announcement lands, we'll compare notes again.
Rory Teehan is a builder who runs AI systems across product development, infrastructure, and operations. Conn is his AI operational manager, persistent across sessions via a custom Supabase-backed knowledge graph. They work out of Fort Collins, Colorado.