The Context Stack: How I Give Claude Memory Across 27 Sites and 6 Businesses

The most common question I get from people who read the Split-Brain Architecture piece is some version of: how does Claude actually know what it’s working on? If you are managing 27 sites, 6 businesses, and hundreds of ongoing tasks, how do you avoid spending the first ten minutes of every session re-explaining your entire operation to an AI that has no memory of yesterday?

The answer is what I call the Context Stack. It is not a single file or a single tool — it is a layered system where each layer handles a different time horizon of memory, and Claude reads exactly what it needs for the task at hand without being overwhelmed by everything else.

The Problem With AI Memory

Claude does not have persistent memory across sessions by default. Every conversation starts blank. For someone running a simple use case — drafting an email, summarizing a document — this is fine. For someone running a content network across 27 WordPress sites with different brand voices, different SEO strategies, different clients, and different publishing schedules, a blank slate every session is an operational catastrophe.

The naive solution is to paste a giant context document at the start of every conversation. I tried this. It doesn’t work. Not because Claude can’t read it — it can — but because a 5,000-word context dump at the start of every session is cognitively expensive for the human, slows down the first response, and buries the relevant information under a pile of irrelevant information.

The right solution is a stack: different layers of context loaded at different times, for different purposes.

Layer One — The Global Layer (Always Loaded)

The global layer is the context that is true across everything I do, all the time. It lives in a CLAUDE.md file at the workspace root and in a persistent system prompt inside Claude’s project settings.

What goes here: my name, my email, the fact that I manage a network of WordPress sites, the Notion workspace structure, the proxy URL and authentication pattern for WordPress API calls, and a handful of behavioral rules that apply universally — brevity preferences, how I want work logged, what “done” means to me.

What does not go here: anything site-specific, client-specific, or task-specific. The global layer is 200 lines maximum. Anthropic’s own guidance on CLAUDE.md length is right — longer files reduce adherence. I treat the 200-line limit as a hard constraint, not a guideline.
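A global layer in this spirit might be skeletoned like the sketch below. The section names and rules are illustrative, not the actual file, and the proxy URL is left as a placeholder:

```markdown
# CLAUDE.md (global layer, 200-line hard limit)

## Identity
- Operator: Will, will@tygartmedia.com
- Role: manages a network of WordPress sites via Notion + Claude

## Infrastructure
- All WordPress API calls go through the proxy at <PROXY_URL>, using the auth pattern below
- Notion workspace: one database per client, plus the Second Brain database

## Behavioral rules (apply everywhere)
- Be brief. No preamble, no recap of the request.
- Log completed work to Notion before declaring anything finished.
- "Done" means published, logged, and verified, not merely drafted.

## What does NOT belong in this file
- Anything site-specific, client-specific, or task-specific (that lives in the project layer)
```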

Layer Two — The Site Layer (Loaded Per Project)

Each WordPress site I manage has its own Claude Project, and each project has its own knowledge files. These files contain everything Claude needs to work on that specific site without me having to explain it: the brand voice, the target audience, the top-performing content, the internal linking structure, the credentials, the publishing cadence, and the current content roadmap.

I generate these files programmatically when I onboard a new site. They pull from the WordPress REST API, the site’s GA4 data, and the Notion database for that client. A site knowledge file for an established site runs about 800–1,200 words. Claude reads it at the start of any session for that project and immediately knows the difference between how to write for a Houston restoration contractor versus a New York luxury lender.
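A minimal sketch of the generation step, assuming the WordPress, GA4, and Notion data has already been fetched into a simple profile object. The `SiteProfile` fields and the `build_knowledge_file` helper are illustrative, not the actual onboarding script:

```python
from dataclasses import dataclass, field

@dataclass
class SiteProfile:
    """Inputs that, in the real pipeline, would be pulled from the
    WordPress REST API, GA4, and the client's Notion database."""
    name: str
    brand_voice: str
    audience: str
    cadence: str
    top_posts: list = field(default_factory=list)  # (title, monthly_views) pairs

def build_knowledge_file(site: SiteProfile) -> str:
    """Assemble a per-project knowledge file (800-1,200 words in practice)."""
    lines = [
        f"# Site knowledge: {site.name}",
        "",
        f"**Brand voice:** {site.brand_voice}",
        f"**Audience:** {site.audience}",
        f"**Publishing cadence:** {site.cadence}",
        "",
        "## Top-performing content",
    ]
    # List posts highest-traffic first so Claude sees what works on this site
    for title, views in sorted(site.top_posts, key=lambda p: -p[1]):
        lines.append(f"- {title} ({views:,} views/mo)")
    return "\n".join(lines)
```

The resulting markdown gets dropped into the project's knowledge files, so the format only has to be readable, not machine-parseable.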

The site layer is why I can switch from working on a restoration contractor to a luxury lender to a live comedy platform in the same afternoon without losing context. The context travels with the project, not with me.

Layer Three — The Task Layer (Loaded On Demand)

The task layer is ephemeral. It is the specific context for the thing I am doing right now: the article brief, the GA data from this session, the list of posts that need refreshing, the client’s feedback on last week’s content.

This layer lives nowhere permanent. I paste it into the conversation, Claude uses it, and when the session ends it is gone. The task layer is intentionally disposable. If it matters beyond this session, it gets promoted to the site layer or the global layer. If it doesn’t matter beyond this session, it doesn’t need to be stored.

Most AI users try to make everything permanent. The discipline of the context stack is knowing what deserves permanence and what doesn’t.

Layer Four — The Second Brain (Asynchronous)

The second brain layer is Notion. It is not loaded into Claude’s context window directly — it is queried via the Notion MCP when Claude needs specific information.

What lives here: every session log, every publish log, every piece of competitive intelligence, every client preference that has emerged over time, the Promotion Ledger for autonomous behaviors, the Second Brain database of extracted knowledge from prior sessions.

The key distinction: Notion is not context I push into Claude. It is context Claude pulls from Notion when it needs it. The MCP connection means Claude can search the Second Brain mid-session, find a relevant prior session log, and use it — without me having to remember that the prior session happened.
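Under the hood, a pull like this bottoms out in a query against Notion's public search endpoint. Here is a minimal sketch of the request the MCP effectively issues; the token and the `build_search_request` helper are illustrative, while the endpoint, headers, and body shape follow Notion's documented API:

```python
import json

NOTION_VERSION = "2022-06-28"  # a stable Notion API version

def build_search_request(query: str, token: str) -> dict:
    """Shape of a Notion search call, roughly what the MCP issues
    when Claude pulls prior context mid-session."""
    return {
        "method": "POST",
        "url": "https://api.notion.com/v1/search",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Notion-Version": NOTION_VERSION,
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "query": query,
            # Restrict results to pages (session logs, notes), not databases
            "filter": {"property": "object", "value": "page"},
            # Most recently edited first: recent sessions usually matter most
            "sort": {"direction": "descending", "timestamp": "last_edited_time"},
        }),
    }
```

The human never sees this request; Claude decides when to issue it, which is exactly what makes the layer feel like memory rather than a lookup chore.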

This is the layer that makes the system feel like it has long-term memory even though it doesn’t. Claude doesn’t remember. But it can look things up, and the things worth looking up are stored.

What This Looks Like In Practice

A typical session for me starts with a project context already loaded (site layer). Within thirty seconds Claude knows which site it’s working on, what voice to use, and what the current priorities are. I drop in the task layer — a GA report, a list of post IDs, a brief — and we are working within two minutes of starting.

When something important happens — a new client preference, a site credential change, a strategy decision — I say “log this to Notion” and Claude writes it to the Second Brain. I don’t maintain the second brain manually. Claude maintains it as a byproduct of doing the work.
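A "log this to Notion" write ends up as a page-creation call against the Second Brain database. A hedged sketch of the payload is below; the property names ("Name", "Note", "Date") must match the target database's actual schema, and `build_session_log` is an illustrative helper, not part of the real system:

```python
import datetime

def build_session_log(database_id: str, title: str, note: str) -> dict:
    """Payload for POST https://api.notion.com/v1/pages: one Second Brain entry.
    Property names here are illustrative and must mirror the database schema."""
    today = datetime.date.today().isoformat()
    return {
        "parent": {"database_id": database_id},
        "properties": {
            # "title"-type property: the entry's headline in the database
            "Name": {"title": [{"text": {"content": title}}]},
            # "rich_text"-type property: the detail worth remembering
            "Note": {"rich_text": [{"text": {"content": note}}]},
            # "date"-type property: when this was learned
            "Date": {"date": {"start": today}},
        },
    }
```

The point of the sketch is the division of labor: the human says "log this," and the structured write is Claude's job.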

When I need to recall something from months ago — what we decided about the internal linking structure for a specific site, what the client said about their brand voice in March — Claude searches Notion and finds it. The retrieval is imperfect, but it is dramatically better than my own memory.

The Honest Constraints

This system took months to build and it is still not finished. The site knowledge files need updating when strategies change and I don’t always remember to update them. The Second Brain has gaps where sessions weren’t logged properly. The global CLAUDE.md drifts toward bloat and needs periodic pruning.

The bigger constraint is that this architecture assumes you are operating at a certain scale — multiple sites, multiple clients, recurring workflows. If you are running one site for one business, the overhead of building and maintaining this stack is probably not worth it. A well-written CLAUDE.md and a single Notion page of context will get you most of the way there.

But if you are scaling past three or four sites, or if you find yourself re-explaining the same context in every session, the stack pays for itself quickly. The ten minutes you spend building a site knowledge file saves you two minutes per session indefinitely.

The goal is not to give Claude everything. The goal is to give Claude exactly what it needs, when it needs it, at the right layer of permanence.

Building Your Own Context Stack?

Email me what you are managing and I will tell you which layers you actually need.

Most people over-engineer the global layer and under-invest in the site layer. Five minutes of conversation usually fixes it.

Email Will → will@tygartmedia.com
