The question I get most often from restoration contractors who’ve seen what we build is some version of: how is this possible with one person?
Twenty-seven WordPress sites. Hundreds of articles published monthly. Featured images generated and uploaded at scale. Social media content drafted across a dozen brands. SEO, schema, internal linking, taxonomy — all of it maintained, all of it moving.
The answer is an architecture I’ve come to call Split Brain. It’s not a software product. It’s a division of cognitive labor between two types of intelligence — one optimized for live strategic thinking, one optimized for high-volume execution — and getting that division right is what makes the whole system possible.
The Two Brains
The Split Brain architecture has two sides.
The first side is Claude — Anthropic’s AI — running in a live conversational session. This is where strategy happens. Where a new content angle gets developed, interrogated, and refined. Where a client site gets analyzed and a priority sequence gets built. Where the judgment calls live: what to write, why, for whom, in what order, with what framing. Claude is the thinking partner, the editorial director, the strategist who can hold the full context of a client’s competitive situation and make nuanced recommendations in real time.
The second side is Google Cloud Platform — specifically Vertex AI running Gemini models, backed by Cloud Run services, Cloud Storage, and BigQuery. This is where execution happens at volume. Bulk article generation. Batch API calls that cut cost in half for non-time-sensitive work. Image generation through Vertex AI’s Imagen. Automated publishing pipelines that can push fifty articles to a WordPress site while I’m working on something else entirely.
The two sides don’t do the same things. That’s the whole point.
Why Splitting the Work Matters
The instinct when you first encounter powerful AI tools is to use one thing for everything. Pick a model, run everything through it, see what happens.
This produces mediocre results at high cost. The same model that’s excellent for developing a nuanced content strategy is overkill for generating fifty FAQ schema blocks. The same model that’s fast and cheap for taxonomy cleanup is inadequate for long-form strategic analysis. Using a single tool indiscriminately means you’re either overpaying for bulk work or under-resourcing the work that actually requires judgment.
The Split Brain architecture routes work to the right tool for the job:
- Haiku (fast, cheap, reliable): taxonomy assignment, meta description generation, schema markup, social media volume, AEO FAQ blocks — anything where the pattern is clear and the output is structured
- Sonnet (balanced): content briefs, GEO optimization, article expansion, flagship social posts — work that requires more nuance than pure pattern-matching but doesn’t need the full strategic layer
- Opus / Claude live session: long-form strategy, client analysis, editorial decisions, anything where the output depends on holding complex context and making judgment calls
- Batch API: any job over twenty articles that isn’t time-sensitive — fifty percent cost reduction, same quality, runs in the background
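The routing rules above can be sketched as a simple lookup. This is an illustrative reconstruction, not the production configuration: the task names, the twenty-article threshold, and the tier labels are taken from the description above, but the exact mapping is an assumption.

```python
# Illustrative sketch of the routing table described above.
# Task names and model tiers are assumptions drawn from the prose,
# not the live configuration.

BATCH_THRESHOLD = 20  # jobs larger than this, if not time-sensitive, go to the Batch API

ROUTES = {
    "taxonomy": "haiku",
    "meta_description": "haiku",
    "schema_markup": "haiku",
    "social_volume": "haiku",
    "faq_block": "haiku",
    "content_brief": "sonnet",
    "geo_optimization": "sonnet",
    "article_expansion": "sonnet",
    "flagship_social": "sonnet",
    "strategy": "opus",
    "client_analysis": "opus",
}

def route(task_type: str, article_count: int = 1, time_sensitive: bool = True) -> dict:
    """Pick a model tier and delivery mode for a job."""
    model = ROUTES.get(task_type, "sonnet")  # unknown work defaults to the balanced tier
    use_batch = article_count > BATCH_THRESHOLD and not time_sensitive
    return {"model": model, "batch": use_batch}
```

So a fifty-article FAQ run with no deadline routes to the cheap tier via batch, while a strategy session always lands on the heavyweight model regardless of volume.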
The model routing isn’t arbitrary. It was validated empirically across dozens of content sprints before it became the default. The wrong routing is expensive, slow, or both.
WordPress as the Database Layer
Most WordPress management tools treat the CMS as a front-end interface — you log in, click around, make changes manually. That mental model caps your throughput at whatever a human can do through a browser in a workday.
In the Split Brain architecture, WordPress is a database. Every site exposes a REST API. Every content operation — publishing, updating, taxonomy assignment, schema injection, internal link modification — happens programmatically via direct API calls, not through the admin UI.
This changes the throughput ceiling entirely. Publishing twenty articles through the WordPress admin takes most of a day. Publishing twenty articles via the REST API, with all metadata, categories, tags, schema, and featured images attached, takes minutes. The human time is in the strategy and quality review — not in the clicking.
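A single publish call against the core REST endpoint looks roughly like this. The `/wp-json/wp/v2/posts` route is standard WordPress; everything else here (field values, the auth header, the helper names) is a minimal sketch, assuming WordPress application-password Basic auth rather than whatever the actual pipeline uses.

```python
import json
import urllib.request

def build_post_payload(title, content, categories, tags, featured_media=None, status="publish"):
    """Assemble the body for a WP REST API post-creation call.

    Categories and tags are term IDs, not names; featured_media is a
    media attachment ID from a prior upload.
    """
    payload = {
        "title": title,
        "content": content,
        "status": status,
        "categories": categories,
        "tags": tags,
    }
    if featured_media is not None:
        payload["featured_media"] = featured_media
    return payload

def publish(site_url, auth_header, payload):
    """POST one article to the standard core posts endpoint.

    auth_header is e.g. a Basic credential built from a WordPress
    application password.
    """
    req = urllib.request.Request(
        f"{site_url}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": auth_header},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["id"]  # WordPress returns the new post ID
```

Twenty articles is then a twenty-iteration loop over `publish`, not twenty trips through the admin UI.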
Twenty-seven sites across different hosting environments required solving the routing problem: some sites on WP Engine behind Cloudflare, one on SiteGround with strict IP rules, several on GCP Compute Engine. The solution is a Cloud Run proxy that handles authentication and routing for the entire network, with a dedicated publisher service for the one site that blocks all external traffic. The infrastructure complexity is solved once and then invisible.
Notion as the Human Layer
A system that runs at this velocity generates a lot of state: what was published where, what’s scheduled, what’s in draft, what tasks are pending, which sites have been audited recently, which content clusters are complete and which have gaps.
Notion is where all of that state lives in human-readable form. Not as a project management tool in the traditional sense — as an operating system. Six relational databases covering entities, contacts, revenue pipeline, actions, content pipeline, and a knowledge lab. Automated agents that triage new tasks, flag stale work, surface content gaps, and compile weekly briefings without being asked.
The architecture means I’m never managing the system — the system manages itself, and I review what it surfaces. The weekly synthesizer produces an executive briefing every Sunday. The triage agent routes new items to priority queues automatically. The content guardian flags anything that’s close to a publish deadline and not yet scheduled.
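The kind of rule those agents apply can be sketched as follows. The field names, thresholds, and queue labels here are invented for illustration; the live schema is not described in this post.

```python
from datetime import datetime, timedelta

def triage(item: dict, now: datetime) -> str:
    """Route a task to a priority queue.

    Hypothetical fields: publish_deadline, status, last_touched.
    Thresholds (3 days, 14 days) are assumptions, not the live rules.
    """
    deadline = item.get("publish_deadline")
    if deadline and item.get("status") != "scheduled":
        if deadline - now <= timedelta(days=3):
            return "urgent"  # content-guardian case: deadline close, not yet scheduled
    if item.get("last_touched") and now - item["last_touched"] > timedelta(days=14):
        return "stale"       # flag work nobody has moved in two weeks
    return "normal"
```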
Human attention goes to decisions, not to administration.
What This Looks Like in Practice
A typical content sprint for a client site starts with a live Claude session: what does this site need, in what order, targeting which keywords, with what persona in mind. That session produces a structured brief — JSON, not prose — that seeds everything downstream.
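For a sense of shape, a brief might look something like the structure below. Every field name and value here is illustrative; the post only says the brief is structured JSON, not what its schema is.

```python
import json

# Hypothetical shape of a sprint brief; field names and values are
# assumptions, not the actual schema.
brief = {
    "site": "example-restoration.com",
    "persona": "homeowner after water damage, urgent, low trust",
    "priority_sequence": [
        {
            "slug": "water-damage-drywall-repair",
            "keyword": "water damaged drywall repair",
            "angle": "cost and timeline transparency",
            "word_count": 1500,
            "internal_links": ["water-damage-restoration-process"],
        },
    ],
    "taxonomy": {"category": "Water Damage", "tags": ["drywall", "repair-costs"]},
}

print(json.dumps(brief, indent=2))  # serialized form handed to the GCP side
```

Because the brief is machine-readable, every downstream stage (generation, imaging, publishing, social) can consume it without a human re-explaining the strategy.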
The brief goes to GCP. Gemini generates the articles. Imagen generates the featured images. The batch publisher pushes everything to WordPress with full metadata attached. The social layer picks up the published URLs and drafts platform-specific posts for each piece. The internal link scanner identifies connections to existing content and queues a linking pass.
My involvement during execution is monitoring, not doing. The doing is automated. The judgment — what to build, why, and whether the output clears the quality bar — stays with the human layer.
This is what makes the throughput possible. Not working harder or faster. Designing the system so that the parts that require human judgment get human judgment, and the parts that don’t get automated at whatever volume the infrastructure supports.
The Honest Constraints
The Split Brain architecture is not a magic box. It has real constraints worth naming.
Quality gates are essential. High-volume automated content production without rigorous pre-publish review produces high-volume errors. Every content sprint runs through a quality gate that checks for unsourced statistical claims, fabricated numbers, and anything that reads like the model invented a fact. This is non-negotiable — the efficiency gains from automation are worthless if they introduce errors that damage a client’s credibility.
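One simplified version of such a check: scan generated text for statistical claims that cite no source nearby. The post describes the gate only at a high level, so the heuristics below (the regex, the source-hint list) are invented for illustration.

```python
import re

# Naive sketch of one quality-gate check: flag sentences containing a
# percentage that mention no source. Heuristics are assumptions.
STAT_PATTERN = re.compile(r"\b\d+(\.\d+)?\s*(%|percent\b)", re.IGNORECASE)
SOURCE_HINTS = ("according to", "study", "report", "survey", "source")

def unsourced_stat_sentences(text: str) -> list:
    """Return sentences with a statistic but no nearby source hint."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if STAT_PATTERN.search(sentence) and not any(
            hint in sentence.lower() for hint in SOURCE_HINTS
        ):
            flagged.append(sentence)
    return flagged
```

A real gate would be broader (fabricated numbers, invented citations, tone drift), but the principle is the same: the check runs on every sprint before anything publishes.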
Architecture decisions made early are expensive to change later. The taxonomy structure, the internal link architecture, the schema conventions — getting these right before publishing at scale is substantially easier than retrofitting them across hundreds of existing posts. The speed advantage of the system only compounds if the foundation is solid.
And the system requires maintenance. Models improve. APIs change. Hosting environments add new restrictions. What works today for routing traffic to a specific site may need adjustment next quarter. The infrastructure overhead is real, even if it’s substantially lower than managing a human team of equivalent output.
None of these constraints make the architecture less viable. They make it more important to design it deliberately — to understand what the system is doing, why each component is there, and what would break if any piece of it changed.
That’s the Split Brain. Two kinds of intelligence, clearly divided, doing the work each is actually suited for.
Tygart Media is built on this architecture. If you’re a service business thinking about what an AI-native content operation could look like for your vertical, the conversation starts with understanding what requires judgment and what doesn’t.