The Problem Nobody Talks About
Managing one WordPress site is straightforward. Managing sixteen is a logistics nightmare — unless you build the infrastructure to treat them as a single organism. That is exactly what I did, and every week I run what I call a content swarm: a coordinated optimization pass across every site in the portfolio, from a cold storage facility in Madera to a luxury lending platform in Beverly Hills.
The swarm is not a metaphor. It is a literal sequence of automated audits, content refreshes, taxonomy fixes, schema injections, and interlink passes that hit every site in rotation. The output is a stack of site-specific optimization reports that tell me exactly what changed, what improved, and what needs human attention.
The Architecture Behind the Swarm
Every site connects through a single Cloud Run proxy on GCP. One endpoint, one authentication layer, sixteen different WordPress installations behind it. The proxy handles credential routing, rate limiting, and error logging. No site talks directly to the internet during optimization — everything flows through the proxy.
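The credential-routing idea can be sketched in a few lines. Everything here is illustrative: the proxy URL, the site slugs, and the registry shape are assumptions, not the real deployment, and WordPress Application Passwords are assumed as the auth mechanism.

```python
import base64

# Hypothetical Cloud Run endpoint; the real proxy URL is not public.
PROXY_BASE = "https://wp-proxy-example.a.run.app"

# Assumed registry shape: one credential set per site slug.
SITE_REGISTRY = {
    "madera-cold-storage": {"user": "svc_audit", "app_password": "xxxx xxxx"},
    "bh-luxury-lending":   {"user": "svc_audit", "app_password": "yyyy yyyy"},
    # ...fourteen more sites
}

def build_proxied_request(site_slug: str, wp_path: str) -> dict:
    """Resolve a site's credentials and return a request spec for the
    proxy to execute. No site is ever contacted directly."""
    creds = SITE_REGISTRY[site_slug]
    token = base64.b64encode(
        f"{creds['user']}:{creds['app_password']}".encode()
    ).decode()
    return {
        "url": f"{PROXY_BASE}/{site_slug}{wp_path}",
        "headers": {"Authorization": f"Basic {token}"},
    }
```

The point of the spec-building step is that rate limiting and logging can happen in one place before any HTTP call goes out.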
Each site has a registered credential set in a unified skill registry. When the swarm kicks off, it pulls the site list, authenticates through the proxy, and begins the audit sequence. The sequence is always the same: fetch all posts, score content health, identify thin pages, check taxonomy coverage, verify schema markup, scan internal links, and flag orphan pages.
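The audit steps above can be sketched as a single pass over a site's posts. The thin-content threshold and the field names are assumptions for illustration; the real scoring rules are not shown here.

```python
THIN_WORD_COUNT = 300  # assumed threshold for flagging thin content

def audit_posts(posts: list[dict]) -> dict:
    """Run the fixed audit sequence over one site's posts:
    flag thin pages, missing schema markup, and orphan pages."""
    report = {"thin": [], "no_schema": [], "orphans": []}
    # A page is an orphan if no other post links to it internally.
    inbound = {link for p in posts for link in p.get("internal_links", [])}
    for p in posts:
        if p["word_count"] < THIN_WORD_COUNT:
            report["thin"].append(p["id"])
        if not p.get("schema_types"):
            report["no_schema"].append(p["id"])
        if p["id"] not in inbound:
            report["orphans"].append(p["id"])
    return report
```

Because the sequence is identical for every site, the same function runs sixteen times with different inputs and produces directly comparable reports.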
The results land in Notion. Every site gets its own optimization log entry with post-level detail. I can see at a glance which sites are healthy, which need content, and which have technical debt piling up.
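A per-site log entry can be assembled as a Notion `pages.create` payload. The database property names here are assumptions about the log schema, not the actual Notion setup.

```python
def notion_log_entry(site: str, report: dict, database_id: str) -> dict:
    """Build a Notion pages.create payload summarizing one audit run.
    Property names ("Site", "Thin pages", ...) are illustrative."""
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Site": {"title": [{"text": {"content": site}}]},
            "Thin pages": {"number": len(report["thin"])},
            "Schema gaps": {"number": len(report["no_schema"])},
            "Orphans": {"number": len(report["orphans"])},
        },
    }
```

Rolling the post-level detail up to counts at the top of each entry is what makes the at-a-glance health view possible.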
What a Typical Swarm Week Looks Like
Monday: trigger the audit across all sixteen sites. The agent pulls every published post, scores it against the SEO/AEO/GEO framework, and generates a prioritized action list. By Monday afternoon, I have sixteen reports sitting in Notion.
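Prioritization of the Monday action list can be sketched as a sort: the biggest health gap on the highest-traffic pages rises to the top. The weighting is an illustrative assumption, not the actual SEO/AEO/GEO framework.

```python
def prioritize(audit_rows: list[dict]) -> list[dict]:
    """Rank audit findings so the largest score gap on the
    highest-traffic page comes first. Weights are illustrative."""
    return sorted(
        audit_rows,
        key=lambda r: (100 - r["health_score"]) * r["monthly_traffic"],
        reverse=True,
    )
```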
Tuesday through Thursday: execute the highest-priority actions. This might mean running a full refresh on ten posts across three sites, injecting FAQ schema on twenty pages, or publishing a batch of new articles to fill content gaps. The agent handles the execution. I handle the editorial judgment calls.
Friday: verification pass. Re-audit the sites that received changes, compare before-and-after scores, and log the delta. This closes the loop and gives me a week-over-week trend line for every property in the portfolio.
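The Friday delta calculation is simple in principle: re-score, subtract the Monday baseline, and log only the posts present in both audits. A minimal sketch, assuming scores are keyed by post ID:

```python
def score_delta(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Week-over-week change per post; posts missing from either
    audit (new or deleted) are excluded from the comparison."""
    return {
        pid: round(after[pid] - before[pid], 1)
        for pid in before
        if pid in after
    }
```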
Why Most Agencies Cannot Do This
The barrier is not talent. It is infrastructure. Most agencies manage sites one at a time, with separate logins, separate dashboards, separate reporting tools. They context-switch between properties all day and lose hours to authentication friction alone.
The swarm model eliminates context switching entirely. One command center, one proxy, one agent, sixteen sites. The agent does not care whether it is optimizing a restoration company or a comedy streaming platform. It follows the same protocol, applies the same standards, and logs to the same database.
This is what a scalable content operation actually looks like. Not more people. Not more tools. One system that treats every site as a node in a network.
The Sites in the Swarm
The portfolio spans wildly different verticals: disaster restoration, luxury asset lending, cold storage logistics, comedy entertainment, automotive training, storm damage mitigation, interior design, and more. Each site has its own content strategy, its own keyword targets, its own audience. But the optimization infrastructure is identical across all of them.
That uniformity is the competitive advantage. When I discover a new optimization technique — say, a better way to structure FAQ schema for voice search — I can deploy it across all sixteen sites in a single session. The improvement compounds across the entire portfolio simultaneously.
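The FAQ schema example from above is a good illustration of portfolio-wide deployment: one builder function, applied to every site in a single pass. The function below emits standard schema.org FAQPage JSON-LD; the idea that it runs once per site is the sketch, not the actual deployment code.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build a schema.org FAQPage JSON-LD block from (question,
    answer) pairs, ready to inject into a page's head."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(data, indent=2)
```

Improve the builder once, and the next swarm cycle carries the improvement to every page on every site that uses it.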
The Compounding Effect
After twelve weeks of swarm cycles, the aggregate improvement is staggering. Posts that were thin get expanded. Orphan pages get linked. Schema coverage goes from patchy to comprehensive. Internal link density increases across every site. And because every change is logged, I can trace the exact moment each improvement was made and correlate it with traffic changes.
This is not a one-time audit. It is an operating rhythm. The swarm runs every week whether I feel like it or not, because the system does not depend on my motivation. It depends on my infrastructure.
FAQ
How long does a full swarm take?
The automated audit across all sixteen sites completes in under two hours. Execution of the priority actions takes the rest of the week, depending on volume.
Do you use the same optimization standards for every site?
Yes. The SEO, AEO, and GEO framework is universal. What changes is the content strategy and keyword targeting, which are site-specific.
Can this approach work for smaller portfolios?
Absolutely. The infrastructure scales down just as easily. Even managing three sites through a unified proxy and command center eliminates massive inefficiency.