The Authentication Problem at Scale
When you manage one WordPress site, authentication is simple. You store the Application Password, make a REST API call, and move on. When you manage eighteen WordPress sites across different hosting providers, different server configurations, and different security plugins, authentication becomes the single biggest source of friction in your entire operation.
Every site has its own credentials. Every site has its own IP allowlist. Every site has its own rate limits. Every site has its own way of rejecting requests it does not like. I was spending more time debugging authentication failures than actually optimizing content.
The proxy solved all of it. One endpoint. One authentication layer. Eighteen sites behind it. The proxy handles credential routing, request formatting, error normalization, and retry logic. My agents talk to the proxy. The proxy talks to WordPress. The agents never touch WordPress directly.
How the Proxy Works
The proxy is a Cloud Run service deployed on GCP. It accepts REST API requests with custom headers that specify the target WordPress site, the API endpoint, and the authentication credentials. The proxy validates the request, authenticates with the target WordPress installation, forwards the request, and returns the response.
The authentication flow uses a proxy token for the first layer — proving that the request is coming from an authorized agent — and WordPress Application Passwords for the second layer — proving that the agent has permission to act on the specific site. Two layers of authentication, zero credential exposure in the agent code.
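The two layers can be sketched from the agent's side. This is a minimal illustration, not the proxy's published contract: the header names (`X-Proxy-Token`, `X-Target-Site`, `X-Target-Endpoint`, `X-WP-Authorization`) and the proxy URL are assumptions for the sake of the example.

```python
import base64

# Hypothetical proxy endpoint -- the real URL is not published.
PROXY_URL = "https://wp-proxy-example.a.run.app/forward"

def build_proxy_request(proxy_token, site, endpoint, app_user, app_password):
    """Assemble both authentication layers for one proxied REST call.

    Layer 1: the proxy token proves the caller is an authorized agent.
    Layer 2: the WordPress Application Password (standard HTTP Basic
    material) proves the agent may act on the target site.
    """
    wp_basic = base64.b64encode(f"{app_user}:{app_password}".encode()).decode()
    return {
        "url": PROXY_URL,
        "headers": {
            "X-Proxy-Token": proxy_token,               # layer 1: agent authorization
            "X-Target-Site": site,                      # which install to route to
            "X-Target-Endpoint": endpoint,              # e.g. /wp-json/wp/v2/posts/42
            "X-WP-Authorization": f"Basic {wp_basic}",  # layer 2: site credentials
        },
    }

req = build_proxy_request(
    "tok-example", "site-07", "/wp-json/wp/v2/posts/42", "agent", "abcd efgh ijkl"
)
```

The agent code never embeds WordPress credentials as constants; they are passed through per request, so nothing sensitive lives in the agent itself.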
Every request is logged with the target site, the endpoint, the response code, and the execution time. This gives me a complete audit trail of every API call made to every site in the portfolio. When something fails, I can trace the exact request that caused it.
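A per-request audit record with exactly those four fields might look like the sketch below; the field names are illustrative, not the proxy's actual schema.

```python
import json
import time

def audit_record(site, endpoint, status, elapsed_ms):
    # One JSON line per proxied call: enough to trace any failure later.
    return json.dumps({
        "ts": time.time(),        # when the call was made
        "site": site,             # target WordPress site
        "endpoint": endpoint,     # REST endpoint that was hit
        "status": status,         # HTTP response code
        "elapsed_ms": elapsed_ms, # execution time
    })
```

Emitting one JSON object per line keeps the trail greppable and trivially ingestible by Cloud Logging.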
Why Not Just Use WordPress Multisite?
WordPress Multisite solves a different problem. It puts multiple sites on one installation, which creates a single point of failure and makes it nearly impossible to use different hosting environments for different sites. My portfolio includes sites on dedicated servers, shared hosting, managed WordPress hosting, and GCP Compute Engine. Multisite cannot span these environments. The proxy can.
The proxy also preserves site independence. Each WordPress installation is fully autonomous. It has its own plugins, its own theme, its own database. If one site goes down, the others are completely unaffected. The proxy is stateless — it does not store any WordPress data. It just routes traffic.
Security Architecture
The proxy runs on Cloud Run with no public ingress except the authenticated endpoint. The proxy token is a 256-bit hash that rotates on a schedule. WordPress credentials are passed per-request in encrypted headers — they are never stored on the proxy itself.
Rate limiting is built into the proxy layer. Each site gets a maximum request rate that prevents accidental DDoS of client WordPress installations. If an agent goes haywire and tries to make 500 requests per minute to a single site, the proxy throttles it before the requests ever reach WordPress.
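One common way to implement a per-site cap like this is a token bucket, one bucket per target site. The sketch below is an assumption about the mechanism, not the proxy's actual code; the rate and burst numbers are placeholders.

```python
import time

class SiteRateLimiter:
    """Token-bucket limiter with one bucket per target site.

    rate: tokens refilled per second; burst: bucket capacity.
    The injectable clock makes the limiter testable without sleeping.
    """

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate, burst, clock
        self.buckets = {}  # site -> (tokens, last_refill_time)

    def allow(self, site):
        now = self.clock()
        tokens, last = self.buckets.get(site, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[site] = (tokens - 1, now)
            return True   # forward the request to WordPress
        self.buckets[site] = (tokens, now)
        return False      # throttle before it reaches WordPress
```

A runaway agent hammering one site drains that site's bucket and gets throttled, while traffic to the other seventeen sites is unaffected.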
The proxy also normalizes error responses. Different WordPress installations return errors in different formats depending on their server configuration and security plugins. The proxy catches these variations and returns a consistent error format to the agent, which simplifies error handling in every skill and pipeline that uses it.
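A normalizer for this might look like the following. The three cases handled here, a standard WP REST error object, a security-plugin HTML block page, and a bare server string, are assumed examples rather than an exhaustive list of what the real proxy catches.

```python
def normalize_error(site, status, body):
    """Collapse the varied error shapes WordPress stacks emit
    into one consistent format for the agents."""
    if isinstance(body, dict) and "code" in body:
        # Standard WP REST error, e.g. {"code": "rest_forbidden", ...}
        code, message = body["code"], body.get("message", "")
    elif isinstance(body, str) and body.lstrip().startswith("<"):
        # Security plugin or server returned an HTML block page.
        code, message = "blocked_html_response", "request blocked before reaching WP"
    else:
        # Anything else: keep a truncated copy for the audit trail.
        code, message = "unknown_error", str(body)[:200]
    return {"site": site, "status": status, "code": code, "message": message}
```

With every failure reduced to the same four keys, a skill only needs one error-handling path regardless of which site, host, or plugin produced the failure.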
The Credential Registry
Every site’s credentials live in a unified skill registry — a single document that maps site names to their WordPress URL, API user, Application Password, and any site-specific configuration. When a new site is onboarded, it gets a registry entry. When an agent needs to interact with a site, it pulls the credentials from the registry and passes them to the proxy.
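A registry entry along those lines, expressed as a Python mapping for illustration (the field names and values are hypothetical, not the author's actual schema):

```python
# One entry per onboarded site. Adding a site means adding an entry here.
REGISTRY = {
    "site-07": {
        "url": "https://example-site-07.com",
        "api_user": "proxy-agent",
        "app_password": "xxxx xxxx xxxx xxxx",  # WordPress Application Password
        "config": {"timeout_s": 30},            # site-specific settings
    },
}

def credentials_for(site):
    """Look up a site's credentials; fail loudly if it was never onboarded."""
    entry = REGISTRY.get(site)
    if entry is None:
        raise KeyError(f"site {site!r} is not in the registry")
    return entry
```

Because every agent resolves credentials through this one lookup at request time, rotating a password is a single registry edit and takes effect on the very next call.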
This centralization is critical for credential rotation. When a site’s Application Password needs to change, I update one registry entry. Every agent, every pipeline, every skill that touches that site automatically uses the new credentials on the next request. No code changes. No deployment. One update, instant propagation.
Performance at Scale
Cloud Run auto-scales based on request volume. During a content swarm, when I am running optimization passes across all eighteen sites simultaneously, the proxy handles hundreds of concurrent requests without breaking a sweat. Cold starts take under two seconds, and warm instances add under 200 milliseconds of proxy overhead per request.
The total cost is remarkably low. Cloud Run charges per request and per compute second, and at my volume of roughly 5,000 to 10,000 API calls per week, the monthly bill is negligible. That is the price of eliminating every authentication headache across eighteen WordPress sites.
What I Would Do Differently
If I were building the proxy from scratch today, I would add request caching for read operations. Many of my audit workflows fetch the same post data multiple times across different optimization passes. A short-lived cache at the proxy layer would cut API calls by 30 to 40 percent.
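The short-lived cache described above could be as simple as a TTL map keyed by site and endpoint. This is a sketch of the idea, not existing proxy code; the 60-second TTL is an arbitrary placeholder.

```python
import time

class ReadCache:
    """Short-lived cache for GET responses, keyed by (site, endpoint).

    Only read operations should pass through this; writes must always
    reach WordPress directly.
    """

    def __init__(self, ttl_s=60, clock=time.monotonic):
        self.ttl_s, self.clock = ttl_s, clock
        self.store = {}  # (site, endpoint) -> (cached_at, response)

    def get(self, site, endpoint):
        hit = self.store.get((site, endpoint))
        if hit and self.clock() - hit[0] < self.ttl_s:
            return hit[1]   # fresh enough: skip the WordPress round trip
        return None         # miss or expired: caller fetches from WordPress

    def put(self, site, endpoint, response):
        self.store[(site, endpoint)] = (self.clock(), response)
```

Repeated fetches of the same post within one optimization pass would then hit the cache instead of WordPress, which is where the estimated 30 to 40 percent reduction in API calls would come from.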
I would also add webhook support for real-time notifications when WordPress posts are updated outside my pipeline. Right now, the proxy is request-response only. Adding an event layer would enable reactive workflows that trigger automatically when content changes.
FAQ
Can the proxy work with WordPress.com hosted sites?
No. It requires self-hosted WordPress with REST API access and Application Password support, which means WordPress 5.6 or later.
What happens if the proxy goes down?
All API operations pause until the proxy recovers. Cloud Run has a 99.95 percent uptime SLA, and this has not happened in production. The agents retry automatically.
How hard is it to add a new site to the proxy?
About five minutes. Add the credentials to the registry, verify the connection with a test request, and the site is live. No proxy code changes required.