Tag: Google Cloud

  • The Site Factory: How One GCP Instance Runs 23 WordPress Sites With AI on Autopilot


    The Machine Room · Under the Hood

    TL;DR: We replaced 100+ isolated Cloud Run services with a single Compute Engine VM running 23 WordPress sites, a unified Content Engine, and autonomous AI workflows — cutting hosting costs to $15-25/site/month while launching new client sites in under 10 minutes.

    The Problem With One Site, One Stack

    When we started managing WordPress sites for clients at Tygart Media, each site got its own infrastructure: a Cloud Run container, its own database, its own AI pipeline, its own monitoring. At 5 sites, this was manageable. At 15, it was expensive. At 23, it was architecturally insane — over 100 Cloud Run services spinning up and down, each billing independently, each requiring separate deployments and credential management.

    The monthly infrastructure cost was approaching $2,000 for what amounted to medium-traffic WordPress sites. The cognitive overhead was worse: updating a single AI optimization skill meant deploying it 23 times.

    So we built the Site Factory.

    Three-Layer Architecture

    The Site Factory runs on a three-layer model that separates shared infrastructure from per-site WordPress instances and AI operations.

    Layer 1: Shared Platform (GCP). A single Compute Engine VM hosts all 23 WordPress installations with a shared MySQL instance and a centralized BigQuery data warehouse. A single Content Engine — one Cloud Run service — handles all AI-powered content operations across every site. A Site Registry in BigQuery maps every site to its credentials, hosting configuration, and optimization schedule.
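A minimal sketch of what a registry lookup might look like, with an in-memory dict standing in for the BigQuery table; the column names (`db_name`, `wp_path`, `optimization_cron`) are illustrative, not our production schema:

```python
from dataclasses import dataclass

@dataclass
class SiteRecord:
    """One row of the Site Registry (field names are illustrative)."""
    domain: str
    db_name: str
    wp_path: str            # install directory on the shared VM
    vertical: str           # e.g. "legal", "healthcare"
    optimization_cron: str  # Cloud Scheduler cron expression

# In-memory stand-in for the BigQuery Site Registry table.
REGISTRY = {
    "example-client.com": SiteRecord(
        domain="example-client.com",
        db_name="wp_example_client",
        wp_path="/var/www/example-client.com",
        vertical="legal",
        optimization_cron="0 3 * * *",  # daily content scoring at 03:00
    ),
}

def lookup_site(domain: str) -> SiteRecord:
    """Every AI operation resolves its target site here before running."""
    if domain not in REGISTRY:
        raise KeyError(f"{domain} is not in the Site Registry")
    return REGISTRY[domain]
```

Because every credential and schedule lives in one table, the Content Engine never hard-codes per-site configuration.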

    Layer 2: Per-Site WordPress. Each WordPress installation lives in its own directory on the VM with its own database. They share the same PHP runtime, Nginx configuration, and SSL certificates, but their content and configurations are completely isolated. Hosting cost per site: $15-25/month, compared to $80-150/month on containerized Cloud Run.

    Layer 3: Claude Operations. This is where the Expert-in-the-Loop architecture meets WordPress at scale. Routine operations — SEO scoring, schema injection, internal linking audits, AEO refreshes — run autonomously via Cloud Scheduler. Strategic operations — content strategy, complex article writing, taxonomy redesign — route to an interactive AI session where Claude operates as a system administrator with full context about every site in the registry.

    The Model Router

    Not every AI task requires the same model. Schema injection? Haiku handles it in 2 seconds at $0.001. A nuanced 2,000-word article on luxury asset lending? That’s Opus territory. SERP data extraction? Gemini is faster and cheaper.

    The Model Router is a centralized Cloud Run service that accepts task requests and dynamically routes them to the cheapest capable model on Vertex AI. It evaluates task complexity, required output length, and domain specificity, then selects the optimal model. This alone cut our AI compute costs by 40% compared to routing everything through a single frontier model.
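The routing decision itself is simple enough to sketch. The tiers, task names, and token thresholds below are illustrative stand-ins, not the production routing table:

```python
def route_task(task_type: str, output_tokens: int) -> str:
    """Pick the cheapest capable model for a task (illustrative tiers)."""
    MECHANICAL = {"schema_injection", "meta_rewrite", "alt_text"}
    EXTRACTION = {"serp_extraction", "data_scrape"}

    if task_type in MECHANICAL and output_tokens <= 1_000:
        return "claude-haiku"    # fast, fractions of a cent per call
    if task_type in EXTRACTION:
        return "gemini-flash"    # cheapest for structured extraction
    if output_tokens > 1_500 or task_type == "long_form_article":
        return "claude-opus"     # nuanced long-form writing
    return "claude-sonnet"       # default mid-tier
```

Note that a mechanical task with a long required output still escalates past Haiku: the router weighs output length, not just task type.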

    10-Minute Site Launch

    Adding a new client site to the factory takes 5 configuration steps and under 10 minutes:

1. Register the domain and SSL certificate in Nginx.
2. Create the WordPress database and installation directory.
3. Add the site to the BigQuery Site Registry with credentials and vertical classification.
4. Run the initial site audit to establish a content baseline.
5. Enable the autonomous optimization schedule.
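The five steps can be sketched as an ordered command checklist. The commands are illustrative, not the production scripts: `registry_add.py` and `audit.py` are hypothetical helper names, and a real `wp core install` would need more flags:

```python
def provisioning_plan(domain: str) -> list[str]:
    """The five launch steps as an ordered (illustrative) command checklist."""
    slug = domain.replace(".", "_").replace("-", "_")
    return [
        f"certbot --nginx -d {domain}",                                      # 1. domain + SSL in Nginx
        f"wp core install --path=/var/www/{domain} --url=https://{domain}",  # 2. database + WordPress install
        f"python registry_add.py --domain {domain}",                         # 3. row in the BigQuery Site Registry
        f"python audit.py --site {domain} --baseline",                       # 4. initial content audit
        f"gcloud scheduler jobs resume optimize-{slug}",                     # 5. enable the optimization schedule
    ]
```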

    From that point, the site receives the same AI optimization pipeline as every other site in the factory: daily content scoring, weekly SEO/AEO refreshes, monthly schema audits, and continuous internal linking optimization. No additional infrastructure. No new Cloud Run services. No incremental hosting cost beyond the shared VM allocation.

    Self-Healing Loop

    At 23 sites, things break. APIs rate-limit. WordPress plugins conflict. SSL certificates expire. The Self-Healing Loop monitors every site and every API endpoint continuously.

    When a WordPress REST API call fails, the system retries with exponential backoff. If the failure persists, it falls back to WP-CLI over SSH. If the site is completely unreachable, it triggers a Slack alert to the operations channel and pauses that site’s optimization schedule until the issue is resolved.
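That retry-then-degrade ladder can be sketched as a small wrapper. The three callables are injected stand-ins for the real REST client, the WP-CLI-over-SSH runner, and the Slack alerter; the names are illustrative:

```python
import time

def call_with_healing(rest_call, wpcli_fallback, alert, retries=3, base_delay=1.0):
    """Retry a WordPress REST call with exponential backoff, then degrade."""
    for attempt in range(retries):
        try:
            return rest_call()
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s by default
    try:
        return wpcli_fallback()                    # fall back to WP-CLI over SSH
    except Exception:
        alert("site unreachable; pausing optimization schedule")
        return None
```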

    For AI model failures, the Model Router implements automatic fallback: if Opus returns a 429 (rate limited), the task routes to Sonnet. If Sonnet fails, it queues for batch processing overnight at reduced rates. No task is ever dropped — only deferred.
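A minimal sketch of the fallback cascade, with a `RuntimeError` standing in for an HTTP 429 from the real API and a plain list standing in for the batch queue:

```python
def run_with_fallback(task, clients, batch_queue):
    """Cascade through models in cost order; defer to batch, never drop."""
    for model in ("claude-opus", "claude-sonnet"):
        try:
            return model, clients[model](task)
        except RuntimeError:          # rate-limited (429) or transient failure
            continue
    batch_queue.append(task)          # overnight batch at reduced rates
    return "batch", None
```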

    Cross-Site Intelligence

    The real power of the Site Factory isn’t cost reduction — it’s the intelligence layer that emerges when 23 sites share a single data warehouse. BigQuery holds content performance data, keyword rankings, schema coverage, and information density scores for every post on every site.

    This enables cross-site pattern recognition that’s impossible when sites operate in isolation. When an article format performs well on one site, the system can identify similar opportunities across all 22 other sites. When a keyword strategy drives organic growth in one vertical, the Content Engine can adapt that strategy for adjacent verticals automatically.
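A toy version of that cross-site scan, assuming warehouse rows with `site`, `format`, and `avg_score` columns; the schema and the 80-point threshold are illustrative:

```python
def cross_site_opportunities(rows, threshold=80):
    """Surface content formats that score well somewhere but are missing elsewhere."""
    winners = {r["format"] for r in rows if r["avg_score"] >= threshold}
    formats_by_site = {}
    for r in rows:
        formats_by_site.setdefault(r["site"], set()).add(r["format"])
    return {
        site: sorted(winners - have)       # winning formats this site lacks
        for site, have in formats_by_site.items()
        if winners - have
    }
```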

    The Site Factory isn’t a hosting solution. It’s an operating system for AI-powered content operations — one that gets smarter with every site we add.

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The Site Factory: How One GCP Instance Runs 23 WordPress Sites With AI on Autopilot",
  "description": "One GCP Compute Engine VM, 23 WordPress sites, autonomous AI optimization, $15-25/site/month hosting costs, and new client sites launching in under 10 minutes.",
  "datePublished": "2026-03-30",
  "dateModified": "2026-04-03",
  "author": {
    "@type": "Person",
    "name": "Will Tygart",
    "url": "https://tygartmedia.com/about"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Tygart Media",
    "url": "https://tygartmedia.com",
    "logo": {
      "@type": "ImageObject",
      "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
    }
  },
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://tygartmedia.com/the-site-factory-how-one-gcp-instance-runs-23-wordpress-sites-with-ai-on-autopilot/"
  }
}

  • Service Account Keys, Vertex AI, and the GCP Fortress


    The Machine Room · Under the Hood

    For regulated verticals (HIPAA, financial services, legal), we build isolated AI infrastructure on Google Cloud using service accounts, VPCs, and restricted APIs. This gives us Vertex AI and Claude capabilities without compromising data isolation or compliance requirements.

    The Compliance Problem
    Some clients operate in verticals where data can’t flow through public APIs. A healthcare client can’t send patient information to Claude’s public API. A financial services client can’t route transaction data through external language models.

    But they still want AI capabilities: document analysis, content generation, data extraction, automation.

    The solution: isolated GCP infrastructure that clients own, that uses service accounts with restricted permissions, and that keeps data inside their VPC.

    The Architecture
    For each regulated client, we build:

    1. Isolated GCP Project
    Their own Google Cloud project, separate billing, separate service accounts, zero shared infrastructure with other clients.

    2. Service Account with Minimal Permissions
    A service account that can only:
    – Call Vertex AI APIs (nothing else)
    – Write to their specific Cloud Storage bucket
    – Log to their Cloud Logging instance
    – No ability to access other projects, no IAM changes, no network modifications
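The grant set above maps to three IAM bindings. The role names are real GCP roles; everything else is illustrative — the service account email is hypothetical, the binding shape mirrors `gcloud projects get-iam-policy` output, and the storage grant would actually be attached at the bucket level so it scopes to one bucket:

```python
def minimal_bindings(sa_email: str) -> list[dict]:
    """The locked-down grant set for the client's AI service account."""
    member = f"serviceAccount:{sa_email}"
    return [
        {"role": "roles/aiplatform.user",     "members": [member]},  # Vertex AI calls only
        {"role": "roles/storage.objectAdmin", "members": [member]},  # their bucket (bucket-level binding)
        {"role": "roles/logging.logWriter",   "members": [member]},  # write to Cloud Logging
    ]
```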

    3. Private VPC
All Vertex AI calls happen inside their VPC. Data never leaves Google’s network for the public internet.

    4. Vertex AI for Regulated Workloads
We use Vertex AI’s enterprise models (Claude, Gemini) instead of the public APIs. Requests stay inside their VPC and authenticate with their service account. Zero external API calls for language model inference.

    The Data Flow
    Example: A healthcare client wants to analyze patient documents.
    – Client uploads PDF to their Cloud Storage bucket
    – Cloud Function (with restricted service account) triggers
    – Function reads the PDF
    – Function sends to Vertex AI Claude endpoint (inside their VPC)
    – Claude extracts structured data from the document
    – Function writes results back to client’s bucket
    – Everything stays inside the VPC, inside the project, inside the isolation boundary
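The flow above reduces to one orchestration function once the I/O is injected. All names here are illustrative stand-ins for the Cloud Storage read, the in-VPC Vertex AI Claude call, and the result write-back:

```python
def analyze_document(bucket, name, read_pdf, vertex_extract, write_result):
    """One pass of the document pipeline with its I/O injected as callables."""
    pdf_bytes = read_pdf(bucket, name)          # read from the client's bucket
    structured = vertex_extract(pdf_bytes)      # inference never leaves the VPC
    out_name = name.rsplit(".", 1)[0] + ".json"
    write_result(bucket, out_name, structured)  # results land back in the bucket
    return out_name
```

Keeping the orchestration free of hard-coded clients also makes the isolation boundary testable: the same function runs against fakes in CI and real clients in production.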

    The client can audit every API call, every service account action, every network flow. Full compliance visibility.

    Why This Matters for Compliance
– HIPAA: Patient data never leaves the healthcare client’s infrastructure
– PCI-DSS: Payment data stays inside their isolated environment
– GDPR: EU data can be processed in their EU GCP region
– FedRAMP: For government clients, we can build on GCP’s FedRAMP-certified infrastructure

    The Service Account Model
    Service accounts are the key to this. Instead of giving Claude/Vertex AI direct access to client data, we create a bot account that:

1. Holds zero standing permissions
2. Can access only specific resources (their bucket, their dataset)
3. Can run only specific operations (Vertex AI API calls)
4. Uses short-lived credentials that can be revoked immediately
5. Logs every action under its service account ID

So even if Vertex AI were compromised, it couldn’t access other clients’ data. Even if the service account were compromised, it could do nothing except make Vertex AI calls scoped to that one bucket.

    The Cost Trade-off
    – Shared GCP account: ~$300/month for Claude/Vertex AI usage
– Isolated GCP project per client: ~$400-600/month per client (higher due to per-project overhead)

    That premium ($100-300/month per client) is the cost of compliance. Most regulated clients are willing to pay it.

    What This Enables
    – Healthcare clients can use Claude for chart analysis, clinical note generation, patient data extraction
    – Financial clients can use Claude for document analysis, regulatory reporting, trade summarization
    – Legal clients can use Claude for contract analysis, case law research, document review
    – All without violating data residency, compliance, or isolation requirements

    The Enterprise Advantage
    This is where AI agencies diverge from freelancers. Most freelancers can’t build compliant AI infrastructure. You need GCP expertise, service account management knowledge, and regulatory understanding.

    But regulated verticals are where the money is. A healthcare data extraction project can be worth $50K+. A financial compliance project can be $100K+. The infrastructure investment pays for itself on the first client.

    If you’re only doing public API integrations, you’re leaving regulated verticals entirely on the table. Build the fortress. The clients are waiting.

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Service Account Keys, Vertex AI, and the GCP Fortress",
  "description": "For regulated verticals, we build isolated GCP projects with service accounts and restricted Vertex AI access. Here’s the compliance architecture for heal",
  "datePublished": "2026-03-30",
  "dateModified": "2026-04-03",
  "author": {
    "@type": "Person",
    "name": "Will Tygart",
    "url": "https://tygartmedia.com/about"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Tygart Media",
    "url": "https://tygartmedia.com",
    "logo": {
      "@type": "ImageObject",
      "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
    }
  },
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://tygartmedia.com/service-account-keys-vertex-ai-and-the-gcp-fortress/"
  }
}