Tag: GCP

  • Fractional AI Content Infrastructure — Build the Machine, Not Just the Content

    What Is Fractional AI Content Infrastructure?
    Fractional AI Content Infrastructure is a consulting engagement where Will Tygart comes in — for a defined period, at a fraction of the cost of a full-time hire — and builds the complete AI-native content operation your business needs: GCP pipelines, WordPress automation, Claude AI orchestration, Notion operating system, BigQuery memory layer, image generation, and social distribution. He builds the machine. You run it.

    Most businesses hiring for “AI content” are looking for a writer who uses ChatGPT. That’s not this. This is for the operator who has looked at what AI-native content infrastructure actually requires — Claude API, Cloud Run services, WordPress REST API, vector embeddings, image generation pipelines, persistent memory layers — and realized they need someone who has already built all of it, not someone who will figure it out on their dime.

    We run 27+ WordPress client sites, 122+ GCP Cloud Run services, and a content operation that produces hundreds of optimized posts per month across multiple verticals. That infrastructure didn’t come from a playbook — it came from building, breaking, and rebuilding. The fractional engagement transfers that operational knowledge into your business in weeks, not years.

    Who This Is For

    Agencies scaling past what manual workflows can handle. Publishers who need content velocity they can’t hire for. B2B companies that have decided AI content infrastructure is a competitive advantage and want it built right the first time. If you’re spending more than $5,000/month on content production and still doing it mostly manually — this conversation is worth having.

    What Gets Built

    • GCP content pipeline — Cloud Run publisher, WordPress proxy, Imagen 4 image generation, Batch API routing — the full automated brief-to-publish stack
    • Claude AI orchestration — Model tier routing (Haiku/Sonnet/Opus), prompt libraries per content type, quality gate implementation, cross-site contamination prevention
    • Notion Second Brain OS — 6-database Command Center architecture, claude_delta metadata standard, AI session context infrastructure
    • BigQuery knowledge ledger — Persistent AI memory layer, Vertex AI embeddings, session-to-session context continuity
    • WordPress multi-site operations — Site registry, credential management, taxonomy architecture, SEO/AEO/GEO optimization pipeline across all sites
    • Social distribution layer — Metricool + Canva + Claude pipeline, platform-native voice profiles, scheduled distribution from WordPress content
    • Skills library — Documented, repeatable skill files for every operation — so the system runs without Will after the engagement ends

    Engagement Models

| Model | What It Is | Right For |
|---|---|---|
| Infrastructure Sprint | 30-day focused build — one stack, fully deployed, handed off with documentation | Agencies needing a specific pipeline built fast |
| Fractional Quarter | 90-day engagement — full stack built, team trained, operations running | Publishers and B2B companies standing up a full AI content operation |
| Strategic Advisory | Ongoing async advisory — architecture review, pipeline troubleshooting, new capability design | Teams that have the technical staff but need senior AI content ops judgment |

    What You Get vs. a Full-Time Hire vs. an AI Agency

| | Fractional AI Infrastructure | Full-Time AI Hire | AI Content Agency |
|---|---|---|---|
| Proven at scale before engagement starts | ✓ | Unknown | Rarely |
| GCP + Claude + WordPress stack expertise | ✓ | Rare combination | — |
| Builds infrastructure you own | ✓ | — | ❌ (you rent theirs) |
| Documented skills library handed off | ✓ | Maybe | — |
| Cost vs. full-time senior hire | Fraction | $150k+/yr | Retainer + markup |
| Available without 6-month commitment | ✓ | Usually no | — |

    Ready to Build the Machine?

    Describe what you’re trying to build or what’s breaking in what you already have. Will will tell you honestly whether a fractional engagement is the right fit — and if it’s not, which of the productized services is.

    Email Will

    Email only. Honest scoping conversation, not a sales pitch.

    Frequently Asked Questions

    What’s the minimum engagement size?

    The Infrastructure Sprint is the minimum — a 30-day focused build on one specific pipeline or stack component. Smaller individual needs are better served by the productized services (GCP Content Pipeline Setup, Notion Second Brain Setup, etc.) which have fixed scopes and prices.

    Do you work with teams or just solo operators?

    Both. Solo operators get a full stack built around their workflows. Teams get infrastructure built plus documentation and handoff training so internal staff can operate and extend it independently after the engagement.

    What does the skills library handoff actually include?

    Every repeatable operation gets a documented skill file — a structured prompt and workflow document that tells Claude (or any AI) exactly how to execute the operation correctly. At the end of the engagement, you have a library of skills covering every pipeline we built together. The operation runs without Will because the intelligence is in the skills, not in his head.

    Is this available for businesses outside the content and SEO space?

    The infrastructure patterns — GCP pipelines, Claude AI orchestration, Notion OS, BigQuery memory — apply to any knowledge-intensive business producing content at volume. The vertical expertise (restoration, luxury lending, healthcare, SaaS) is a bonus for clients in those niches, not a requirement for everyone else.

    Last updated: April 2026

  • BigQuery Knowledge Ledger — Persistent AI Memory for Content Operations

    What Is a BigQuery Knowledge Ledger?
    A BigQuery Knowledge Ledger is a persistent AI memory layer — your content, decisions, SOPs, and operational history stored as vector embeddings in Google BigQuery, queryable in real time. When a Claude session opens, you query the ledger instead of re-pasting context. Your AI starts informed, not blank.

    Every Claude session starts from zero. You re-brief it on your clients, your sites, your decisions, your rules. Then the session ends and it forgets. For casual use, that’s fine. For an operation running 27 WordPress sites, 500+ published articles, and dozens of active decisions — that reset is an expensive tax on every session.

    The BigQuery Knowledge Ledger is the solution we built for ourselves. It stores operational knowledge as vector embeddings — 925 content chunks across 8 tables in our production ledger — and makes it queryable from any Claude session. The AI doesn’t start blank. It starts with history.

    Who This Is For

    Agency operators, publishers, and AI-native teams running multi-site content operations where the cost of re-briefing AI across sessions is measurable. If you’ve ever said “as I mentioned before” to Claude, you need this.

    What We Build

    • BigQuery dataset — operations_ledger schema with 8 tables: knowledge pages, embedded chunks, session history, client records, decision log, content index, site registry, and change log
    • Embedding pipeline — Vertex AI text-embedding-005 model processes your existing content (Notion pages, SOPs, articles) into vector chunks stored in BigQuery
    • Query interface — Simple Python function (or Cloud Run endpoint) that accepts a natural language query and returns the most relevant chunks for context injection
    • Claude integration guide — How to query the ledger at session start and inject results into your Claude context window
    • Initial seed — We process your existing Notion pages, key SOPs, and site documentation into the ledger on setup
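The query interface amounts to a similarity search over stored chunk vectors. A minimal in-memory sketch (the production version embeds the query with Vertex AI text-embedding-005 and runs the search against BigQuery; here toy 3-dimensional vectors and a Python list stand in for both, and all names and chunk texts are hypothetical):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def query_ledger(query_vec, chunks, top_k=2):
    # Rank stored chunks by similarity to the query vector and
    # return the top_k chunk texts for context injection.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["embedding"]), reverse=True)
    return [c["text"] for c in ranked[:top_k]]

# Toy ledger rows: in production these live in BigQuery with
# 768-dimension embeddings from text-embedding-005.
ledger = [
    {"text": "Site A uses WP Engine; custom WAF header required.", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Client B tone guide: formal, no emojis.", "embedding": [0.1, 0.9, 0.1]},
    {"text": "Taxonomy decision: merge tags into categories.", "embedding": [0.2, 0.2, 0.9]},
]

print(query_ledger([0.85, 0.15, 0.05], ledger, top_k=1))
```

In the deployed version the cosine ranking would run as a SQL query over the embedded-chunks table rather than in Python, but the shape of the operation is the same.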

    What We Deliver

    • BigQuery dataset + 8-table schema deployed to your GCP project
    • Vertex AI embedding pipeline (text-embedding-005)
    • Query function (Python + optional Cloud Run endpoint)
    • Initial content seed (up to 100 Notion pages or documents)
    • Claude session integration guide
    • Ongoing ingestion script (add new content to the ledger)
    • Technical walkthrough + handoff documentation

    Stop Re-Briefing Your AI Every Session

    Tell us how many sites, documents, or SOPs you’re managing and what your current re-briefing tax looks like. We’ll scope the ledger build.

    will@tygartmedia.com

    Email only. No sales call required.

    Frequently Asked Questions

    Does this require Google Cloud?

    Yes. BigQuery and Vertex AI are Google Cloud services. You need a GCP project with billing enabled. We handle all setup and deployment.

    What’s the ongoing cost in GCP?

    BigQuery storage for a 1,000-chunk ledger costs less than $1/month. Embedding runs (adding new content) cost fractions of a cent per chunk via Vertex AI. Query costs are negligible at typical session volumes.
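That figure is easy to sanity-check with rough numbers. A back-of-envelope sketch, assuming roughly 8 KB per chunk (about 1 KB of text plus a 768-dimension FLOAT64 embedding) and BigQuery's approximately $0.02 per GB-month active-storage price; both figures are approximations, not quoted rates:

```python
# Back-of-envelope ledger storage cost. All figures are approximate.
chunks = 1_000
bytes_per_chunk = 8_000          # ~1 KB text + 768 x 8-byte embedding values
gb = chunks * bytes_per_chunk / 1e9
monthly_cost = gb * 0.02         # ~$0.02 per GB-month active storage
print(f"{gb:.4f} GB -> ${monthly_cost:.6f}/month")
```

Even with generous padding, a thousand-chunk ledger sits several orders of magnitude below the $1/month mark.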

    Can this work with tools other than Claude?

    Yes. The ledger is model-agnostic — it returns text chunks that can be injected into any LLM context. ChatGPT, Gemini, and Perplexity integrations all work with the same query interface.

    What format does my existing content need to be in?

    Notion pages (via API), plain text, markdown, or Google Docs. We handle the conversion and chunking during initial seed. PDFs and Word docs require an additional preprocessing step.
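Chunking during the initial seed can be sketched as an overlapping word-window split, so each chunk fits comfortably in an embedding call and context at chunk boundaries isn't lost. The window and overlap sizes below are illustrative, not the production settings:

```python
def chunk_text(text, max_words=120, overlap=20):
    # Split a document into overlapping word-window chunks.
    # The overlap repeats the tail of one chunk at the head of the
    # next so boundary context survives embedding.
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap
    return chunks

doc = ("word " * 300).strip()    # stand-in for a converted document
pieces = chunk_text(doc, max_words=120, overlap=20)
print(len(pieces))
```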

    Last updated: April 2026

  • GCP Content Pipeline Setup for AI-Native WordPress Publishers

    What Is a GCP Content Pipeline?
    A GCP Content Pipeline is a Google Cloud-hosted infrastructure stack that connects Claude AI to your WordPress sites — bypassing rate limits, WAF blocks, and IP restrictions — and automates content publishing, image generation, and knowledge storage at scale. It’s the back-end that lets a one-person operation run like a 10-person content team.

    Most content agencies are running Claude in a browser tab and copy-pasting into WordPress. That works until you’re managing 5 sites, 20 posts a week, and a client who needs 200 articles in 30 days.

    We run 122+ Cloud Run services across a single GCP project. WordPress REST API calls route through a proxy that handles authentication, IP allowlisting, and retry logic automatically. Imagen 4 generates featured images with IPTC metadata injected before upload. A BigQuery knowledge ledger stores 925 embedded content chunks for persistent AI memory across sessions.

    We’ve now productized this infrastructure so you can skip the 18 months it took us to build it.

    Who This Is For

    Content agencies, SEO publishers, and AI-native operators running multiple WordPress sites who need content velocity that exceeds what a human-in-the-loop browser session can deliver. If you’re publishing fewer than 20 posts a week across fewer than 3 sites, you probably don’t need this yet. If you’re above that threshold and still doing it manually — you’re leaving serious capacity on the table.

    What We Build

    • WP Proxy (Cloud Run) — Single authenticated gateway to all your WordPress sites. Handles Basic auth, app passwords, WAF bypass, and retry logic. One endpoint to rule all sites.
    • Claude AI Publisher — Cloud Run service that accepts article briefs, calls Claude API, optimizes for SEO/AEO/GEO, and publishes directly to WordPress REST API. Fully automated brief-to-publish.
    • Imagen 4 Proxy — GCP Vertex AI image generation endpoint. Accepts prompts, returns WebP images with IPTC/XMP metadata injected, uploads to WordPress media library. Four-tier quality routing: Fast → Standard → Ultra → Flagship.
    • BigQuery Knowledge Ledger — Persistent AI memory layer. Content chunks embedded via Vertex AI text-embedding-005, stored in BigQuery, queryable across sessions. Ends the “start from scratch” problem every time a new Claude session opens.
    • Batch API Router — Routes non-time-sensitive jobs (taxonomy, schema, meta cleanup) to Anthropic Batch API at 50% cost. Routes real-time jobs to standard API. Automatic tier selection.
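The last two bullets, model tier routing and Batch API routing, can be sketched as a single dispatch function. The job types and complexity thresholds below are invented for illustration; only the Haiku/Sonnet/Opus tier names come from the stack described above:

```python
# Illustrative job router: picks a Claude model tier and an API lane.
# Job types and thresholds are example placeholders, not the
# production routing table.

BATCHABLE = {"taxonomy", "schema", "meta_cleanup"}   # not time-sensitive
TIERS = {"light": "haiku", "standard": "sonnet", "flagship": "opus"}

def route(job_type, complexity):
    # Batch lane runs at roughly half the cost for jobs that can wait.
    lane = "batch" if job_type in BATCHABLE else "realtime"
    if complexity < 0.3:
        tier = TIERS["light"]
    elif complexity < 0.8:
        tier = TIERS["standard"]
    else:
        tier = TIERS["flagship"]
    return {"lane": lane, "model": tier}

print(route("taxonomy", 0.2))    # cheap model on the batch lane
print(route("article", 0.9))     # flagship model in real time
```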

    What You Get vs. DIY vs. n8n/Zapier

| | Tygart Media GCP Build | DIY from scratch | No-code automation (n8n/Zapier) |
|---|---|---|---|
| WordPress WAF bypass built in | ✓ | You figure it out | — |
| Imagen 4 image generation | ✓ | — | — |
| BigQuery persistent AI memory | ✓ | — | — |
| Anthropic Batch API cost routing | ✓ | — | — |
| Claude model tier routing | ✓ | — | — |
| Proven at 20+ posts/day | ✓ | Unknown | — |

    What We Deliver

    • WP Proxy Cloud Run service deployed to your GCP project
    • Claude AI Publisher Cloud Run service
    • Imagen 4 proxy with IPTC injection
    • BigQuery knowledge ledger (schema + initial seed)
    • Batch API routing logic
    • Model tier routing configuration (Haiku/Sonnet/Opus)
    • Site credential registry for all your WordPress sites
    • Technical walkthrough + handoff documentation
    • 30-day async support

    Prerequisites

    You need: a Google Cloud account (we can help set one up), at least one WordPress site with REST API enabled, and an Anthropic API key. Vertex AI access (for Imagen 4) requires a brief GCP onboarding — we walk you through it.
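Under the hood, publishing to WordPress reduces to an authenticated POST against the core REST endpoint. A minimal sketch using an application password and the standard /wp-json/wp/v2/posts route; the site URL and credentials are placeholders, and the production proxy layers retries and WAF handling on top of this:

```python
import base64
import json
from urllib import request

def build_post(title, content, status="draft"):
    # Minimal WordPress REST post payload.
    return {"title": title, "content": content, "status": status}

def publish(site_url, user, app_password, payload):
    # POST to /wp-json/wp/v2/posts using Basic auth with a
    # WordPress application password.
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req = request.Request(
        f"{site_url}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:   # network call; not executed here
        return json.loads(resp.read())

payload = build_post("Hello", "<p>First automated post.</p>")
print(payload)
```

Drafting first (`status="draft"`) and flipping to `publish` after a quality gate is a common safety pattern for automated pipelines.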

    Ready to Stop Copy-Pasting Into WordPress?

    Tell us how many sites you’re managing, your current publishing volume, and where the friction is. We’ll tell you exactly which services to build first.

    will@tygartmedia.com

    Email only. No sales call required. No commitment to reply.

    Frequently Asked Questions

    Do I need to know how to use Google Cloud?

    No. We build and deploy everything. You’ll need a GCP account and billing enabled — we handle the rest and document every service so you can maintain it independently.

    How is this different from using Claude directly in a browser?

    Browser sessions have no memory, no automation, no direct WordPress integration, and no cost optimization. This infrastructure runs asynchronously, publishes directly to WordPress via REST API, stores content history in BigQuery, and routes jobs to the cheapest model tier that can handle the task.

    Which WordPress hosting providers does the proxy support?

    We’ve tested and configured routing for WP Engine, Flywheel, SiteGround, Cloudflare-protected sites, Apache/ModSecurity servers, and GCP Compute Engine. Most hosting environments work out of the box — a handful need custom WAF bypass headers, which we configure per-site.
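Those per-site bypass headers typically reduce to a small override map merged into the proxy's default request headers. An illustrative sketch, with invented hostnames and placeholder values:

```python
# Hypothetical per-site header overrides layered on shared defaults.
DEFAULT_HEADERS = {"User-Agent": "wp-proxy/1.0", "Accept": "application/json"}

SITE_OVERRIDES = {
    "modsec-site.example": {"X-Requested-With": "XMLHttpRequest"},
    "cf-site.example": {"CF-Access-Client-Id": "placeholder-id"},
}

def headers_for(host):
    # Shared defaults first, then any site-specific WAF bypass headers.
    return {**DEFAULT_HEADERS, **SITE_OVERRIDES.get(host, {})}

print(headers_for("modsec-site.example"))
```

Hosts with no entry in the override map fall through to the defaults, so new sites work without configuration until a WAF proves otherwise.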

    What does the BigQuery knowledge ledger actually do?

    It stores content chunks (articles, SOPs, client notes, research) as vector embeddings. When you start a new AI session, you query the ledger instead of re-pasting context. Your AI assistant starts with history, not a blank slate.

    What’s the ongoing GCP cost?

    Highly variable by volume. For a 10-site agency publishing 50 posts/week with image generation, expect $50–$200/month in GCP costs. Cloud Run scales to zero when idle, so you’re not paying for downtime.

    Can this be expanded after initial setup?

    Yes — the architecture is modular. Each Cloud Run service is independent. We can add newsroom services, variant engines, social publishing pipelines, or site-specific publishers on top of the core stack.

    Last updated: April 2026