Author: Will Tygart

  • Content Brief Factory — Brief-to-Publish Workflow for Multi-Site WordPress Operations

    Tygart Media / Content Strategy
    The Practitioner Journal · Field Notes
    By Will Tygart
    · Practitioner-grade
    · From the workbench

    What Is the Content Brief Factory?
    The Content Brief Factory is a brief-to-publish content workflow — starting from a target keyword and site, it produces a research-backed brief, writes the core article, identifies which audience personas need their own variant, generates those variants with AEO/GEO optimization baked in, and publishes everything directly to WordPress. One brief becomes a content cluster. One session handles what would take a week of manual work.

    Content agencies have a brief problem. Either briefs are too thin (keyword + title, nothing else) and writers guess at the angle, or briefs are so detailed that writing the article takes half as long as writing the brief. Neither scales when you’re managing content across 10 sites and 4 verticals simultaneously.

    We built the Adaptive Variant Pipeline to solve this for our own operation. The brief is structured but lightweight — keyword, site, intent, target persona. The pipeline does the research, writes the core article, then determines which personas genuinely need a different angle (not just a different intro) and generates those variants. Each variant gets AEO/GEO optimization applied before publish.
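    For the technically curious, the brief's shape can be sketched as a small data structure. Field names here are illustrative, not the pipeline's actual schema:

```python
# Illustrative brief structure -- field names and values are assumptions,
# not the pipeline's actual schema.
brief = {
    "keyword": "water damage restoration cost",
    "site": "example-restoration-site",   # hypothetical site slug
    "intent": "informational",
    "target_persona": "homeowner",
}

def validate_brief(b: dict) -> list[str]:
    """Return the required fields missing from a brief, sorted."""
    required = {"keyword", "site", "intent", "target_persona"}
    return sorted(required - b.keys())
```

    The point of the structure is that it is complete enough to drive research and persona selection, but small enough that filling it out takes minutes, not hours.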

    Who This Is For

    Content agencies and in-house content teams managing 3+ WordPress sites who need to produce multiple audience-targeted articles from a single research pass without duplicating work or diluting quality.

    What the Pipeline Produces From One Brief

    • Core article — 1,200–2,000 word pillar piece targeting the primary keyword with full SEO/AEO/GEO treatment
    • Persona variants — 2–5 audience-specific rewrites (e.g., homeowner vs. adjuster vs. contractor for restoration content) — only generated where genuine knowledge gap exists, not just reformatted intros
    • AEO layer — Definition box, FAQ section, speakable blocks on all variants
    • Schema — FAQPage + Article JSON-LD on every piece
    • Internal link map — Identified link opportunities to existing posts before publish

    What We Deliver in a Setup Engagement

    Included:
    Brief template customized to your verticals and sites
    Persona library (2–6 personas per site)
    AEO/GEO optimization checklist applied to pipeline
    WordPress REST API connection for direct publish
    First content cluster (3–5 pieces) executed as proof of concept
    Pipeline documentation + handoff

    Ready to Turn One Brief Into a Content Cluster?

    Tell us how many sites you’re managing, your current brief process, and where the bottleneck is. We’ll show you exactly where the pipeline compresses your workflow.

    will@tygartmedia.com

    Email only. No sales call required.

    Frequently Asked Questions

    How is this different from just using Claude to write articles?

    The pipeline adds structured brief intake, persona library application, adaptive variant logic (not fixed counts — only generates variants where genuine audience divergence exists), AEO/GEO optimization on every output, and direct WordPress publish via REST API. It’s a system, not a prompt.

    Can this be configured for a specific niche or vertical?

    Yes — and it should be. The persona library, brief template, and entity sets are all configured per-vertical during setup. A restoration pipeline looks completely different from a luxury lending pipeline.

    Does the content quality gate run on every piece?

    Yes. Every article passes through a cross-site contamination scan (ensuring no client content leaks between sites) and an unsourced claims scan before publish. Nothing goes live without passing the gate.
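    The contamination half of the gate is conceptually simple. A minimal sketch, assuming each site maintains a registry of other clients' terms (the term list below is a stand-in):

```python
def contamination_hits(text: str, other_client_terms: list[str]) -> list[str]:
    """Return terms from other clients' vocabularies found in this article.

    A real gate would pull per-site term registries; the caller supplies
    a stand-in list here.
    """
    lowered = text.lower()
    return [t for t in other_client_terms if t.lower() in lowered]

article = "Fast water extraction and drying for Portland homeowners."
hits = contamination_hits(article, ["Acme Flooring", "water extraction"])
```

    Any hit blocks publish until a human clears it.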


    Last updated: April 2026

  • WordPress Schema Injection Sprint — JSON-LD Structured Data for 20 Posts

    Tygart Media / Content Strategy
    The Practitioner Journal · Field Notes
    By Will Tygart
    · Practitioner-grade
    · From the workbench

    What Is a Schema Injection Sprint?
    A schema injection sprint is a concentrated pass across 20 WordPress posts — identifying the right JSON-LD structured data types for each post, generating valid schema markup, injecting it via WordPress REST API, and validating every post with Google’s Rich Results Test. In one sprint, 20 posts become eligible for rich result placements they weren’t eligible for before.

    Schema markup is one of the highest-leverage, most consistently skipped SEO tasks on WordPress sites. It’s not that operators don’t know it matters — it’s that doing it right on 20 posts manually takes hours, and most schema plugins produce bloated or invalid output that fails the Rich Results Test anyway.

    We inject schema programmatically. Every post gets the right schema type for its content — not a one-size-fits-all Article block — and every result is validated before we move on.

    Who This Is For

    WordPress sites with existing published content that aren’t appearing in rich result placements (FAQ accordions, HowTo steps, review stars) despite having the content to qualify. If your posts have FAQ sections but no FAQPage schema, you’re invisible to the placement Google is actively filling.

    Schema Types We Inject

    • FAQPage — For any post with a Q&A section. Produces FAQ accordion in Google results.
    • Article — Standard news/blog schema with author, publisher, datePublished, dateModified.
    • HowTo — For step-by-step content. Produces visual step display in rich results.
    • Service — For service landing pages. Signals service type, provider, and area served.
    • LocalBusiness — For location-specific content. Reinforces NAP data and service area.
    • BreadcrumbList — Site navigation schema. Applied to all posts in the sprint.
    • Speakable — Marks key paragraphs for voice search and AI synthesis.
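    A FAQPage block is straightforward to generate programmatically. A minimal sketch (the helper name is ours, not a WordPress API; the schema.org structure is standard):

```python
import json

def faqpage_jsonld(qa_pairs):
    """Build a FAQPage JSON-LD block from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }

block = faqpage_jsonld([
    ("What is schema markup?",
     "Structured data that tells search engines what a page contains."),
])
# Rendered for injection into post content:
script_tag = '<script type="application/ld+json">' + json.dumps(block) + "</script>"
```

    The other types follow the same pattern with different required properties, which is exactly why per-type generation beats a one-size-fits-all plugin block.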

    What We Deliver

    Included:
    Schema type selection for all 20 posts
    JSON-LD generation (valid, not plugin-bloated)
    REST API injection to all 20 posts
    Google Rich Results Test validation on every post
    Validation report with pass/fail per post
    Fix pass for any validation failures

    Ready to Make Your Content Rich-Result Eligible?

    Share your site URL and we’ll identify your 20 best candidates for schema injection based on content type and current ranking proximity.

    will@tygartmedia.com

    Email only. No sales call required.

    Frequently Asked Questions

    Will this conflict with my existing SEO plugin (Yoast, RankMath)?

    We inject schema as a separate JSON-LD block in the post content — it doesn’t touch plugin settings or plugin-generated schema. In most cases, the two coexist cleanly. If there’s duplication, we identify and resolve it during the validation pass.
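    Duplicate detection during the validation pass can be as simple as counting @type values across a page's JSON-LD blocks. A sketch (the regex assumes the plain script-tag form shown; real pages may need a proper HTML parser):

```python
import json
import re
from collections import Counter

LDJSON = re.compile(r'<script type="application/ld\+json">(.*?)</script>', re.S)

def schema_types(html: str) -> Counter:
    """Count @type values across all JSON-LD blocks in a page."""
    counts = Counter()
    for raw in LDJSON.findall(html):
        data = json.loads(raw)
        items = data if isinstance(data, list) else [data]
        for item in items:
            counts[item.get("@type", "unknown")] += 1
    return counts

def duplicated(html: str) -> list[str]:
    """Schema types that appear more than once (e.g. plugin + injected)."""
    return sorted(t for t, n in schema_types(html).items() if n > 1)
```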

    How quickly will rich results appear after injection?

    Google typically processes schema changes within 2–4 weeks for established sites. Rich result eligibility appears in Google Search Console after the next crawl cycle.

    Can you do more than 20 posts?

    Yes. We can run additional sprints of 20 posts or scope a full-site schema pass. Contact us with your post count and we’ll quote accordingly.


    Last updated: April 2026

  • WordPress Taxonomy Rebuild — Categories, Tags, and Slug Normalization at Scale

    Tygart Media / Content Strategy
    The Practitioner Journal · Field Notes
    By Will Tygart
    · Practitioner-grade
    · From the workbench

    What Is a WordPress Taxonomy Rebuild?
    A WordPress taxonomy rebuild is a structured cleanup of your site’s category and tag architecture — eliminating redundant categories, normalizing tag usage, fixing broken slugs, injecting SEO meta descriptions into taxonomy pages, and creating a logical content hierarchy that both users and search engines can navigate. It’s the foundation everything else in a WordPress SEO operation depends on.

    Most WordPress sites that have been publishing for more than a year have the same problem: category bloat. Posts assigned to three overlapping categories. Tags that are slightly different versions of each other (“Water Damage” and “water-damage-restoration” and “WaterDamage”). Taxonomy pages with no descriptions, no schema, and slugs that look like they were typed by different people on different days.

    We’ve fixed this on 18+ sites. The pattern is always the same, and the fix is always the same: audit, design, rebuild, inject, verify.

    Who This Is For

    WordPress site owners with 50+ published posts whose category and tag structure has grown organically (read: randomly) and is now a liability for SEO, user navigation, and content discoverability. Common trigger: you’re trying to do internal linking work and discover your categories are a mess.

    What the Rebuild Covers

    • Taxonomy audit — Full inventory of all categories, tags, post counts, and current slugs. Identification of duplicates, orphans, and bloat.
    • Architecture design — Clean category hierarchy built around your content verticals and search intent clusters. Typically 8–15 primary categories, 3–5 subcategories each where appropriate.
    • Tag normalization — Redundant tags merged, casing standardized, slug format normalized. Target: tags that mean something to a user, not internal filing codes.
    • Slug cleanup — All category and tag slugs rewritten to keyword-rich, stop-word-free format and redirects set.
    • SEO description injection — Two-layer descriptions written for every primary category: 140–160 character meta hook + 400–600 word editorial body that search engines can index.
    • Post reassignment — All existing posts reassigned to the new architecture via WordPress REST API. No manual clicking.
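    The reassignment call itself is a standard WordPress REST API request. A stdlib-only sketch of the request shape, assuming Application Password auth (the helper name is illustrative; the actual send is left to your HTTP client):

```python
import base64
import json

def reassignment_request(site: str, post_id: int, category_ids: list[int],
                         user: str, app_password: str):
    """Build the URL, headers, and body for a category reassignment
    via the standard /wp-json/wp/v2/posts endpoint."""
    url = f"https://{site}/wp-json/wp/v2/posts/{post_id}"
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"categories": category_ids})
    return url, headers, body
```

    Looping that over a few hundred posts is why the rebuild takes days, not weeks.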

    What We Deliver

    Included:
    Full taxonomy audit report
    New architecture design (categories + tags)
    REST API execution (slug changes, reassignment, descriptions)
    Redirect configuration for old slugs
    SEO descriptions for all primary categories
    Post-rebuild verification report

    Is Your Taxonomy Working Against You?

    Share your site URL and we’ll pull a quick category/tag inventory. If it’s a mess, we’ll tell you exactly what the rebuild involves.

    will@tygartmedia.com

    Email only. No commitment to reply.

    Frequently Asked Questions

    Will changing slugs break my existing links?

    Slug changes trigger 301 redirects from old URLs to new ones. Existing backlinks and bookmarks continue to work. We configure and verify redirects as part of the rebuild.
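    A slug map translates directly into redirect rules. An Apache-style sketch (your host's redirect mechanism may differ; the base path is illustrative):

```python
def redirect_rules(slug_map: dict[str, str], base: str = "/category") -> list[str]:
    """Render old->new slug pairs as Redirect 301 lines."""
    return [
        f"Redirect 301 {base}/{old}/ {base}/{new}/"
        for old, new in slug_map.items()
    ]
```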

    How long does a taxonomy rebuild take?

    Audit and design: 2–3 business days. Execution (REST API reassignment and description injection): 1–2 business days. Verification: 1 day. Total: 5–7 business days for most sites.

    Do you touch post content during the taxonomy rebuild?

    No. The rebuild operates only on taxonomy objects and post-to-taxonomy relationships. Post titles, content, and metadata are not modified during this process.


    Last updated: April 2026

  • BigQuery Knowledge Ledger — Persistent AI Memory for Content Operations

    The Lab · Tygart Media
    Experiment Nº 698 · Methodology Notes
    METHODS · OBSERVATIONS · RESULTS

    What Is a BigQuery Knowledge Ledger?
    A BigQuery Knowledge Ledger is a persistent AI memory layer — your content, decisions, SOPs, and operational history stored as vector embeddings in Google BigQuery, queryable in real time. When a Claude session opens, you query the ledger instead of re-pasting context. Your AI starts informed, not blank.

    Every Claude session starts from zero. You re-brief it on your clients, your sites, your decisions, your rules. Then the session ends and it forgets. For casual use, that’s fine. For an operation running 27 WordPress sites, 500+ published articles, and dozens of active decisions — that reset is an expensive tax on every session.

    The BigQuery Knowledge Ledger is the solution we built for ourselves. It stores operational knowledge as vector embeddings — 925 content chunks across 8 tables in our production ledger — and makes it queryable from any Claude session. The AI doesn’t start blank. It starts with history.

    Who This Is For

    Agency operators, publishers, and AI-native teams running multi-site content operations where the cost of re-briefing AI across sessions is measurable. If you’ve ever said “as I mentioned before” to Claude, you need this.

    What We Build

    • BigQuery dataset — operations_ledger schema with 8 tables: knowledge pages, embedded chunks, session history, client records, decision log, content index, site registry, and change log
    • Embedding pipeline — Vertex AI text-embedding-005 model processes your existing content (Notion pages, SOPs, articles) into vector chunks stored in BigQuery
    • Query interface — Simple Python function (or Cloud Run endpoint) that accepts a natural language query and returns the most relevant chunks for context injection
    • Claude integration guide — How to query the ledger at session start and inject results into your Claude context window
    • Initial seed — We process your existing Notion pages, key SOPs, and site documentation into the ledger on setup
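    Under the hood, retrieval is cosine similarity between the query embedding and stored chunk embeddings. In production the embeddings come from text-embedding-005 and the ranking runs in BigQuery; the pure-Python sketch below shows the same logic in miniature:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_chunks(query_vec, chunks, k=3):
    """chunks: list of (text, embedding). Return the k most similar texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

    The returned texts are what gets injected into the Claude context window at session start.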

    What We Deliver

    Included:
    BigQuery dataset + 8-table schema deployed to your GCP project
    Vertex AI embedding pipeline (text-embedding-005)
    Query function (Python + optional Cloud Run endpoint)
    Initial content seed (up to 100 Notion pages or documents)
    Claude session integration guide
    Ongoing ingestion script (add new content to ledger)
    Technical walkthrough + handoff documentation

    Stop Re-Briefing Your AI Every Session

    Tell us how many sites, documents, or SOPs you’re managing and what your current re-briefing tax looks like. We’ll scope the ledger build.

    will@tygartmedia.com

    Email only. No sales call required.

    Frequently Asked Questions

    Does this require Google Cloud?

    Yes. BigQuery and Vertex AI are Google Cloud services. You need a GCP project with billing enabled. We handle all setup and deployment.

    What’s the ongoing cost in GCP?

    BigQuery storage for a 1,000-chunk ledger costs less than $1/month. Embedding runs (adding new content) cost fractions of a cent per chunk via Vertex AI. Query costs are negligible at typical session volumes.

    Can this work with tools other than Claude?

    Yes. The ledger is model-agnostic — it returns text chunks that can be injected into any LLM context. ChatGPT, Gemini, and Perplexity integrations all work with the same query interface.

    What format does my existing content need to be in?

    Notion pages (via API), plain text, markdown, or Google Docs. We handle the conversion and chunking during initial seed. PDFs and Word docs require an additional preprocessing step.

    Last updated: April 2026

  • Restoration Golf League Setup — B2B Networking Through Golf for Trade Industries

    Tygart Media / Content Strategy
    The Practitioner Journal · Field Notes
    By Will Tygart
    · Practitioner-grade
    · From the workbench

    What Is a B2B Golf League for Trade Industries?
    A B2B golf league is a structured networking vehicle — not a scramble, not a charity event — designed to put contractors, adjusters, property managers, vendors, and referral partners on the same course repeatedly throughout a season. The relationship is the product. Golf is the excuse. The deals happen in the cart.

    Cold outreach in the restoration industry has a near-zero response rate. Trade shows are expensive and transactional. Referral relationships — the ones that produce consistent work — are built over time, in informal settings, with people who have chosen to spend 4 hours with you.

    The Restoration Golf League (RGL) is a restoration industry golf network active in the Pacific Northwest — one we sponsor and participate in as a B2B networking vehicle. It was built to solve a specific problem: how does a small restoration operator build relationships with adjusters, property managers, and general contractors without a sales team or a trade show budget? The answer turned out to be a golf league format that runs April through October.

    We’ve now documented the model so other trade operators can replicate it in their market.

    Who This Is For

    Restoration company owners, plumbing and HVAC operators, roofing contractors, and commercial flooring companies who sell primarily through relationships and want a repeatable, low-cost way to build and maintain those relationships in their local market. Also works for vendors and suppliers who want ongoing access to contractors.

    What the League Setup Includes

    • Format design — Scoring format, flight structure, handicap system, and round length optimized for business networking (not competitive golf)
    • Player acquisition strategy — Outreach templates, target list structure, LinkedIn and direct outreach playbook for filling the first season
    • Sponsor structure — Hole sponsorship, season sponsorship, and in-kind trade frameworks so the league pays for itself
    • Communication system — Email sequence, text reminder cadence, and post-round follow-up templates
    • Scoring and leaderboard — Simple tracking system that keeps players engaged between rounds
    • Season calendar — 6-round template with tee time blocks, course negotiation guidance, and rain date logic
    • The playbook — Full written documentation of the RGL model adapted to your market and vertical

    What We Deliver

    Included:
    Custom league format document for your vertical and market
    Player acquisition outreach templates (LinkedIn + direct)
    Sponsor package deck (customizable)
    Season communication sequence (email + text)
    Scoring tracker (Google Sheets)
    Course negotiation talking points
    90-minute strategy call with Will (RGL sponsor and participant)
    30-day async support through first round

    Ready to Build the Relationship Network Your Competitors Don’t Have?

    Tell us your trade vertical, your market (city/region), and roughly how many relationships you’re trying to build. We’ll tell you if the league model fits.

    will@tygartmedia.com

    Email only. No commitment to reply.

    Frequently Asked Questions

    Does this only work for restoration companies?

    No. The RGL model was built for restoration but the format works for any trade industry where relationship-based selling drives revenue — roofing, plumbing, HVAC, flooring, commercial cleaning, and specialty contractors all fit the model.

    How many players do you need to run a league?

    A minimum viable league runs with 16 players (4 foursomes). The sweet spot is 24–32 players, which gives you enough variation across rounds that players meet new people each time.

    What does it cost to run the league after setup?

    Highly variable by market and course. The RGL model targets sponsor coverage of all hard costs — green fees, cart fees, and prizes — so the operator’s only expense is time. Most leagues break even or generate modest surplus by season two.

    Do I need to be a good golfer to run this?

    No. The format is designed for mixed skill levels. The operator’s job is logistics and relationship cultivation, not competitive golf. A handicap isn’t required — a willingness to spend time with people is.

    Last updated: April 2026

  • AI Social Content Engine — Automated Social Media From Existing Content

    What Is an AI Social Content Engine?
    An AI Social Content Engine is a connected pipeline that takes your existing WordPress articles and raw ideas, converts them into platform-native social posts (LinkedIn, Facebook, Google Business Profile), generates matching visuals via Canva, and schedules everything through Metricool — automatically. One source, three distribution channels, zero social media manager.

    Most business owners know they should be posting consistently. Most aren’t. Not because they lack content — they’re sitting on dozens of published articles — but because reformatting a blog post into a LinkedIn carousel and a Facebook caption and a GBP update takes time they don’t have.

    We solved this for our own operation first. The pipeline reads a WordPress article, extracts the core argument, writes platform-specific posts for each channel in the right voice, queues visuals in Canva, and schedules everything in Metricool. One session produces a week of social content.

    Who This Is For

    Service businesses, agencies, and operators who are publishing content on WordPress but not distributing it socially at anything close to the rate they’re producing it. If you have a blog that nobody’s amplifying, this closes that gap without adding headcount.

    What the Pipeline Does

    • WordPress article intake — Reads published posts via REST API, extracts key arguments, data points, and quotable moments
    • Platform voice adaptation — Rewrites for each channel: LinkedIn (professional/insightful), Facebook (human/local), GBP (service-focused/local SEO)
    • Canva visual generation — Branded image templates populated with post-specific text via Canva API
    • Metricool scheduling — Posts queued to your Metricool planner with optimal timing per platform
    • Intake ritual for raw ideas — You share a thought, a voice note, or a link — the engine packages it into posts before you forget it
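    The routing logic can be sketched in a few lines. In the real engine each voice profile is a Claude prompt; the placeholder functions below stand in for that call:

```python
# Placeholder voice profiles -- in production each of these is an LLM call
# with a platform-specific prompt, not a string template.
VOICES = {
    "linkedin": lambda p: f"{p}\n\nWhat's your experience with this?",
    "facebook": lambda p: f"{p} 👇",
    "gbp":      lambda p: f"{p} Contact us to learn more.",
}

def adapt(point: str, platforms: list[str]) -> dict[str, str]:
    """Route one extracted article point to per-platform drafts."""
    return {pf: VOICES[pf](point) for pf in platforms if pf in VOICES}
```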

    What We Deliver

    Included:
    Metricool account connection and blog configuration
    Platform voice profiles (LinkedIn, Facebook, GBP)
    Claude API prompt library for each platform
    Canva template set (3 branded layouts)
    WordPress → social intake workflow documentation
    First content sprint (10 posts across platforms from your existing articles)
    30-day async support

    Stop Leaving Published Content Undistributed

    Tell us which platforms matter most and roughly how many WordPress posts you’re sitting on. We’ll scope the engine build.

    will@tygartmedia.com

    Email only. No sales call required.

    Frequently Asked Questions

    Does this require a Metricool paid plan?

    Metricool’s free plan supports limited scheduling. The engine works best on their Starter plan or above, which supports unlimited scheduled posts and GBP integration. We configure the connection regardless of plan tier.

    Do I need a Canva for Teams account?

    Canva Pro or Teams is required for API access and branded template management. Canva Free does not support the API integration.

    Can this work with my personal brand, not just a business?

    Yes. We’ve built this for personal brand publishing — the voice profiles are adapted to individual tone, not just company voice. LinkedIn personal profiles are supported in Metricool.

    How many posts per week does the engine produce?

    That’s a dial you control. The engine can produce 1–5 posts per platform per week depending on your content input volume and scheduling preferences.

    Last updated: April 2026

  • WordPress AEO/GEO Sprint — Featured Snippets and AI Citation Optimization

    Tygart Media // AEO & AI Search
    CH 03
    · Answer Engine Intelligence
    · Filed by Will Tygart

    What Is an AEO/GEO Sprint?
    An AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) Sprint is a structured retrofit of your existing WordPress content — restructuring posts so search engines surface them as direct answers, and AI systems cite them in generated responses. Not new content. Not a redesign. Your existing posts, optimized to win in a search landscape that now includes ChatGPT, Perplexity, and Google AI Overviews.

    Google’s search results page looks different than it did 18 months ago. AI Overviews now appear above the organic results. Perplexity cites specific pages instead of ranking a list. ChatGPT recommends sites it’s been trained to recognize as authoritative.

    If your existing content wasn’t built to answer questions directly, it won’t show up in any of those placements — regardless of how well it ranks for traditional SEO.

    We’ve applied this exact retrofit to over 500 posts across restoration, lending, flooring, SaaS, healthcare, and entertainment verticals. We know what changes produce featured snippet captures, what entity patterns make AI systems cite a page, and which schema structures Google’s rich results tool actually validates.

    Who This Is For

    WordPress site owners and operators with existing published content — at least 20 posts — who aren’t appearing in AI-generated answers or featured snippet placements. If you’ve been publishing consistently but not converting that content into search placements that existed 18 months ago, this sprint directly addresses that gap.

    What the Sprint Covers (Per Post)

    • Definition box insertion — 40–60 word direct answer block at the top of the post, formatted for featured snippet capture
    • Question-led H2 restructure — Key headings rewritten as questions with direct answers in the first 50 words following each heading
    • FAQPage section — 5–8 Q&As written for People Also Ask placement, with FAQPage JSON-LD schema
    • Speakable schema blocks — Key paragraphs marked with speakable schema for voice search and AI synthesis
    • Entity saturation pass — Named entities (organizations, certifications, standards bodies, locations) identified and injected throughout
    • External citation injection — 3–5 authoritative source references added per post
    • Article + BreadcrumbList schema — Complete JSON-LD block appended to each post
    • LLMS.TXT comment block — AI-readable seed paragraph added as HTML comment for LLM citation signals
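    The definition box step, for example, is a mechanical prepend. A sketch (the class name is illustrative, not a required convention):

```python
def insert_definition_box(post_html: str, term: str, definition: str) -> str:
    """Prepend a snippet-formatted definition block to a post's HTML.

    The definition should be a 40-60 word direct answer; the div class
    is a placeholder, not a required convention.
    """
    box = (
        '<div class="definition-box">'
        f"<p><strong>What is {term}?</strong> {definition}</p></div>\n"
    )
    return box + post_html
```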

    Sprint Packages

    Package            Posts covered   Turnaround
    Starter Sprint     10 posts        5 business days
    Standard Sprint    25 posts        10 business days
    Full Site Sprint   50 posts        15 business days

    Posts are selected collaboratively — we prioritize by traffic volume, keyword proximity to featured snippet triggers, and entity coverage gaps.

    What You Get vs. DIY vs. Generic SEO Agency

                                                  Tygart Media Sprint   DIY      Generic SEO Agency
    FAQPage JSON-LD schema on every post          ✓                     Maybe    Sometimes
    AI citation signals (LLMS.TXT, speakable)     ✓
    Entity saturation for niche-specific bodies   ✓                              Rarely
    Direct publish to WordPress via REST API      ✓                     N/A      You review drafts
    Validated with Google Rich Results Test       ✓                     Maybe    Sometimes
    Proven in AI-heavy verticals                  ✓

    Ready to Get Your Existing Content Into AI-Generated Answers?

    Send your site URL and a rough post count. We’ll identify your best 10 candidates for AEO/GEO retrofit and quote the sprint that makes sense.

    will@tygartmedia.com

    Email only. No sales call required. No commitment to reply.

    Frequently Asked Questions

    Will this change my existing post content significantly?

    We add structured elements (definition boxes, FAQ sections, schema) and restructure key headings — we don’t rewrite the body of your posts. Your voice and factual content remain intact. All changes are reviewed before publish if requested.

    How quickly will I see results in featured snippets or AI answers?

    Google typically re-crawls optimized pages within 2–6 weeks for established sites. Featured snippet captures often appear within the first crawl cycle post-optimization. AI citation signals (Perplexity, ChatGPT) are slower — typically 1–3 months for recognition.

    Which verticals have you run this in?

    Property damage restoration, luxury asset lending, commercial flooring, B2B SaaS, healthcare services, comedy and entertainment streaming, and event technology. The entity patterns differ by vertical — we adapt the sprint to the specific certification bodies, standards organizations, and named entities that matter in your niche.

    Do I need to give you WordPress admin access?

    We use WordPress Application Passwords — a scoped credential that doesn’t expose your admin password. You create it, share it, and revoke it after the sprint. We publish directly via WordPress REST API.

    What if my site uses Elementor or another page builder on posts?

    We specifically target WordPress posts (not pages) via the REST API content field — Elementor and page builder data on pages is never touched. This is a hard operational rule we enforce on every sprint.

    Can I pick which posts get the sprint treatment?

    Yes. We provide a prioritized recommendation list, but you make the final call on which posts are included.

    Last updated: April 2026

  • GCP Content Pipeline Setup for AI-Native WordPress Publishers

    What Is a GCP Content Pipeline?
    A GCP Content Pipeline is a Google Cloud-hosted infrastructure stack that connects Claude AI to your WordPress sites — bypassing rate limits, WAF blocks, and IP restrictions — and automates content publishing, image generation, and knowledge storage at scale. It’s the back-end that lets a one-person operation run like a 10-person content team.

    Most content agencies are running Claude in a browser tab and copy-pasting into WordPress. That works until you’re managing 5 sites, 20 posts a week, and a client who needs 200 articles in 30 days.

    We run 122+ Cloud Run services across a single GCP project. WordPress REST API calls route through a proxy that handles authentication, IP allowlisting, and retry logic automatically. Imagen 4 generates featured images with IPTC metadata injected before upload. A BigQuery knowledge ledger stores 925 embedded content chunks for persistent AI memory across sessions.

    We’ve now productized this infrastructure so you can skip the 18 months it took us to build it.

    Who This Is For

    Content agencies, SEO publishers, and AI-native operators running multiple WordPress sites who need content velocity that exceeds what a human-in-the-loop browser session can deliver. If you’re publishing fewer than 20 posts a week across fewer than 3 sites, you probably don’t need this yet. If you’re above that threshold and still doing it manually — you’re leaving serious capacity on the table.

    What We Build

    • WP Proxy (Cloud Run) — Single authenticated gateway to all your WordPress sites. Handles Basic auth, app passwords, WAF bypass, and retry logic. One endpoint to rule all sites.
    • Claude AI Publisher — Cloud Run service that accepts article briefs, calls Claude API, optimizes for SEO/AEO/GEO, and publishes directly to WordPress REST API. Fully automated brief-to-publish.
    • Imagen 4 Proxy — GCP Vertex AI image generation endpoint. Accepts prompts, returns WebP images with IPTC/XMP metadata injected, uploads to WordPress media library. Four-tier quality routing: Fast → Standard → Ultra → Flagship.
    • BigQuery Knowledge Ledger — Persistent AI memory layer. Content chunks embedded via Vertex AI text-embedding-005, stored in BigQuery, queryable across sessions. Ends the “start from scratch” problem every time a new Claude session opens.
    • Batch API Router — Routes non-time-sensitive jobs (taxonomy, schema, meta cleanup) to Anthropic Batch API at 50% cost. Routes real-time jobs to standard API. Automatic tier selection.
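    The routing decision itself is simple. A sketch with illustrative thresholds (the batchable job set and the 24-hour cutoff are assumptions, not the production config):

```python
# Job types we treat as batchable -- illustrative, not the production set.
BATCHABLE = {"taxonomy", "schema", "meta_cleanup"}

def route_job(job_type: str, deadline_hours: float) -> str:
    """Pick the API tier for a job: the Batch API (roughly half cost,
    turnaround up to 24h) for non-urgent bulk work, standard otherwise."""
    if job_type in BATCHABLE and deadline_hours >= 24:
        return "batch"
    return "standard"
```

    The same shape extends to model tier selection: cheap jobs go to the smallest model that can handle them.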

    What You Get vs. DIY vs. n8n/Zapier

                                        Tygart Media GCP Build   DIY from scratch    No-code automation (n8n/Zapier)
    WordPress WAF bypass built in       ✓                        You figure it out
    Imagen 4 image generation           ✓
    BigQuery persistent AI memory       ✓
    Anthropic Batch API cost routing    ✓
    Claude model tier routing           ✓
    Proven at 20+ posts/day             ✓                        Unknown

    What We Deliver

    Included:
    WP Proxy Cloud Run service deployed to your GCP project
    Claude AI Publisher Cloud Run service
    Imagen 4 proxy with IPTC injection
    BigQuery knowledge ledger (schema + initial seed)
    Batch API routing logic
    Model tier routing configuration (Haiku/Sonnet/Opus)
    Site credential registry for all your WordPress sites
    Technical walkthrough + handoff documentation
    30-day async support

    Prerequisites

    You need: a Google Cloud account (we can help set one up), at least one WordPress site with REST API enabled, and an Anthropic API key. Vertex AI access (for Imagen 4) requires a brief GCP onboarding — we walk you through it.

    Ready to Stop Copy-Pasting Into WordPress?

    Tell us how many sites you’re managing, your current publishing volume, and where the friction is. We’ll tell you exactly which services to build first.

    will@tygartmedia.com

    Email only. No sales call required. No commitment to reply.

    Frequently Asked Questions

    Do I need to know how to use Google Cloud?

    No. We build and deploy everything. You’ll need a GCP account and billing enabled — we handle the rest and document every service so you can maintain it independently.

    How is this different from using Claude directly in a browser?

    Browser sessions have no memory, no automation, no direct WordPress integration, and no cost optimization. This infrastructure runs asynchronously, publishes directly to WordPress via REST API, stores content history in BigQuery, and routes jobs to the cheapest model tier that can handle the task.
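
    The routing decision described here — Batch API for non-urgent housekeeping, standard API otherwise, with the model tier scaled to the job — can be sketched as below. The model names are real Claude tiers, but the job kinds and cutoffs are illustrative assumptions, not the production configuration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Job:
        kind: str        # e.g. "taxonomy", "schema", "article", "variant"
        urgent: bool     # does someone need the result this session?

    # Illustrative set of "bulk housekeeping" job kinds; the real list
    # is configuration, not code.
    BULK_KINDS = {"taxonomy", "schema", "meta_cleanup"}

    def route(job: Job) -> dict:
        """Pick an API lane and model tier for a job.

        Non-urgent housekeeping goes to the Batch API (roughly half the
        per-token cost); everything else uses the standard API, with the
        tier matched to how demanding the job kind is.
        """
        lane = "batch" if (not job.urgent and job.kind in BULK_KINDS) else "standard"
        if job.kind in BULK_KINDS:
            tier = "haiku"       # cheapest tier for mechanical cleanup
        elif job.kind == "article":
            tier = "opus"        # flagship tier for long-form drafting
        else:
            tier = "sonnet"      # middle tier for variants and rewrites
        return {"lane": lane, "tier": tier}
    ```

    The point of centralizing this in one router is that cost optimization becomes a property of the infrastructure, not a decision each operator has to remember per job.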

    Which WordPress hosting providers does the proxy support?

    We’ve tested and configured routing for WP Engine, Flywheel, SiteGround, Cloudflare-protected sites, Apache/ModSecurity servers, and GCP Compute Engine. Most hosting environments work out of the box — a handful need custom WAF bypass headers, which we configure per-site.
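
    "Configured per-site" in practice means a registry of header overrides keyed by host. The sketch below is purely illustrative — the hosts and header values are placeholders, not an actual bypass recipe for any provider:

    ```python
    # Hypothetical per-site registry: which extra request headers each
    # host needs. Hosts and values here are placeholders.
    SITE_REGISTRY = {
        "example-wpe.com": {"headers": {"User-Agent": "PublisherProxy/1.0"}},
        "example-cf.com":  {"headers": {"User-Agent": "PublisherProxy/1.0",
                                        "X-Custom-Auth": "per-site-token"}},
    }

    def headers_for(host: str, base: dict | None = None) -> dict:
        """Merge default request headers with any per-site overrides."""
        merged = dict(base or {"Accept": "application/json"})
        merged.update(SITE_REGISTRY.get(host, {}).get("headers", {}))
        return merged
    ```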

    What does the BigQuery knowledge ledger actually do?

    It stores content chunks (articles, SOPs, client notes, research) as vector embeddings. When you start a new AI session, you query the ledger instead of re-pasting context. Your AI assistant starts with history, not a blank slate.
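
    The retrieval step reduces to nearest-neighbor search over embeddings. In production this runs as a vector query against BigQuery, but the core operation is just cosine similarity — sketched here in plain Python over rows already fetched from the ledger (the `(text, embedding)` row shape is an assumption for illustration):

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def top_chunks(query_vec, rows, k=3):
        """rows: (chunk_text, embedding) pairs pulled from the ledger.
        Returns the k chunk texts most similar to the query embedding."""
        scored = sorted(rows, key=lambda r: cosine(query_vec, r[1]),
                        reverse=True)
        return [text for text, _ in scored[:k]]
    ```

    The returned chunks get pasted (or piped) into the new session as context, which is what turns "blank slate" into "starts with history."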

    What’s the ongoing GCP cost?

    Highly variable by volume. For a 10-site agency publishing 50 posts/week with image generation, expect $50–$200/month in GCP costs. Cloud Run scales to zero when idle, so you’re not paying for downtime.

    Can this be expanded after initial setup?

    Yes — the architecture is modular. Each Cloud Run service is independent. We can add newsroom services, variant engines, social publishing pipelines, or site-specific publishers on top of the core stack.

    Last updated: April 2026

  • Notion Second Brain Setup for Agency Owners and AI-Native Operators

    What Is a Notion Second Brain Setup?
    A Notion Second Brain is a structured personal knowledge operating system — not a template dump, but a living architecture that captures decisions, organizes projects, tracks clients, and gives you (and your AI) persistent operational context. Built right, it becomes the intelligence layer between your brain and your tools.

    Most Notion setups look impressive for three weeks and collapse by month two. The problem isn’t Notion — it’s that generic templates aren’t built around how you actually work.

    We built our own from scratch. It runs a multi-client agency, integrates directly with Claude AI, maintains operational memory across sessions, and has been stress-tested across content operations at scale. We’ve now productized it so you don’t have to rebuild what we already broke and fixed.

    Who This Is For

    Agency owners, fractional executives, solo operators, and founders who are drowning in browser tabs, scattered notes, and tools that don’t talk to each other. If you’re running more than 3 clients or 5 active projects and your “system” is a mix of sticky notes, Slack threads, and half-finished Notion pages — this is for you.

    What the 6-Database Command Center Architecture Delivers

    • Command Center Hub — One master dashboard linking every active project, client, and initiative with live status
    • Client & Project Database — Structured client records, deliverable tracking, and project timelines in one view
    • Content Pipeline — Brief-to-publish workflow with status stages, site assignment, and AI output staging
    • Knowledge Lab — Permanent storage for research, SOPs, skill documentation, and reference material
    • Operations Ledger — Decision log, session history, and change records so nothing gets lost
    • Task Triage Board — Priority-ranked action queue pulling from every database in the system

    The claude_delta Standard (What Makes This Different)

    Every page in this system includes a claude_delta v1.0 metadata block — a structured JSON header that gives Claude AI immediate operational context when you paste a page into a session. No re-explaining. No re-briefing. Claude reads the block and knows what it’s looking at.

    This is not something you’ll find in an Etsy template. It’s the result of running a real AI-native agency operation and discovering what actually breaks when your context window expires.
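
    For a sense of shape only: a claude_delta-style header might look like the block below. The actual v1.0 field set is part of the productized build, so every key and value here is a hypothetical stand-in, not the published standard:

    ```python
    import json

    # Hypothetical claude_delta-style metadata block. Field names and
    # values are illustrative guesses, not the real v1.0 spec.
    page_meta = {
        "claude_delta": "1.0",
        "page_type": "client_record",
        "client": "Example Client Co.",
        "status": "active",
        "related_dbs": ["Content Pipeline", "Operations Ledger"],
    }

    header = json.dumps(page_meta, indent=2)  # pasted at the top of the page
    ```

    Whatever the exact fields, the design principle is the same: the block tells the model what kind of page it is reading and how it relates to the rest of the system, before any prose.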

    What We Deliver

    • Full 6-database architecture setup in your Notion workspace
    • claude_delta metadata standard applied to all key pages
    • Claude AI integration guide (how to use your Second Brain in sessions)
    • 3 custom views per database (board, table, calendar)
    • SOP templates for your top 5 recurring workflows
    • 1-hour architecture walkthrough call
    • 30-day async support for questions and adjustments

    What You Get vs. DIY vs. Generic Agency

    Feature                              Tygart Media Setup   DIY (YouTube tutorials)   Generic Notion Consultant
    Built around AI-native workflows     ✓                    ✗                         ✗
    claude_delta AI context standard     ✓                    ✗                         ✗
    Multi-client agency architecture     ✓                    ✗                         Sometimes
    Ongoing async support                ✓                    ✗                         Extra cost
    Proven under real operational load   ✓                    Unknown                   Unknown

    Ready to Stop Rebuilding Your System Every 90 Days?

    Send a note describing your current setup (or lack of one) and what you’re trying to manage. We’ll tell you if this is the right fit.

    will@tygartmedia.com

    Email only. No sales call required. No commitment to reply.

    Frequently Asked Questions

    Do I need to already use Notion?

    You need a Notion account (free works for setup, Team plan recommended for ongoing use). No prior Notion experience required — we build it around your workflows, not the other way around.

    How long does setup take?

    The architecture is built within 5 business days. The walkthrough call is scheduled in week two. Adjustments and SOP templates are completed within 30 days.

    What if I already have a Notion setup I’ve been using?

    We can audit your existing structure and either retrofit the 6-database architecture into it or rebuild cleanly. We’ll recommend one or the other after reviewing your current setup.

    Is this just a template I download?

    No. This is a custom build in your workspace. We configure databases, relations, views, formulas, and the claude_delta metadata standard to match your actual operation — clients, projects, workflows, and all.

    What industries is this built for?

    Originally built for a content and SEO agency. The architecture works for any service business running multiple clients, projects, or revenue streams simultaneously. Consultants, fractional CMOs, boutique agencies, and solo operators with complex operations are the best fit.

    Does this work with Claude, ChatGPT, or other AI tools?

    The claude_delta standard was designed for Claude. The architecture works with any AI tool — the metadata blocks and structured content make any LLM more effective when you paste pages into sessions. Claude integration is deepest out of the box.

    Last updated: April 2026