Author: will_tygart

  • The Data Layer Most SEO Consultants Don't Touch and Why Your Clients Need Someone Who Does — Visual

  • What Search Means Now: A Practical Guide for Freelance SEO Consultants Navigating the AI Shift — Visual

  • The Middleware Manifesto: Why the Best Search Operations Are Built in Layers, Not Silos — Visual

  • We Tested Google Flow for Brand Asset Production — Visual

  • The SaaS Illusion Is Cracking: Why Custom Apps Now Cost Less Than Your Software Stack — Visual

  • The Loop Has to Go Both Ways — Visual

  • Split Brain Architecture: How One Person Manages 27 WordPress Sites Without an Agency — Visual

  • Stop Building Inventory. Build the Machine.

    Just-in-time knowledge manufacturing is an operational model where content, services, and deliverables are assembled on demand from a growing base of raw capabilities — knowledge systems, API connections, AI pipelines, and structured data — rather than pre-built and warehoused. Nothing sits on a shelf. Everything is fabricated at the moment of need.

    There’s a version of running an agency where you spend your weekends batch-producing blog posts, pre-writing email sequences, and stockpiling social content in a spreadsheet. You build the inventory, shelve it, and pray it’s still relevant when you finally schedule it out three weeks later.

    I spent years in that model. It doesn’t scale. It doesn’t adapt. And the moment a client’s market shifts or a Google update lands, half your shelf is stale.

    What I’ve been building instead — quietly, over the last year — is something different. Not a content warehouse. A content machine. One where nothing is pre-built, but everything can be built. On demand. At speed. With quality that compounds instead of decays.

    The Ingredients Are Not the Product

    Here’s the mental model that changed everything: stop thinking about what you produce. Start thinking about what you can draw from.

    Right now, the Tygart Media operating system has ingredients scattered across five layers. A Notion workspace with six databases tracking every client, every task, every piece of knowledge ever captured. A BigQuery data warehouse with 925 embedded knowledge chunks and vector search. 27 WordPress sites with over 6,800 published posts — each one a node in a knowledge graph that gets smarter every time something new is published. A GCP compute cluster running Claude Code with direct access to every site’s database. And 40+ Claude skills that know how to do everything from SEO audits to image generation to taxonomy fixes to competitive pivots.

    None of those ingredients are a finished product. They’re flour, eggs, sugar, and a well-calibrated oven. The product is whatever someone orders.
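
    As a toy illustration of the retrieval side of that stack — the real version runs vector search in BigQuery over the 925 embedded chunks, while the vectors and chunk texts below are invented for the sketch — ranking knowledge by cosine similarity to a query embedding looks roughly like this:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query_vec, chunks, k=3):
    """Rank stored knowledge chunks by similarity to the query embedding."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["embedding"]), reverse=True)
    return ranked[:k]

# Toy stand-in for the embedded knowledge base (3 chunks instead of 925).
knowledge = [
    {"text": "Twin Cities watershed restoration data", "embedding": [0.9, 0.1, 0.0]},
    {"text": "WordPress taxonomy cleanup notes",       "embedding": [0.1, 0.8, 0.2]},
    {"text": "Competitor keyword gap analysis",        "embedding": [0.2, 0.3, 0.9]},
]

hits = top_chunks([0.88, 0.15, 0.05], knowledge, k=1)
```

    The point of the sketch is the access pattern: the chunks sit as raw material, and a query pulls exactly the ones it needs at the moment of need.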

    How It Actually Works

    A client needs 20 hyper-local articles grounded in real watershed data for Twin Cities restoration searches. The machine doesn’t pull from a shelf. It reaches for the content brief builder, the adaptive variant pipeline, the DataForSEO keyword intelligence layer, the WordPress REST API publisher, and the IPTC metadata injection system. Those ingredients combine — differently every time — to produce exactly what’s needed. Not approximately. Exactly.

    Someone wants featured images across 50 articles? The machine reaches for Vertex AI Imagen, the WebP converter, the XMP metadata injector, and the WordPress media uploader. One script. Every image generated, optimized, metadata-enriched, and published in under a minute each.

    The ingredients are the same. The output is infinitely variable.
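
    That "same ingredients, variable output" pattern can be sketched as a capability registry that each request composes on demand. The capability names and payload fields here are hypothetical stand-ins, not the actual Tygart Media pipeline:

```python
# Hypothetical sketch: each ingredient is a small function, and a request
# names the ones it needs, in order. Nothing is pre-built or warehoused.
CAPABILITIES = {
    "brief":    lambda job: {**job, "brief": f"content brief for {job['topic']}"},
    "keywords": lambda job: {**job, "keywords": ["watershed restoration", "twin cities"]},
    "publish":  lambda job: {**job, "status": "published"},
}

def fulfill(request: dict, steps: list) -> dict:
    """Run a request through the named capabilities, combining them
    differently for every order that comes in."""
    job = dict(request)
    for name in steps:
        job = CAPABILITIES[name](job)
    return job

article = fulfill({"topic": "Twin Cities watershed"}, ["brief", "keywords", "publish"])
```

    A different request simply names a different sequence of capabilities; the registry itself never changes shape.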

    Why Inventory Thinking Fails at Scale

    The inventory model has a ceiling built into it. You can only pre-build as fast as one human can think, write, and publish. Every hour spent building inventory is an hour not spent improving the machine. And inventory decays — content ages, data goes stale, market conditions shift.

    The machine model inverts this. Every hour spent improving a skill, connecting an API, or enriching the knowledge base makes everything that comes after it better. The 20th article is better than the first — not because you practiced writing, but because the knowledge graph is 20 nodes richer, the internal linking map is denser, and the content brief builder has more competitive intelligence to draw from.

    This is the flywheel. The ingredients improve by being used.
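
    A minimal sketch of that flywheel, under the simplifying assumption that each new article links to every earlier article sharing a tag (the linking rule and slugs are illustrative):

```python
class KnowledgeGraph:
    """Toy graph: every published article becomes a node, and each new
    article gains internal links to earlier articles that share a tag,
    so the graph is denser for every publish that follows."""

    def __init__(self):
        self.nodes = {}   # slug -> set of tags
        self.edges = []   # (new_slug, existing_slug) internal-link pairs

    def publish(self, slug, tags):
        for existing, existing_tags in self.nodes.items():
            if set(tags) & existing_tags:
                self.edges.append((slug, existing))
        self.nodes[slug] = set(tags)

g = KnowledgeGraph()
g.publish("article-1", ["seo"])
g.publish("article-2", ["seo", "local"])
g.publish("article-3", ["local"])
```

    After three publishes the graph holds three nodes and two internal links; the twentieth article would land in a far denser neighborhood than the first.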

    The Three-Tier Architecture

    The machine runs on three layers, each with a specific job.

    The first layer is the strategist — a live AI session that can reach out to any API, generate images with Vertex AI, publish to any WordPress site, query BigQuery, log to Notion, and compose social media drafts. It handles anything that involves calling an API or making a decision. It forgets between sessions, but carries the important context forward through a persistent memory system.

    The second layer is the field operator — a browser-based AI that can navigate any web interface, click through dashboards, type into terminals, and visually inspect what’s happening. It handles anything that requires a browser. GCP Console, DNS management, quota requests, visual QA.

    The third layer is the persistent worker — an AI that lives on the server itself, with direct access to every WordPress database, every file, every log. It doesn’t forget between sessions. It handles heavy operations that need to survive beyond a single conversation: bulk migrations, cross-site audits, scheduled content generation.

    Three layers. Three different tools. One machine.
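
    The dispatch logic between the three layers reduces to a small router. This is a sketch, not the actual orchestration code, and the task fields ("needs_browser", "long_running") are invented for illustration:

```python
# Hypothetical router: send each task to the layer that can handle it.
def route(task: dict) -> str:
    """Browser work goes to the field operator, jobs that must outlive
    a single session go to the persistent worker, and API calls and
    decisions go to the strategist."""
    if task.get("needs_browser"):
        return "field_operator"
    if task.get("long_running"):
        return "persistent_worker"
    return "strategist"

assignments = {
    "visual QA in GCP Console": route({"needs_browser": True}),
    "bulk cross-site migration": route({"long_running": True}),
    "draft a social post":       route({}),
}
```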

    The Knowledge Compounds

    The part that most people miss about this model is the compounding effect. Every article published adds a node to the knowledge graph. Every SEO audit enriches the competitive intelligence layer. Every client conversation captured in Notion becomes a retrievable insight for the next brief. Every image generated trains the prompt library. Every taxonomy fix improves the next site’s information architecture.

    Nothing is wasted. Nothing sits idle. Every output becomes an input for the next request.

    This is why I stopped building inventory. The machine doesn’t need a warehouse. It needs raw materials, good pipes, and someone who knows which valve to turn.

    What This Means for Clients

    For the businesses we serve, this model means three things. First, speed — when you need content, you don’t wait for a writer to start from scratch. The machine draws from existing knowledge, existing competitive intelligence, and existing site architecture to produce faster and with more context than any human starting cold. Second, relevance — nothing is pre-written three weeks ago and scheduled for a date that may no longer make sense. Everything is built for right now, with right now’s data. Third, compounding quality — the 50th article on your site benefits from everything the first 49 taught the machine about your industry, your competitors, and your audience.

    No back stock. No stale inventory. Just a machine that gets better every time someone needs something.

    Frequently Asked Questions

    What is just-in-time content manufacturing?

    Just-in-time content manufacturing is an operational model where articles, images, and digital assets are assembled on demand from a growing base of knowledge systems, AI pipelines, and API connections — rather than pre-built and stored as inventory. Each deliverable is fabricated at the moment of need using the best available data and intelligence.

    How does a content machine differ from a content calendar?

    A content calendar pre-schedules fixed deliverables weeks in advance. A content machine maintains the ingredients and capabilities to produce any deliverable on demand. The calendar is rigid and decays; the machine is adaptive and compounds in quality over time as its knowledge base grows.

    What technologies power a just-in-time content system?

    A typical stack includes AI language models for content generation, vector databases for knowledge retrieval, WordPress REST APIs for publishing, image generation models for visual assets, and a project management layer like Notion for orchestration. The key is that these components are connected via APIs so they can be combined dynamically for any request.

    Does just-in-time content sacrifice quality for speed?

    The opposite. Because each piece draws from a growing knowledge base, competitive intelligence layer, and established site architecture, the quality compounds over time. The 50th article benefits from everything the first 49 taught the system. Pre-built inventory, by contrast, starts decaying the moment it’s created.

  • The Prompt Show: What Happens When the Audience Writes the Set

    Stand-up comedy has always been a broadcast. One person walks on stage with a set they’ve rehearsed in the mirror, in the car, in smaller rooms, and they deliver it to a crowd that showed up to receive. The audience laughs or they don’t. The comedian adjusts. But the fundamental architecture hasn’t changed since vaudeville: one person talks, everyone else listens.

    I want to break that.

    A Format Without a Set List

    Picture this. A comedian — or maybe we stop calling them that — signs up for a show. They have no material prepared. No bits. No callbacks. Nothing rehearsed. They walk out to a mic and a stool, and the only thing they bring is themselves.

    The audience brings everything else.

    Think Phil Donahue, not open mic night. The room is full of people who came with questions. Real questions. Some researched. Some spontaneous. Some designed to get a laugh, sure. But the best ones — the ones that make this format transcend — are the ones where somebody in the audience actually did their homework.

    Human Prompting

    Here’s where it gets interesting. Before the show, the audience gets access to information about the person behind the mic. Their hometown. Their college. Their favorite team. The job they had before comedy. The thing they lost. The thing they built. Whatever the performer is willing to put on the table.

    And the audience uses that information to craft questions.

    This is human prompting. The same principle that makes a great AI query — specificity, context, emotional intelligence, knowing what to ask and how to ask it — applied to a live human being standing under a spotlight. The audience becomes the prompt engineer. The performer becomes the model. And what comes back isn’t a rehearsed bit. It’s a story that has never been told on stage before, delivered raw, in real time, with the kind of energy you only get when someone is genuinely surprised by what they’re being asked.

    Three Modes, One Show

    The format has natural variation built in. You can run all three modes in a single evening, like acts in a play:

    Mode 1: Curated. Questions are submitted ahead of time and the best ones are selected by a producer or host. This gives the show a high floor — every question has been vetted for depth, creativity, or emotional potential. The performer still doesn’t know what’s coming, but the audience has been filtered for quality.

    Mode 2: Host-Selected. The host reads the room, sees hands go up, and picks. There’s a middle layer of curation happening in real time. The host becomes a DJ of human curiosity — reading energy, sequencing moments, knowing when to go deep and when to go light.

    Mode 3: Completely Random. Names drawn from a hat. Seat numbers called. No filter. This is the highest-risk, highest-reward mode. You might get someone who asks where the performer went to high school. You might get someone who asks about the worst night of their life. The unpredictability is the product.

    Why This Works Now

    We live in an era where everyone understands prompting, even if they don’t use that word. Every person who has typed a question into ChatGPT, refined a search query, or figured out how to ask Siri something useful has been training the muscle that this format requires. The audience already knows, instinctively, that the quality of the answer depends on the quality of the question.

    And we’re starving for unscripted humanity. Podcasts exploded because people wanted real conversation. Reality TV keeps mutating because people want to watch humans be human. But both of those formats have editing, production, post-processing. The Prompt Show has none of that. It’s one person, responding to a stranger’s curiosity, with nowhere to hide.

    The Performer Isn’t a Comedian Anymore

    This is the part that matters most. The person on stage doesn’t need to be funny. They need to be honest. They need to be present. They need to have lived a life worth asking about and be willing to talk about it without a script.

    Comedians are naturals for this because they already know how to hold a room. But this format is bigger than comedy. It’s a storyteller on a stool. It’s a retired firefighter. It’s a first-generation immigrant. It’s anyone whose life contains stories that only come out when the right question is asked by someone who cared enough to think about it.

    The magic isn’t in the answer. The magic is in the space between the question and the answer — that half-second where the performer realizes nobody has ever asked them that before, and they have to figure out, live, in front of a room full of strangers, what the truth actually is.

    What Makes a Good Prompter

    Not every question lands. The person who tries to stump the performer, who wants a gotcha moment, who treats this like a roast — they’ll get a laugh, maybe, but they won’t get a story. The audience will learn quickly that the best moments come from the person who spent fifteen minutes reading the performer’s bio and thought: I wonder what it was like to leave that town. I wonder if they ever went back.

    The best prompters are the ones who ask the question the performer didn’t know they needed to answer.

    This Is Live Poetry

    Call it what you want. A prompt show. A story pull. A human query. Whatever the name, the format is the same: give people a reason to be curious about another human being, give that human being a microphone and no script, and get out of the way.

    The best comedy has always been the truth told at the right speed. This format just lets the audience decide which truth, and when.


  • We Tested Google Flow for Brand Asset Production — Here’s What Actually Works

    The Question Every Agency Is Asking

    If you run a content operation that serves multiple brands, you’ve probably looked at Google Flow and thought: could this actually replace part of our design pipeline? The image generation is impressive. The iteration feature — where you refine an image through successive prompts — is genuinely useful. But the question that matters for agency work isn’t “can it make pretty pictures.” It’s: can it maintain brand consistency across a production run?

    We spent a morning running controlled experiments to find out. The results reshape how we think about AI image generation for client work.

    What We Tested

    We created a fictional coffee brand (“Summit Brew Coffee Company”) with a distinctive mountain-and-coffee-cup logo in black and gold. Then we pushed Flow’s iteration system through three scenarios that mirror real agency workflows:

    Scenario 1: Brand persistence across applications. We took the logo from flat design → product mockup → merchandise collection → outdoor lifestyle shoot. Seven total iterations, each changing the context dramatically while asking the model to maintain the brand.

    Scenario 2: Element burn-in. We deliberately introduced a red baseball cap, iterated with it for three consecutive generations, then tried to remove it. This simulates the common problem of “I showed the client a concept with X, they don’t want X anymore, but the AI keeps putting X back in.”

    Scenario 3: Chain isolation. We started a completely separate iteration chain from a different logo variant within the same project. Does history from Chain A bleed into Chain B?

    The Three Findings That Change Our Workflow

    1. Brand Fidelity Is Surprisingly High — 9/10 Across 7 Iterations

    The Summit Brew mountain icon, typography, and gold/black color scheme maintained recognizable consistency from flat logo all the way through to an outdoor campsite product shoot. Minor proportion drift in the icon (maybe 10%), but the brand was immediately identifiable in every single output. For mockup and concept work, this is production-ready fidelity.

    2. Nothing Burns In Before 3 Iterations — Probably Closer to 5-8

    The baseball cap was cleanly removable after appearing in three consecutive iterations. Both the cap and a coffee mug were stripped out with a single well-crafted removal prompt. This is huge for agency work — it means you can explore directions with clients, change your mind, and the AI will cooperate. The key is using explicit positive framing (“show ONLY the bag”) alongside negative instructions (“no hat, no cap”).
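
    That positive-plus-negative framing can be captured in a small prompt-builder helper. The wording template below is ours, not a Flow API — a sketch of the pattern that worked in testing:

```python
def removal_prompt(keep: str, remove: list) -> str:
    """Pair explicit positive framing ("show ONLY ...") with negative
    instructions ("no ..."), the combination that cleanly stripped
    burned-in elements in our tests."""
    negatives = ", ".join(f"no {item}" for item in remove)
    return f"Show ONLY the {keep}. Remove everything else: {negatives}."

prompt = removal_prompt("coffee bag", ["hat", "cap", "mug"])
```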

    3. Iteration Chains Are Completely Isolated

    This is the most operationally significant finding. Chain B had zero contamination from Chain A. No red caps, no coffee mugs, no campsite. The logo style from Chain B’s source image was preserved perfectly. Each image in your project grid has its own independent memory. The project is just an organizational container.

    The Operational Playbook We’re Now Using

    Based on these findings, here’s the workflow we’ve adopted for client brand asset production:

    Step 1: Generate your anchor asset. Create the logo or hero image. Generate 4 variants, pick the best one.

    Step 2: Keep chains short. 3-5 iterations maximum per chain. At this depth, everything remains controllable.

    Step 3: Branch for each application. Logo → product mockup is one chain. Logo → social media banner is a new chain. Logo → billboard is a new chain. The isolation means each application gets a clean start with no baggage.

    Step 4: Use Ingredients for cross-chain consistency. Flow’s @ referencing system lets you lock a brand asset as a reusable Ingredient. This is your AI brand guide — reference it in every new chain to maintain identity.

    Step 5: Never fight the model past 5 iterations. If artifacts are persisting despite removal prompts, don’t iterate further. Save your best output, start a fresh chain from it, and you’ll have a clean slate.
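
    Steps 2 and 5 together reduce to a simple depth guard; the threshold matches the playbook, while the function name and return labels are illustrative:

```python
MAX_CHAIN_DEPTH = 5  # playbook rule: never fight the model past 5 iterations

def next_action(chain_depth: int, artifacts_persist: bool) -> str:
    """Keep iterating while the chain is shallow and clean; otherwise
    save the best output and branch a fresh chain from it."""
    if chain_depth >= MAX_CHAIN_DEPTH or artifacts_persist:
        return "start_fresh_chain"
    return "iterate_in_place"

actions = [next_action(2, False), next_action(5, False), next_action(3, True)]
```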

    What This Means for Agency Economics

    Image generation in Flow is free (0 credits for Nano Banana 2). The iteration system is fast (20-30 seconds per batch of 4). And the brand consistency is high enough for mockup, concept, and internal review work. This doesn’t replace a senior designer for final deliverables, but it compresses the concepting and iteration phase from hours to minutes.

    For agencies managing 10+ brands, the combination of chain isolation and Ingredient locking means you can run parallel brand pipelines without any risk of cross-contamination. That’s a workflow that didn’t exist six months ago.

    The full technical white paper with detailed methodology is available upon request.

    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "We Tested Google Flow for Brand Asset Production — Here's What Actually Works",
      "description": "We ran controlled experiments on Google Flow's iteration system to answer the question every agency needs answered: can AI maintain brand consistency across a production run?",
      "datePublished": "2026-04-03",
      "dateModified": "2026-04-03",
      "author": {
        "@type": "Person",
        "name": "Will Tygart",
        "url": "https://tygartmedia.com/about"
      },
      "publisher": {
        "@type": "Organization",
        "name": "Tygart Media",
        "url": "https://tygartmedia.com",
        "logo": {
          "@type": "ImageObject",
          "url": "https://tygartmedia.com/wp-content/uploads/tygart-media-logo.png"
        }
      },
      "mainEntityOfPage": {
        "@type": "WebPage",
        "@id": "https://tygartmedia.com/google-flow-brand-asset-production-testing/"
      }
    }