Tag: Agency Growth

  • Notion SOP System: How We Document Everything Across Multiple Business Lines

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    Most SOP systems fail not because the SOPs are bad but because nobody can find them when they need them. They live in a Google Doc that was shared once, in a Notion page buried three levels deep, or in someone’s head because the written version was never kept current. The system exists on paper and nowhere else.

    We run SOPs for every repeatable process across multiple business lines — content publishing workflows, client onboarding steps, quality control checks, platform-specific operating rules. All of it lives in Notion, structured so that a person or an AI can find the right SOP in seconds and trust that it reflects how the work actually gets done today.

    This is how that system is built.

    What is a Notion SOP system? A Notion SOP system is a structured collection of standard operating procedures stored in Notion, organized so they are findable by context, searchable by keyword, and maintainable without a dedicated document owner. Unlike a folder of static documents, a well-built Notion SOP system is a living knowledge base that updates as the operation evolves.

    Why Notion Works Well for SOPs

    SOPs need to be three things: findable, readable, and maintainable. Notion handles all three better than most alternatives.

    Findable: Notion’s database structure lets you tag SOPs by entity, process type, and status, then filter to find exactly what you need. A filtered view showing all active SOPs for a specific business line is one click. A search across the entire SOP library is instant.

    Readable: Notion’s page format supports the structure SOPs actually need — numbered steps, toggle blocks for detail, callout boxes for warnings, tables for decision logic. The reading experience is better than a Google Doc and far better than a shared spreadsheet.

    Maintainable: Because SOPs live in a database, you can see at a glance which ones haven’t been verified recently, which are marked as drafts, and which are flagged for review. The metadata makes maintenance auditable rather than aspirational.

    The SOP Database Structure

    Every SOP in our system is a record in a single database — the Knowledge Lab. It’s not a folder of pages. It’s a database where each SOP is a row with properties that make it queryable.

    The core properties on each SOP record:

    Doc Name — the title of the SOP, written as a plain description of what the procedure covers. “Content Pipeline — Publishing Sequence” not “Publishing SOP v3.”

    Type — whether this is an SOP, an architecture decision, a reference document, or a session log. SOPs are filtered separately from other knowledge types.

    Entity — which business line or client this SOP belongs to. Allows filtering to show only the SOPs relevant to the current context.

    Layer — what kind of decision this documents. Options: architecture-decision, operational-rule, client-specific, platform-specific. Helps distinguish “how we always do this” from “how we do this for this one client.”

    Status — evergreen, active, draft, deprecated. Evergreen SOPs are procedures that don’t change often and can be trusted as written. Active SOPs are current but may be evolving. Draft SOPs are being written or tested. Deprecated SOPs are kept for reference but no longer in use.

    Last Verified — the date the SOP was last confirmed to reflect current practice. Any SOP whose Last Verified date is more than 90 days old gets flagged for review in the weekly system health check.
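
    For anyone rebuilding this schema from scratch, here is a rough sketch of the same property set created through the Notion API with the official notion-client Python library. The token, parent page ID, and option lists are placeholders, not our production configuration.

    # Sketch: create a Knowledge Lab database with the properties described above.
    # Requires `pip install notion-client`; the token and parent page ID are placeholders.
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")

    notion.databases.create(
        parent={"type": "page_id", "page_id": "PARENT_PAGE_ID"},
        title=[{"type": "text", "text": {"content": "Knowledge Lab"}}],
        properties={
            "Doc Name": {"title": {}},
            "Type": {"select": {"options": [
                {"name": "SOP"}, {"name": "architecture decision"},
                {"name": "reference"}, {"name": "session log"},
            ]}},
            # Options for Entity are added as business lines are onboarded.
            "Entity": {"select": {"options": []}},
            "Layer": {"select": {"options": [
                {"name": "architecture-decision"}, {"name": "operational-rule"},
                {"name": "client-specific"}, {"name": "platform-specific"},
            ]}},
            "Status": {"select": {"options": [
                {"name": "evergreen"}, {"name": "active"},
                {"name": "draft"}, {"name": "deprecated"},
            ]}},
            "Last Verified": {"date": {}},
        },
    )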

    How SOPs Are Written

    The format matters as much as the content. An SOP that buries the key step in paragraph four will be ignored in favor of asking someone who knows. We follow a consistent structure for every SOP:

    One-line summary at the top. What this procedure is for and when to use it. Readable in five seconds.

    Trigger conditions. What situation prompts someone to follow this SOP. Specific enough that there’s no ambiguity about whether this is the right document.

    Numbered steps. One action per step. Steps that require judgment get a callout box explaining the decision logic. Steps that have common failure modes get a warning callout explaining what goes wrong and how to catch it.

    Hard rules section. Any non-negotiable constraints — things that are never done, always done, or require explicit sign-off before proceeding. These get their own section at the bottom so they’re easy to find without reading the full procedure.

    Last updated note. Who verified this and when. Simple accountability that makes the maintenance question answerable.

    The Machine-Readable Layer

    Every SOP in our system carries a JSON metadata block at the very top of the page — before any human-readable content. This block follows a consistent structure that makes the SOP readable not just by people but by Claude during a live session.

    The metadata block includes the page type, status, a two-to-three sentence summary of what the SOP covers, the entities it applies to, any dependencies on other SOPs or documents, and a resume instruction — a single sentence describing the most important thing to know before executing this procedure.

    In practice, this means Claude can fetch an SOP mid-session, read the metadata block, and understand the procedure’s constraints and intent without reading the full document. For a system running dozens of active SOPs, this makes the difference between Claude operating on institutional knowledge and Claude operating on guesswork.
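
    To make that concrete, here is a minimal sketch of how a script could pull just the metadata block from an SOP page through the Notion API. It assumes the block is the first code block on the page; the token and page ID are placeholders.

    # Sketch: read only the leading JSON metadata block from an SOP page.
    # Assumes the metadata is the first code block on the page; IDs are placeholders.
    import json
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")

    def read_sop_metadata(page_id):
        blocks = notion.blocks.children.list(block_id=page_id)["results"]
        for block in blocks:
            if block["type"] == "code":
                raw = "".join(rt["plain_text"] for rt in block["code"]["rich_text"])
                return json.loads(raw)
        raise ValueError("no metadata block found on this page")

    meta = read_sop_metadata("SOP_PAGE_ID")
    print(meta)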

    Finding the Right SOP in the Right Moment

    The best SOP system is one you actually use when you need it. That requires the right SOP to be findable in under thirty seconds — not after a search, three clicks, and a scan of an unfamiliar page structure.

    We solve this with two mechanisms. First, a master SOP index — a filtered database view showing all active and evergreen SOPs, sorted by entity and process type, with one-line summaries visible in the list view. Opening the index and scanning it takes fifteen seconds. Second, the Claude Context Index includes every SOP by title and summary, so Claude can surface the right one during a session without a manual search.

    Both mechanisms depend on the same underlying structure: consistent naming, accurate status tags, and current summaries. The index is only as good as the metadata behind it.
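
    The master index is built as a Notion view, but the filter logic behind it is simple. Here is a sketch of the equivalent API query, using a placeholder database ID and the property names described earlier; grouping and sorting are handled by the view itself.

    # Sketch: the master SOP index as an API query (every active or evergreen SOP).
    # Database ID and token are placeholders.
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")

    results = notion.databases.query(
        database_id="KNOWLEDGE_LAB_DB_ID",
        filter={"and": [
            {"property": "Type", "select": {"equals": "SOP"}},
            {"or": [
                {"property": "Status", "select": {"equals": "active"}},
                {"property": "Status", "select": {"equals": "evergreen"}},
            ]},
        ]},
    )["results"]

    for page in results:
        props = page["properties"]
        title = props["Doc Name"]["title"][0]["plain_text"]
        entity = (props["Entity"]["select"] or {}).get("name", "")
        print(f"{entity:25} {title}")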

    Keeping SOPs Current

    The maintenance problem is real. SOPs written accurately in January are often wrong by April — not because anyone changed them, but because the operation evolved and nobody updated the documentation.

    Our approach: the weekly system health review includes a check for any SOP with a Last Verified date more than 90 days old. Those get flagged for a five-minute review — read the procedure, compare it to how the work actually gets done, update if needed, reset the Last Verified date. Most reviews result in no changes. A few result in small updates. Occasionally one reveals a significant drift that needs a full rewrite.

    The 90-day cycle keeps the system from drifting too far before the problem is caught. It also makes SOP maintenance a predictable overhead rather than an occasional emergency project.
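
    The stale-SOP check is the easiest part of this to automate. A minimal sketch, assuming the Knowledge Lab properties described earlier and a placeholder database ID:

    # Sketch: weekly health check for SOPs not verified in the last 90 days.
    # Database ID and token are placeholders.
    from datetime import date, timedelta
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")
    cutoff = (date.today() - timedelta(days=90)).isoformat()

    stale = notion.databases.query(
        database_id="KNOWLEDGE_LAB_DB_ID",
        filter={"and": [
            {"property": "Type", "select": {"equals": "SOP"}},
            {"property": "Last Verified", "date": {"before": cutoff}},
        ]},
    )["results"]

    for page in stale:
        title = page["properties"]["Doc Name"]["title"][0]["plain_text"]
        print(f"Flag for review: {title}")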

    When a New SOP Gets Written

    Not every procedure needs an SOP. We write a new SOP when a procedure meets two criteria: it will be repeated more than three times, and getting it wrong has a real cost — either in time, quality, or client relationship.

    One-off tasks don’t get SOPs. Simple two-step procedures that any competent operator would handle correctly without documentation don’t get SOPs. The SOP library should be comprehensive but not exhaustive — a collection of genuinely useful reference documents, not a compliance exercise.

    When a new SOP is warranted, we write it immediately after the first time we execute the procedure correctly — while the steps are fresh and the edge cases are visible. SOPs written from memory weeks later are usually missing exactly the details that matter most.

    SOPs as Training Infrastructure

    A well-maintained SOP library has a secondary function beyond daily operations: it’s the training infrastructure for anyone new joining the operation, or for handing off work to an AI agent running a process for the first time.

    When a new person joins, the SOP library is the answer to “how do we do things here?” — not a shadowing exercise or an informal knowledge transfer, but a structured, searchable, current reference that covers the actual procedures. When Claude is tasked with executing a process it hasn’t run before, the SOP is what it reads first.

    This dual function is why the investment in documentation quality pays off beyond the obvious. The SOP isn’t just for today’s operation — it’s the institutional knowledge layer that makes the operation transferable, scalable, and less dependent on any one person’s memory.

    Want this built for your operation?

    We build Notion SOP systems and full Knowledge Lab architectures — structured, machine-readable, and maintained to actually stay current.

    Tygart Media runs this system across multiple business lines. We know what makes an SOP library useful versus aspirational.

    See what we build →

    Frequently Asked Questions

    How many SOPs does a small agency need?

    A small agency running five to fifteen active clients typically needs fifteen to forty SOPs covering the core operational procedures — onboarding, content production, quality control, client communication, platform-specific rules, and system maintenance. More than sixty SOPs in an operation of that size usually indicates over-documentation: procedures that don’t need to be written down are getting written down.

    What’s the difference between an SOP and a checklist in Notion?

    A checklist is a reminder of what to do. An SOP explains how to do it, why each step matters, what to do when something goes wrong, and what the non-negotiable constraints are. Checklists work well for simple procedures with no decision points. SOPs work well for procedures with judgment calls, common failure modes, or significant consequences if done incorrectly. Most operations need both.

    Should SOPs be pages or database records in Notion?

    Database records. A page is a standalone document with no queryable properties. A database record is a document with structured metadata — status, entity, type, last verified date — that makes it filterable, sortable, and auditable. The operational overhead of maintaining SOPs as database records rather than loose pages pays off quickly once you need to find all active SOPs for a specific context or identify which ones haven’t been reviewed recently.

    How do you prevent SOPs from becoming outdated?

    Build the review into a regular rhythm rather than relying on ad hoc updates. A Last Verified date property on each SOP, combined with a weekly or monthly check for records older than a set threshold, creates a systematic maintenance loop. SOPs that are never reviewed drift silently — the regular review cycle catches drift before it causes operational problems.

    Can Claude use Notion SOPs during a live session?

    Yes, with the right setup. Claude can fetch a Notion page via the Notion MCP integration and read its content mid-session. SOPs written with a consistent metadata block at the top — a structured summary, trigger conditions, and key constraints — are especially effective because Claude can orient itself quickly without reading the full document. This is what makes a Notion SOP system genuinely useful for AI-native operations rather than just human reference.

  • Notion + Claude AI: How to Use Claude as Your Notion Operating System

    Notion + Claude AI: How to Use Claude as Your Notion Operating System

    Claude AI · Fitted Claude

    Notion is where the work lives. Claude is what thinks about it. That’s the simplest way to describe the integration — not Claude as a chatbot you open in a separate tab, but Claude as an active layer that reads your Notion workspace, reasons about what’s in it, and acts on it in real time.

    Most people using both tools treat them as separate. They take notes in Notion, then copy and paste context into Claude when they need help. That works, but it’s not an integration — it’s a clipboard operation. What we run is different: a structured Notion architecture that Claude can navigate directly, combined with a metadata standard that makes every key page machine-readable across sessions.

    This is how that system actually works.

    What does it mean to use Claude as a Notion operating system? Using Claude as a Notion OS means structuring your Notion workspace so Claude can fetch, read, and act on its contents during a live session — without you manually copying context. Your Notion workspace becomes Claude’s working memory: it knows where your SOPs live, what your current priorities are, and what decisions have already been made.

    Why the Default Approach Breaks Down

    The standard way people use Claude with Notion: open Claude, describe the project, paste in relevant content, do the work, close the session. Next session, start over.

    Claude has no memory between sessions by default. Every conversation starts from zero. If your operation has any meaningful complexity — multiple clients, ongoing projects, established decisions and constraints — rebuilding that context from scratch every session is expensive. It costs time, it introduces errors when you forget to mention something relevant, and it means Claude is always operating with incomplete information.

    The fix is not to paste more context. The fix is to architect your Notion workspace so Claude can retrieve the context it needs, when it needs it, without you managing that transfer manually.

    The Metadata Standard That Makes It Work

    The foundation of the integration is a consistent metadata structure at the top of every key Notion page. We call this standard claude_delta. Every SOP, architecture decision, project brief, and client reference document in our Knowledge Lab starts with a JSON block that looks like this:

    {
      "claude_delta": {
        "page_id": "unique-page-id",
        "page_type": "sop",
        "status": "evergreen",
        "summary": "Two to three sentence plain-language description of what this page contains and when to use it.",
        "entities": ["relevant business", "relevant project", "relevant tool"],
        "dependencies": ["other-page-id-this-depends-on"],
        "resume_instruction": "The single most important thing Claude needs to know to continue work on this topic without re-reading the entire page.",
        "last_updated": "2026-04-12T00:00:00Z"
      }
    }

    The metadata block serves two purposes. First, it gives Claude a structured, consistent entry point to any page — the summary and resume instruction mean Claude can orient itself in seconds rather than reading thousands of words. Second, it makes the page indexable: when we need to find the right page for a given task, Claude can scan metadata blocks rather than full page content.

    The Claude Context Index

    The metadata standard only works if Claude knows where to start. The Claude Context Index is a master registry page in our Notion workspace — the first thing Claude fetches at the start of any session that involves the knowledge base.

    The index contains a structured list of every major knowledge page: its title, page ID, page type, status, and a one-line summary. When Claude reads the index, it knows what exists, where it is, and which pages are relevant to the current task — without having to search or guess.

    In practice, a session starts like this: “Read the Claude Context Index and then let’s work on [task].” Claude fetches the index, identifies the relevant pages for that task, fetches those pages, and begins work with full context. The context transfer that used to take ten minutes of copy-paste happens in seconds.
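
    One way to keep the index honest is to regenerate its entries from the Knowledge Lab database instead of editing the page by hand. A rough sketch using the official notion-client Python library, with placeholder IDs and the property names from our Knowledge Lab schema; the one-line summaries live in each page's metadata block and are omitted here for brevity.

    # Sketch: regenerate Context Index entries from the Knowledge Lab database.
    # IDs, token, and property names are placeholders.
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")

    # First page of results only; paginate for larger libraries.
    pages = notion.databases.query(database_id="KNOWLEDGE_LAB_DB_ID")["results"]

    index_lines = []
    for page in pages:
        props = page["properties"]
        title = props["Doc Name"]["title"][0]["plain_text"]
        page_type = props["Type"]["select"]["name"]
        status = props["Status"]["select"]["name"]
        index_lines.append(f"{title} | {page['id']} | {page_type} | {status}")

    # Paste into the Claude Context Index page, or write it back via the API.
    print("\n".join(index_lines))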

    What Claude Can Actually Do Inside Notion

    With the Notion MCP (Model Context Protocol) integration active, Claude can do more than read — it can write back to Notion directly during a session. In our operation, Claude routinely:

    Creates new knowledge pages — when a session produces a decision, an SOP, or a reference document worth keeping, Claude writes it to Notion with the claude_delta metadata already applied. The knowledge base grows automatically as work happens.

    Updates project status — when a content piece is published, Claude logs the publication in the Content Pipeline database. When a task is complete, Claude marks it done. The databases stay current without a separate manual logging step.

    Reads SOPs mid-session — if a session reaches a step with an established procedure, Claude fetches the relevant SOP rather than improvising. This enforces consistency across sessions and across different types of work.

    Scans the task database — at the start of a working session, Claude can read the current P1 and P2 task list and surface anything that should be addressed before the session’s primary work begins.
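
    MCP handles all of this during a live session, but the same write-back pattern can be scripted against the plain Notion API. Here is a sketch of the first capability, creating a knowledge page with its metadata block already in place; IDs, property names, and metadata values are placeholders.

    # Sketch: write a new Knowledge Lab page with a claude_delta block at the top.
    # Database ID, token, property names, and metadata values are placeholders.
    import json
    from datetime import datetime, timezone
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")

    meta = {"claude_delta": {
        "page_type": "sop",
        "status": "draft",
        "summary": "Placeholder summary of what this page contains and when to use it.",
        "resume_instruction": "Placeholder: the one thing to know before continuing this work.",
        "last_updated": datetime.now(timezone.utc).isoformat(),
    }}

    notion.pages.create(
        parent={"database_id": "KNOWLEDGE_LAB_DB_ID"},
        properties={
            "Doc Name": {"title": [{"text": {"content": "New SOP (draft)"}}]},
            "Type": {"select": {"name": "SOP"}},
            "Status": {"select": {"name": "draft"}},
        },
        children=[{
            "object": "block",
            "type": "code",
            "code": {
                "language": "json",
                "rich_text": [{"type": "text", "text": {"content": json.dumps(meta, indent=2)}}],
            },
        }],
    )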

    The Persistent Memory Layer

    The hardest problem in running an AI-native operation is context persistence. Claude’s context window is large but finite, and it resets between sessions. For any operation with meaningful ongoing complexity, that reset is a real problem.

    Our solution is a three-layer memory architecture:

    Layer 1: Notion Knowledge Lab. Human-readable SOPs, architecture decisions, project briefs, and reference documents. Claude fetches these at session start. Persistent across all sessions indefinitely.

    Layer 2: BigQuery operations ledger. A machine-readable database of operational history — what was published, what was changed, what decisions were made, and when. Claude can query this layer for operational data that would be too verbose to store in Notion pages. Currently holds several hundred knowledge pages chunked and embedded for semantic search.

    Layer 3: Session memory summaries. At the end of a significant session, Claude writes a summary of what was decided and done to a Notion session log page. The next session can start by reading the most recent session log, picking up exactly where the previous session ended.

    Together these three layers mean Claude never truly starts from zero — it has access to the institutional knowledge of the operation, the operational history, and the most recent session context.
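
    For a sense of what a Layer 2 lookup looks like in practice, here is a rough sketch using the BigQuery Python client. The dataset, table, and column names are invented for illustration; the real ledger schema differs.

    # Sketch: pull the last 30 days of operational history from the ledger.
    # Dataset, table, and column names are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()

    sql = """
        SELECT event_date, entity, event_type, summary
        FROM `ops_ledger.events`
        WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        ORDER BY event_date DESC
        LIMIT 50
    """

    for row in client.query(sql).result():
        print(row["event_date"], row["entity"], row["event_type"], row["summary"])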

    Building This for Your Own Operation

    The full architecture takes time to build correctly, but the core of it — the metadata standard and the Context Index — can be implemented in a few hours and provides immediate value.

    Start with five to ten of your most important Notion pages: your key SOPs, your main project references, your client guidelines. Add a claude_delta metadata block to the top of each. Create a simple index page that lists them with their IDs and summaries. Then start your next Claude session by telling Claude to read the index first.

    The difference in session quality is immediate. Claude operates with context it would otherwise need you to provide manually, makes decisions consistent with your established constraints, and produces output that fits your actual operation rather than a generic interpretation of it.

    From there, you can layer in the Notion MCP integration for write-back capability, build out the BigQuery knowledge ledger for operational history, and develop the session logging practice for continuity. But the metadata standard and the index are where the leverage is — everything else builds on top of them.

    What This Is Not

    This is not a plug-and-play integration. Notion’s native AI features and Claude are different products — Notion AI is built into the Notion interface and works on your pages directly, while Claude operates via API or the claude.ai interface with Notion access layered on through MCP. The architecture described here is a custom implementation, not a feature you turn on.

    It also requires discipline to maintain. The metadata standard only works if every important page follows it. The Context Index only works if it’s kept current. The session logs only work if they’re written consistently. The system degrades quickly if the documentation practice slips. That maintenance overhead is real — budget for it explicitly or the architecture will drift.

    Want this set up for your operation?

    We build and configure the Notion + Claude architecture — the metadata standard, the Context Index, the MCP integration, and the session logging system — as a done-for-you implementation.

    We run this system live in our own operation every day. We know what breaks without proper architecture and how to build it to last.

    See what we build →

    Frequently Asked Questions

    Does Claude have native Notion integration?

    Claude can connect to Notion through the Model Context Protocol (MCP), which allows it to read and write Notion pages and databases during a live session. This is not a built-in feature that requires no setup — it requires configuring the Notion MCP server and connecting it to your Claude environment. Once configured, Claude can fetch, create, and update Notion content directly.

    What is the difference between Notion AI and Claude in Notion?

    Notion AI is Anthropic-powered AI built natively into the Notion interface — it works directly on your pages for tasks like summarizing, drafting, and Q&A over your workspace. Claude operating via MCP is a separate implementation where Claude, running in its own interface, connects to your Notion workspace as an external tool. The MCP approach gives Claude more operational flexibility — it can combine Notion data with other tools, write complex logic, and operate across a full session — but requires more setup than Notion AI’s native features.

    What is the claude_delta metadata standard?

    Claude_delta is a JSON metadata block added to the top of key Notion pages that makes them machine-readable for Claude. It includes the page type, status, a plain-language summary, relevant entities, dependencies, a resume instruction for picking up work in progress, and a timestamp. The standard makes it possible for Claude to orient itself to any page quickly and consistently, without reading the full content every time.

    Can Claude write back to Notion automatically?

    Yes, with the Notion MCP integration active. Claude can create new pages, update existing records, add database entries, and modify page content during a session. This enables workflows where Claude logs its own outputs — publishing records, session summaries, decision logs — directly to Notion without a manual step.

    How do you handle Claude’s context limit with a large Notion workspace?

    The metadata standard and Context Index approach addresses this directly. Rather than loading the entire workspace into context, Claude fetches only the pages relevant to the current task. The index tells Claude what exists; the metadata tells Claude whether a page is worth fetching in full. For operational history too large for context, a separate database layer (we use BigQuery) handles storage and semantic retrieval, with Claude querying it for specific data rather than ingesting it wholesale.

  • Notion Client Portal Setup for Agencies: How We Build Ours

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    Most agency client portals are either too complicated to maintain or too bare to be useful. A shared Google Drive folder isn’t a portal. A ClickUp guest view requires the client to learn ClickUp. A custom-built portal requires a developer. Notion sits in the middle — flexible enough to build something professional, simple enough that clients can actually use it without training.

    This is how we build Notion client portals for our own operation. Not a template walkthrough — a description of the actual architecture, what we include, what we leave out, and why.

    What is a Notion client portal? A Notion client portal is a shared Notion page or workspace section that gives a client controlled visibility into their project — deliverables, timelines, assets, and communication — without exposing the rest of your internal operation. It functions as a lightweight client-facing dashboard built inside your existing Notion workspace.

    What a Notion Client Portal Actually Needs to Do

    Before building anything, it helps to be clear about what the portal is for. In our operation, a client portal has three jobs:

    Reduce inbound questions. If a client can see where their project stands without emailing, they will. A well-structured portal cuts “what’s the status?” messages significantly.

    Create a delivery record. Every deliverable — article, report, strategy doc — has a logged home. When a client asks what was delivered in March, the answer is one click away.

    Protect internal operations. The portal is a window, not a door. Clients see what’s relevant to them. They don’t see your internal task database, your pricing notes, your other clients, or your operational SOPs.

    The Core Portal Structure

    Every client portal we build follows the same structural template, customized by scope. The core components are:

    Project Status Dashboard

    A simple table or board view showing the current state of all active deliverables. Columns: deliverable name, status (In Progress / Review / Delivered), due date, and a link to the asset. Clients can see at a glance what’s moving and what’s done without needing to ask.

    This view is a filtered view of our internal Content Pipeline database — the client sees only their rows, not the full database. We use Notion’s filter-by-property feature to scope the view to their entity tag. They get a live view of their work without any access to the broader pipeline.

    Deliverables Library

    A running archive of everything completed and delivered. Articles, audits, reports, strategy documents — each as a linked page or embedded file. Organized by month. This solves the “can you resend that?” problem permanently and gives clients a sense of the body of work accumulating over a retainer.

    Communication Log

    A simple chronological page where significant decisions, feedback rounds, and strategic pivots get logged. Not a chat — a record. When a client says “I thought we decided X,” the communication log is the answer. This protects both parties and reduces scope creep from memory drift.

    Reference Documents

    Brand guidelines, target keyword lists, approved personas, style notes — anything the client has provided or that governs the work. Stored here so the answer to “do we have their brand guide?” is always yes.

    Next Steps

    A short, always-current list of what happens next. Three to five items max. What we’re working on, what we need from them, and when they can expect the next delivery. Clients check this more than anything else in the portal.

    How Access and Permissions Work

    Notion’s sharing model for client portals works at the page level, not the database level. This is the key architectural decision that determines how isolated the portal actually is.

    The correct approach: build the client portal as a standalone page that is not a child of your main Command Center. Share that page with the client via email invite at the “Can view” or “Can comment” level. The portal contains only filtered views and manually duplicated content — never direct database access.

    What to avoid: sharing a database directly with a client, even with filters applied. Notion’s permissions model allows determined users to remove filters from shared database views, exposing rows you didn’t intend to share. Always use a standalone page with embedded filtered views, not a raw database share.

    The Air-Gap Principle

    We call our approach to client portals “air-gapped” — the portal is architecturally separated from the internal operation even though it draws from the same underlying data.

    In practice, this means the portal page never has a back-link to the Command Center. The filtered views are set up so the client can see their data but cannot navigate to the parent database. Any document shared in the portal is either a shared Notion page with its own permissions or an exported file — never a raw internal page with full internal linking.

    The air gap matters because Notion’s page graph is navigable. If you share a page that contains a link to an internal page the client shouldn’t see, they can follow that link if it’s not properly permissioned. Build the portal as if it’s a separate product, even if it isn’t.

    What Not to Put in a Client Portal

    Equally important as what to include: what to leave out.

    Internal task notes. Your notes about why something is late, what went wrong, or what you think about the brief belong in your internal system, not in a client-visible page.

    Pricing and contract details. These live in your Revenue Pipeline and are shared via PDF or dedicated document — not embedded in an operational portal.

    Other clients’ work. Obvious, but worth stating explicitly given how easy it is to accidentally link across projects in a shared workspace.

    Unfinished deliverables. The portal is a delivery mechanism, not a work-in-progress view. Drafts go into the portal when they’re ready for client review, not before.

    Maintaining Portals at Scale

    The main friction with Notion client portals at scale is maintenance overhead. If you’re running ten or more active clients, keeping ten portals current manually is a real time cost.

    The solution is to minimize what requires manual updating. The Project Status Dashboard and Deliverables Library should pull from your internal pipeline database via filtered views — when you update the internal record, the portal updates automatically. The only things requiring manual attention are the Communication Log and Next Steps, which genuinely need a human decision about what to write.

    In our operation, portal maintenance takes roughly five minutes per client per week — the time it takes to update Next Steps and log any significant decisions from that week’s work. Everything else is live from the internal system.

    When Notion Portals Work Well and When They Don’t

    Notion client portals work well for content agencies, SEO operations, strategy consultants, and any service business where the deliverables are primarily documents. The portal model fits naturally when what you’re delivering is readable, linkable, and accumulates over time.

    They work less well for project-heavy engagements where the client needs to interact with tasks, leave comments on specific items, or participate in the workflow. For those cases, a purpose-built client portal tool — or a dedicated shared Notion workspace rather than a view-only portal — is a better fit. Notion can support collaborative client workspaces, but it requires a different architecture than the air-gapped portal model described here.

    Want this built for your agency?

    We set up Notion client portals and full Command Center architectures for agencies — configured for your operation, not a template to customize yourself.

    Tygart Media runs this system live across multiple active clients. We know what the build process looks like and what breaks without proper architecture.

    See what we build →

    Frequently Asked Questions

    Can clients edit content in a Notion client portal?

    Yes, if you give them “Can edit” or “Can comment” permissions. For most agency relationships, “Can comment” is the right level — clients can leave feedback directly on pages without being able to accidentally delete or restructure content. “Can view” works for portals that are purely informational delivery mechanisms.

    Is it safe to share a Notion database view with a client?

    With caution. Filtered database views can have their filters removed by users with edit access. For client-facing portals, use standalone pages with embedded filtered views set to view-only, rather than sharing the database itself. This is the air-gap approach — the client sees the data but cannot access the underlying database structure.

    How do you handle multiple clients in one Notion workspace?

    Each client gets their own portal page, shared individually. Internally, all client data lives in shared databases partitioned by an entity or client tag. Filtered views in each portal show only that client’s records. Clients never see each other’s portals or data because each portal is a separately permissioned page.

    What’s the difference between a Notion client portal and a shared Notion workspace?

    A client portal is a view-only or comment-only window into your operation — the client sees deliverables and status but doesn’t work inside Notion alongside you. A shared workspace is a collaborative environment where both agency and client actively use Notion together. Portals are simpler to maintain and better for most agency relationships. Shared workspaces make sense for longer-term, higher-touch engagements where the client is an active participant in the work.

    How long does it take to set up a Notion client portal?

    A well-structured portal takes two to four hours to build from scratch for the first client. Once you have a working template, duplicating and customizing it for additional clients takes thirty to sixty minutes. The time investment is in designing the architecture correctly the first time — portals built without a clear structure tend to get abandoned within a few months.

  • How I Run 27 Client Sites from One Notion Command Center

    The Agency Playbook
    TYGART MEDIA · PRACTITIONER SERIES
    Will Tygart
    · Senior Advisory
    · Operator-grade intelligence

    I run 27 client WordPress sites from a single Notion workspace. No project management software, no agency platform, no dedicated CRM. Just Notion — architected deliberately across six interconnected databases — handling task triage, content pipelines, client relationships, revenue tracking, and the knowledge infrastructure that feeds an AI-native content operation.

    This is not a productivity tutorial. This is a description of a real system, built over two years, that runs across seven distinct business entities simultaneously. If you’re an agency owner, solo operator, or content business trying to figure out how to use Notion for something more serious than a to-do list, this is what the other end of that road looks like.

    What is a Notion Command Center? A Notion Command Center is a multi-database workspace architecture that functions as a single operating system for a business or portfolio of businesses. Rather than using Notion as a note-taking app, a Command Center connects tasks, clients, content, and knowledge into a unified system with defined workflows, priority rules, and daily operating rhythms.

    Why Notion Instead of Dedicated Agency Software

    The honest answer: I tried the alternatives. ClickUp has more native project management features. Asana handles task dependencies better out of the box. Monday.com is more polished for client-facing views.

    None of them let me build exactly the system my operation requires. And at the scale I’m running — 27 client sites, seven business entities, a live AI publishing pipeline — the ability to customize the architecture matters more than any individual feature.

    Notion also has a meaningful advantage that most people underestimate: it integrates with Claude natively. My entire operation runs on Claude as the AI layer, and a Notion workspace structured correctly becomes something Claude can read, reason about, and act on. That combination — Notion as the OS, Claude as the intelligence — is what makes this a genuinely AI-native operation rather than just an AI-assisted one.

    The 6-Database Architecture

    The Command Center runs on six core databases. Everything else in the workspace is either a view of these databases, a child page underneath them, or a standalone reference document. The six databases are:

    1. Master Actions

    Every task across all seven entities lives here. Priority levels run P1 (revenue or reputation at risk today) through P4 (delegate or kill). Each task carries an Entity tag, a Status, a Due Date, and a linked record in whichever other database it belongs to — a client, a content piece, a deal.

    The daily operating rule: never more than five tasks marked “Next Up” across the entire workspace at once. If your Next Up list has eight items, something is mislabeled. P1 means that if the task doesn’t get done, real consequences follow today.
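
    The five-task rule is easy to check mechanically. A minimal sketch using the official notion-client Python library, with a placeholder database ID, assuming the Status select includes a “Next Up” option:

    # Sketch: check the "never more than five Next Up tasks" rule.
    # Database ID, token, and the Status option name are placeholders.
    from notion_client import Client

    notion = Client(auth="secret_placeholder_token")

    next_up = notion.databases.query(
        database_id="MASTER_ACTIONS_DB_ID",
        filter={"property": "Status", "select": {"equals": "Next Up"}},
    )["results"]

    if len(next_up) > 5:
        print(f"{len(next_up)} tasks marked Next Up; something is mislabeled.")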

    2. Content Pipeline

    Every article across all 27 client sites flows through this database before it hits WordPress. Status stages run from Brief → Draft → Optimized → Scheduled → Published. The database links to the client entity, carries the target keyword, the target site URL, word count, and a publication date.

    Nothing publishes without a Notion record. This is a hard rule established after the alternative — articles written in sessions and pushed directly — created audit gaps that took hours to resolve. Notion first, WordPress second.

    3. Revenue Pipeline

    Client deals, proposals, and retainer renewals. Stage-based (Lead → Qualified → Proposal Sent → Active → Renewal). Links to the Master CRM for contact records. The weekly review checks whether any deal has sat in the same stage for more than seven days without activity — that’s a warning sign that gets flagged.

    4. Master CRM

    Every contact across all seven entities. Clients, prospects, golf league members, partners, vendors. Tagged by entity, relationship type, and last contact date. The weekly review catches anyone who should have heard from me and didn’t.

    5. Knowledge Lab

    SOPs, architecture decisions, session logs, and reference documents. This is where the institutional knowledge lives — the things that would take hours to reconstruct if I had to start from scratch. The Knowledge Lab uses a metadata standard (I call it claude_delta) that makes every page machine-readable, so Claude can fetch and reason about the content in a live session without losing context.

    6. William’s HQ

    The daily dashboard. A filtered view of P1 and P2 tasks due today or overdue, the content queue for the next 48 hours, and the inbox triage. This is the page that opens first every morning. Everything else in the system is accessed from here.

    The Seven Entity Structure

    The system manages seven distinct business entities, each with its own Focus Room — a sub-page containing that entity’s active projects, open tasks filtered by entity tag, and key reference documents. The entities are:

    • The parent agency — managing all client sites and retainer relationships
    • Personal brand — direct services, thought leadership, and new business
    • Client A — content operation for a contractor in a regional market
    • Client B — content operation for a service business in a metro market
    • Industry network — B2B community and event operation
    • Content property — topical authority site in a specific vertical
    • Personal — finances, health commitments, personal projects

    The entity structure means a task logged under a regional client’s content operation never bleeds into the parent agency’s content queue. The databases are shared, but the entity tag acts as a partition. This matters operationally when you’re switching contexts fifteen times a day — the system tells you where you are and what belongs there.

    The Daily Operating Rhythm

    The Command Center only works if you use it on a rhythm. Mine runs on three loops:

    Morning Triage (10–15 minutes)

    Open William’s HQ. Zero the inbox — every untagged item gets a priority, a status, and an entity. Read the P1 and P2 list. Mentally commit to the top three. Check the content queue for anything publishing in the next 48 hours that isn’t scheduled. That’s a P1 fix before anything else happens.

    End-of-Day Close (5 minutes)

    Mark done tasks complete. Push anything untouched but intended — update the due date or reprioritize down. Check the content queue for tomorrow’s publications. If anything new was created during the day — a contact, a content piece, a deal — verify it’s logged in the right database with the right entity tag.

    Weekly Review (30 minutes, Sunday evening)

    Revenue: any deal stuck in the same stage as last week? Content: next week’s queue fully populated? Tasks: archive all Done tasks older than 14 days. Relationships: anyone who should have heard from me and didn’t? System health: any automation that failed silently?

    The weekly review is the repair mechanism. It catches the things the daily rhythm misses and resets the system before the next week compounds the drift.

    How Claude Plugs Into This

    The Knowledge Lab’s claude_delta metadata standard is what makes the Notion–Claude integration functional rather than theoretical. Every page in the Knowledge Lab carries a JSON metadata block at the top that tells Claude the page type, status, summary, key entities, and a resume instruction for picking up work in progress.

    In practice, this means I can start a session by telling Claude to read a specific Knowledge Lab page, and Claude has enough structured context to continue from exactly where the last session ended — without me re-explaining the project, the client, the constraints, or the decisions already made. The Notion workspace functions as persistent memory across Claude sessions.

    This is the part of the architecture that most people haven’t built yet. Notion as a note-taking app is one thing. Notion as a structured knowledge layer that an AI can navigate and act on is a meaningfully different proposition — and it’s the direction serious operators are moving.

    What This Architecture Costs to Build

    Honest answer: the architecture itself took about three months of active iteration to stabilize. The first version had too many databases, unclear relationships between them, and no real operating rhythm to enforce the discipline. The current version is the result of tearing down and rebuilding twice.

    The tooling cost is low. Notion’s Plus plan at $10/month per member handles everything described here. The BigQuery knowledge ledger that backs the AI memory layer runs on Google Cloud at effectively zero cost at this scale. Claude API usage for content operations runs roughly $50–150/month depending on session volume.

    What actually costs something is the setup time and the learning curve of building databases that relate to each other correctly. Most Notion setups fail not because the tool is limited but because the architecture wasn’t designed before the databases were created.

    Whether This Is Right for Your Agency

    The Command Center architecture works well for solo operators and small agencies managing multiple clients or business lines simultaneously. It works especially well when you’re running an AI-native content operation and need Notion to function as more than task management.

    It’s not the right choice if you need strong native time-tracking, Gantt charts, or client-facing portals that look polished without customization. Those cases have better-suited tools.

    But if you’re running a content agency, a multi-client SEO operation, or any business where the work is primarily knowledge work — briefs, articles, strategies, SOPs, client communications — and you want one system that sees all of it, the 6-database Command Center architecture is worth the build time.

    Want this built for your operation?

    We set up Notion Command Centers for agencies and operators — the full architecture, configured and documented, not a template to figure out yourself.

    Tygart Media has built and runs this system live across 27 client sites. We know what the setup process actually looks like.

    See what we build →

    Frequently Asked Questions

    How many databases does a Notion Command Center need?

    A functional Command Center for an agency or multi-client operation typically needs six core databases: a task database, a content pipeline, a revenue pipeline, a CRM, a knowledge base, and a daily dashboard. More than eight databases usually indicates an architecture problem — complexity that should be handled with views and filters, not additional databases.

    Can Notion handle 27 client sites without getting slow?

    Yes, with proper architecture. The key is using filtered views rather than separate databases for each client, and keeping database page counts manageable by archiving completed records regularly. Notion’s performance degrades when a single database exceeds a few thousand active records — archive aggressively and it stays fast.

    How does Notion integrate with Claude AI?

    Notion and Claude integrate through structured page formatting and the Notion API. By standardizing metadata at the top of key pages — page type, status, summary, key entities — Claude can fetch and interpret Notion content in a live session. More advanced setups use the Notion API to read and write records programmatically during Claude sessions, effectively making Notion the persistent memory layer for AI operations.

    What’s the difference between a Notion Command Center and a regular Notion workspace?

    A regular Notion workspace is typically organized around document types — pages, notes, tasks — without enforced relationships between them. A Command Center is organized around business operations — entities, pipelines, and workflows — with databases that relate to each other and a defined operating rhythm that governs how the system gets used each day.

    How long does it take to set up a Notion Command Center?

    Building the architecture from scratch takes 20–40 hours of focused setup time, including database design, relationship configuration, view creation, and SOP documentation. Most operators who attempt it solo take 2–3 months of iteration before the system stabilizes. Working from an existing architecture and having it configured for your specific operation compresses that significantly.

    Is Notion good for content agencies specifically?

    Notion is well-suited for content agencies because the core work — briefs, drafts, SOPs, client communication, publishing schedules — is document-centric. The Content Pipeline database, linked to a CRM and task system, gives visibility into every piece of content across every client at once, which is difficult to replicate in project management tools not built for document-heavy workflows.

  • Build Your Own KnowHow — And Then Go Further

    Build Your Own KnowHow — And Then Go Further

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    KnowHow is one of the most important things happening in the restoration industry right now. If you’re not familiar with it: it’s an AI-powered platform that takes your company’s operational knowledge — your SOPs, your onboarding materials, your hard-won process documentation — and turns it into an on-demand resource every team member can access from their phone. Your best technician’s knowledge stops walking out the door when they leave. Your new hire in Iowa follows the same protocol as your veteran in Texas. Your managers stop being human FAQ machines.

    It solves a real problem that has cost restoration companies enormous amounts of money in inconsistent work, slow onboarding, and institutional knowledge that evaporates with turnover.

    But KnowHow solves the internal problem. The knowledge stays inside your organization. And there is a second problem — the external one — that nobody has solved yet.

    The Internal Problem vs. The External Problem

    The internal problem is: your people don’t have access to what your company knows when they need it. KnowHow fixes that. The knowledge becomes accessible, searchable, consistent, and deliverable at scale across every location and every shift.

    The external problem is different: your clients, prospects, and contracting authorities have no way to verify that your company knows what it claims to know. They can read your capabilities statement. They can check your certifications. They can call references. But they can’t look inside your organization and confirm that your documented protocols are current, specific, and actually practiced — not just written down for the sake of winning a bid.

    In commercial restoration, that verification gap is expensive. Facility managers, FEMA contracting officers, insurance carriers, and national property management companies are making vendor decisions based on trust signals that are largely unverifiable. The company with the best pitch often wins over the company with the best protocols.

    An external knowledge API changes that dynamic completely.

    What an External Knowledge API Actually Is

    An external knowledge API is a structured, authenticated, publicly accessible feed of your operational knowledge — not your trade secrets, not your pricing, not your internal communications, but your documented protocols, your methodology, your standards, and your verified expertise. Published. Structured. Machine-readable. Available to anyone who needs to evaluate whether your company is the right partner for a complex job.

    Think of it as the difference between telling a client “we follow IICRC S500 water damage protocols” and showing them a live, structured endpoint where they can pull your actual documented water mitigation process — with timestamps that confirm it was updated last month, not in 2019.

    The internal KnowHow platform is the source. The external API is the window — carefully curated, access-controlled, and designed to answer the questions that matter to the people evaluating you.

    Who Cares About Your External Knowledge

    The list is longer than most restoration contractors realize.

    Commercial property managers and facility directors. A national hotel chain or healthcare system evaluating restoration vendors for their approved vendor program needs more than a certificate of insurance and a reference list. They want to know that your protocols are consistent across every job, that your team follows the same process whether the project manager is on-site or not, and that your documentation standards will hold up in a claim. An external knowledge feed — showing your water damage, fire damage, and mold remediation protocols in structured, current form — answers those questions before the conversation even starts.

    FEMA and government contracting. Federal disaster response contracts are awarded to companies that can demonstrate organizational capability at scale. The RFP process rewards documentation. A company that can point to an externally published, structured knowledge base as evidence of their operational maturity is presenting something most competitors don’t have. It’s not just a differentiator — it’s proof of the kind of institutional infrastructure that large government contracts require.

    Insurance carriers and TPAs. Third-party administrators and carrier programs are increasingly using AI tools to evaluate and route claims to preferred vendors. A restoration company whose documented protocols are structured and machine-readable — available for an AI system to pull and verify against claim requirements — is positioned for the way preferred vendor selection is heading, not the way it used to work.

    Commercial real estate and institutional property owners. REITs, hospital systems, university facilities departments, and large corporate real estate portfolios are all moving toward vendor relationships that have verifiable documentation standards. An external knowledge API gives them something they can actually audit — not just a sales presentation.

    How to Build It: The Two-Layer Stack

    The stack that makes this work has two layers, and KnowHow already gives you the first one.

    Layer one — internal capture and organization (KnowHow’s job). Use KnowHow, or an equivalent internal knowledge platform, to capture and organize your operational knowledge. Document your protocols rigorously. Keep them current. Assign ownership so they don’t go stale. The discipline required here is real, but it’s also the discipline that makes your company better operationally regardless of what you do with the knowledge externally. This layer is the foundation.

    Layer two — external publication and API distribution (the next layer). Select the knowledge that is appropriate to share externally — your methodology, your standards, your certifications, your documented approach to specific job types — and publish it in a structured, consistently maintained form. This can be as simple as a well-organized section of your company website with current protocol documentation, or as sophisticated as a full REST API endpoint that clients and AI systems can query directly. The key requirements are structure (consistent format, clear categorization), currency (updated when protocols change, timestamped), and accessibility (easy for a prospect or evaluator to find and verify).

    The gap between layer one and layer two is smaller than it sounds. If you’ve already done the internal documentation work in KnowHow, the editorial work of curating an external-facing version of that knowledge is incremental. You’re not building from scratch — you’re deciding what to show and building the window to show it through.
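
    To show how small layer two can start, here is a minimal sketch of a protocol endpoint. The framework choice, routes, and field names are illustrative assumptions rather than a prescribed standard, and the protocol content is placeholder text.

    # Sketch: a minimal external knowledge endpoint serving curated protocols.
    # Framework, routes, and field names are illustrative, not a standard.
    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # In practice this would be exported from the internal knowledge platform.
    PROTOCOLS = {
        "water-mitigation-cat3": {
            "title": "Category 3 Water Loss: Commercial Mitigation Protocol",
            "standard_basis": "IICRC S500",
            "last_reviewed": "2026-03-01",
            "steps": ["Placeholder step 1", "Placeholder step 2"],
        },
    }

    @app.route("/api/protocols")
    def list_protocols():
        return jsonify([
            {"slug": slug, "title": p["title"], "last_reviewed": p["last_reviewed"]}
            for slug, p in PROTOCOLS.items()
        ])

    @app.route("/api/protocols/<slug>")
    def get_protocol(slug):
        protocol = PROTOCOLS.get(slug)
        if protocol is None:
            abort(404)
        return jsonify(protocol)

    if __name__ == "__main__":
        app.run()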

    The Credential That No Certificate Can Replace

    Certifications are static. An IICRC certification tells a client you passed a test. It doesn’t tell them what your company actually does when a technician encounters a Category 3 water loss in a 1960s commercial building with asbestos-containing materials in the subfloor.

    External knowledge does. It shows the specific, documented, currently-maintained thinking your company applies to that situation. It’s living proof of operational maturity, not a snapshot from the last time someone studied for an exam.

    In the commercial restoration market, where the jobs are large, the documentation requirements are significant, and the clients are sophisticated, that distinction is worth money. The companies that build this layer now — while most competitors are still treating knowledge as purely internal — will have a credential that can’t be quickly replicated.

    The Practical Starting Point

    You don’t need a full API to start. The minimum viable version of an external knowledge layer is a structured, well-maintained “Our Methodology” section on your website — not a generic “our process” marketing page, but actual documented protocols organized by job type, with clear version dates and enough specificity that an evaluator can see you’ve actually done the work.

    From there, the path to a structured API is incremental: add consistent categorization, ensure each protocol document has a permanent URL, and eventually expose that structure through a queryable endpoint. Each step makes the credential more verifiable and more valuable.

    KnowHow got the industry to take internal knowledge seriously. The companies that figure out how to take the next step — making that knowledge externally verifiable and machine-readable — will have something the market has never seen before in restoration.

    What is the difference between internal and external knowledge in restoration?

    Internal knowledge (what KnowHow manages) is operational documentation accessible to your own team — SOPs, onboarding materials, process guides. External knowledge is a curated version of that same expertise published in a structured, verifiable form for clients, contracting authorities, and AI systems to access and evaluate.

    Why would a restoration company publish its knowledge externally?

    Because commercial clients, FEMA, insurance carriers, and institutional property managers need to verify operational maturity before awarding contracts. A structured, current, machine-readable knowledge base is a stronger credential than certifications or capabilities statements — it shows documented, maintained expertise rather than a static snapshot.

    What is an external knowledge API for a restoration company?

    A structured, authenticated feed of your documented protocols, methodology, and standards — published in a format that clients, evaluators, and AI systems can query directly. It turns your operational knowledge into a verifiable, market-facing credential rather than keeping it purely internal.

    Who specifically benefits from a restoration company’s external knowledge API?

    Commercial facility managers building approved vendor programs, FEMA and government contracting officers evaluating organizational capability, insurance carriers and TPAs using AI tools to route claims to preferred vendors, and institutional property owners who need auditable vendor documentation standards.

    Does a restoration company need KnowHow to build an external knowledge API?

    No — any internal knowledge platform or even rigorous in-house documentation works as the foundation. KnowHow accelerates the internal capture work, which makes the external publication step more realistic. But the two-layer stack works with any internal knowledge infrastructure that produces well-documented, current, organized protocols.

  • The Human Expertise Gap in AI: Why Tacit Knowledge Is the Next Scarce Resource

    The Human Expertise Gap in AI: Why Tacit Knowledge Is the Next Scarce Resource

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Large language models were trained on text. Enormous quantities of text — more than any human could read in thousands of lifetimes. But text is not knowledge. Text is the residue of knowledge that was visible enough, and important enough, for someone to write down and publish somewhere that a training crawler could find it.

    The vast majority of what experienced humans actually know was never written down. It was learned by doing, transmitted by watching, refined through failure, and held entirely in the heads of people who couldn’t have articulated it systematically even if they wanted to.

    This is the human expertise gap. And it is the defining feature of where AI currently falls short.

    What Tacit Knowledge Actually Is

    Tacit knowledge is the kind you can’t easily explain but reliably apply. A master craftsperson knows when something is right by feel before they can measure it. An experienced clinician senses when something is wrong before the test results confirm it. A veteran contractor knows which subcontractors will actually show up on a Tuesday in November just from having worked with them — knowledge that no review site has ever captured accurately.

    This knowledge exists at every level of every industry. Most of it has never been written down because the people who hold it are too busy using it to document it, because the incentive to document was never strong enough, or because no one ever asked in a form they could answer systematically.

    Why AI Can’t Close This Gap on Its Own

    The naive assumption is that AI will eventually capture tacit knowledge by observing enough human behavior — that more data, more modalities, more sensor inputs will eventually replicate what experienced humans know intuitively.

    This misunderstands the nature of the gap. Tacit knowledge isn’t just undocumented data. It’s judgment that was built through embodied experience — through having made the wrong call and learned from it, through having seen the same situation hundreds of times in slightly different forms, through having relationships that carry context no outsider can access. These are not data problems. They’re experience problems.

    AI can get asymptotically close to replicating some of this. But the closer it gets, the more valuable the verified human source becomes — because the question shifts from “does AI know this at all” to “how do we know the AI’s answer is correct,” and the only reliable answer to that question is “because a human who actually knows verified it.”

    The Window That’s Open Right Now

    There is a specific window in the development of AI where tacit knowledge held by humans is more valuable than it will ever be again. We’re in it now.

    AI systems are capable enough that people trust them with real questions — questions about their health, their legal situation, their business decisions, their trade. But AI systems are not capable enough to be reliably right about the specific, experience-based, local, industry-specific knowledge that those questions often require.

    The gap between trust and accuracy is the market. The people who figure out how to systematically capture, package, and distribute their tacit knowledge — in forms that AI systems can consume and cite — are building the content infrastructure for a post-search information environment.

    The Human Distillery as a Category

    What’s emerging is a new category of knowledge work: the human distillery. A person or organization that takes tacit knowledge held by experienced humans and refines it into something that AI systems can depend on.

    This isn’t ghostwriting. It’s not content marketing. It’s not thought leadership in the LinkedIn sense. It’s systematic extraction — the application of a disciplined process to get tacit knowledge out of human heads, give it structure, publish it at density, and make it available to the AI systems that will increasingly mediate how people get answers to important questions.

    The people who build this infrastructure now — while the gap is widest and the market is least crowded — are positioning themselves at the supply end of the most important information supply chain of the next decade.

    What is the human expertise gap in AI?

    The gap between what AI systems were trained on (text that was published online) and what experienced humans actually know (tacit knowledge built through embodied experience that was never systematically documented). This gap is structural, not temporary — it won’t close simply by training on more data.

    What is tacit knowledge?

    Knowledge you reliably apply but can’t easily articulate — the judgment of an experienced practitioner, the pattern recognition of someone who has seen the same situation hundreds of times, the relationship-based intelligence that no review site has ever captured. It’s built through experience, not text.

    Why is this a time-sensitive opportunity?

    We’re in a specific window where AI systems are trusted enough to be asked important questions but not accurate enough to answer them reliably without human verification. The gap between trust and accuracy is the market. That window won’t stay this wide indefinitely.

    What is a human distillery?

    A person or organization that systematically extracts tacit knowledge from experienced humans, gives it structure, publishes it at density, and makes it available in forms that AI systems can consume and cite. It’s a new category of knowledge work — distinct from content marketing, ghostwriting, or traditional publishing.

  • How to Build Your Own Knowledge API Without Being a Developer

    How to Build Your Own Knowledge API Without Being a Developer

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    When people hear “build an API,” they assume it requires a developer. For the infrastructure layer, that’s true — you’ll need someone who can deploy a Cloud Run service or configure an API gateway. But the infrastructure is maybe 20% of the work.

    The other 80% — the part that determines whether your API has any value — is the knowledge work. And that requires no code at all.

    Step 1: Define Your Knowledge Domain

    Before anything else, get specific about what you actually know. Not what you could write about — what you know from direct experience that is specific, current, and absent from AI training data.

    The most useful exercise: open an AI assistant and ask it detailed questions about your specialty. Where does it get things wrong? Where does it give you generic answers when you know the real answer is more specific? Where does it confidently state something that anyone in your field would immediately recognize as incomplete or outdated? Those gaps are your domain.

    Write down the ten things you know about your domain that AI currently gets wrong or doesn’t know at all. That list is your editorial brief.

    Step 2: Build a Capture Habit

    The most sustainable knowledge production process starts with voice. Record the conversations where you explain your domain — client calls, peer discussions, working sessions, voice memos when an idea surfaces while you’re driving. Transcribe them. The transcript is raw material.

    You don’t need to be writing constantly. You need to be capturing constantly and distilling periodically. A batch of transcripts from a week’s worth of conversations can produce a week’s worth of high-density articles if you have a consistent process for pulling the knowledge nodes out.

    Step 3: Publish on a Platform With a REST API

    WordPress, Ghost, Webflow, and most major CMS platforms have REST APIs built in. Every article you publish on these platforms is already queryable at a structured endpoint. You don’t need to build a database or a content management system — you need to use the one you probably already have.

    The only editorial requirement at this stage is consistency: consistent category and tag structure, consistent excerpt length, consistent metadata. This makes the content well-organized for the API layer that will sit on top of it.
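
    To see what that built-in endpoint already exposes, here is a short Python sketch that queries the stock WordPress REST API. The domain and category ID are placeholders; the endpoint, parameters, and response shape are standard WordPress.

        import requests

        BASE = "https://example.com/wp-json/wp/v2"

        resp = requests.get(
            f"{BASE}/posts",
            params={
                "categories": 12,  # placeholder ID for your methodology category
                "per_page": 5,
                "_fields": "id,title,excerpt,link,modified",  # trim the payload
            },
            timeout=10,
        )
        resp.raise_for_status()

        for post in resp.json():
            # WordPress returns title and excerpt as {"rendered": "<html>"}.
            print(post["modified"], post["title"]["rendered"], post["link"])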

    Step 4: Add the API Layer (This Is the Developer Part)

    The API gateway — the service that adds authentication, rate limiting, and clean output formatting on top of your existing WordPress REST API — requires a developer to build and deploy. This is a few days of work for someone familiar with Cloud Run or similar serverless infrastructure. It’s not a large project.

    What you hand the developer: a list of which categories you want to expose, what the output schema should look like, and what authentication method you want to use. They build the service. You don’t need to understand how it works — you need to understand what it does.
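
    For orientation, here is a rough sketch of the shape of that gateway in Python with Flask: check a key, query the CMS, return a trimmed response. The key list, upstream URL, and output fields are placeholders, and a real deployment would add rate limiting, key storage, and error handling.

        import requests
        from flask import Flask, abort, jsonify, request

        app = Flask(__name__)
        WP_BASE = "https://example.com/wp-json/wp/v2"
        VALID_KEYS = {"demo-key-123"}  # in practice, looked up from a key store

        @app.route("/v1/articles")
        def articles():
            # Authentication: a simple header check stands in for real key management.
            if request.headers.get("X-API-Key") not in VALID_KEYS:
                abort(401)
            upstream = requests.get(
                f"{WP_BASE}/posts",
                params={"per_page": 10, "_fields": "title,excerpt,link,modified"},
                timeout=10,
            )
            upstream.raise_for_status()
            # Clean output: reshape WordPress's nested JSON into the schema subscribers see.
            return jsonify([
                {
                    "title": p["title"]["rendered"],
                    "summary": p["excerpt"]["rendered"],
                    "url": p["link"],
                    "updated": p["modified"],
                }
                for p in upstream.json()
            ])

    The reshaping step is where the clean output formatting happens; subscribers never see the CMS's internal structure.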

    Step 5: Set Up the Payment Layer

    Stripe payment links require no code. You create a product, set the price, and get a URL. When someone pays, Stripe can trigger a webhook that automatically provisions an API key and emails it to the subscriber. The webhook handler is a small piece of code — another developer task — but the payment infrastructure itself is point-and-click.
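
    For a sense of scale, here is what that webhook handler might look like in Python using Stripe's official library. The key storage and email steps are stubs, and the field paths shown are typical for a checkout session but should be confirmed against the event payloads in your own Stripe account.

        import secrets

        import stripe
        from flask import Flask, request

        app = Flask(__name__)
        WEBHOOK_SECRET = "whsec_placeholder"  # from the Stripe dashboard

        def save_key(email, api_key):
            # Stub: persist the key wherever the API gateway reads valid keys from.
            print("saved key for", email)

        def send_key_email(email, api_key):
            # Stub: deliver the key through your transactional email provider.
            print("emailed key to", email)

        @app.route("/stripe/webhook", methods=["POST"])
        def stripe_webhook():
            # Verify the request actually came from Stripe before acting on it.
            event = stripe.Webhook.construct_event(
                request.get_data(as_text=True),
                request.headers.get("Stripe-Signature"),
                WEBHOOK_SECRET,
            )
            if event["type"] == "checkout.session.completed":
                session = event["data"]["object"]
                email = session["customer_details"]["email"]
                api_key = secrets.token_urlsafe(32)
                save_key(email, api_key)
                send_key_email(email, api_key)
            return "", 200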

    Step 6: Write the Documentation

    This is back to no-code territory. API documentation is just clear writing: what endpoints exist, what authentication is required, what the response looks like, what the rate limits are. Write it as if you’re explaining it to a smart person who has never used your API before. Put it on a page on your website. That page is your product listing.

    The non-developer path to a knowledge API is: define your domain, build a capture habit, publish consistently, hand a developer a clear spec, set up Stripe, write your docs. The knowledge is yours. The infrastructure is a service you contract for. The product is what you know — packaged for a new class of consumer.

    How much does it cost to build a knowledge API?

    The infrastructure cost is primarily developer time (a few days for an experienced developer) plus ongoing GCP/cloud hosting costs (under $20/month at low volume). The main investment is the ongoing knowledge work — capture, distillation, and publication — which is time, not money.

    What publishing platform should you use?

    WordPress is the most flexible and widely supported option with the most robust REST API. Ghost is a good alternative for simpler setups. The key requirement is that the platform exposes a REST API you can build an authentication layer on top of.

    How long does it take to build?

    The knowledge foundation — enough published content to make the API worth subscribing to — takes weeks to months of consistent work. The technical infrastructure, once you have the knowledge foundation, can be deployed in a few days with the right developer. The bottleneck is almost always the knowledge, not the technology.

  • The $5 Filter: A Quality Standard Most Content Can’t Pass

    The $5 Filter: A Quality Standard Most Content Can’t Pass

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Here is a simple test that most content fails.

    Would someone pay $5 a month to pipe your content feed into their AI assistant — not to read it themselves, but to have their AI draw from it continuously as a trusted source in your domain?

    $5 is not a lot of money. It’s the price of one coffee. It covers hosting costs and a small margin. It’s the lowest viable price point for a subscription product.

    And most content can’t clear it.

    Why Most Content Fails the Test

    The $5 filter exposes three failure modes that are common across the content landscape:

    Generic. The content says things that are true but not specific. “Good customer service is important.” “Location matters in real estate.” “Consistency is key in marketing.” These claims are not wrong. They’re just not worth anything to a system that already has access to the entire internet. If everything you publish could have been written by anyone with a general knowledge of your topic, your content has low API value regardless of how much traffic it gets.

    Thin. The content exists but doesn’t go deep enough to be useful as a reference. A 400-word post that introduces a concept without developing it. A listicle that names eight things without explaining any of them. Content that satisfies a keyword without actually answering the question behind it. This kind of content might rank. It’s not worth subscribing to.

    Inconsistent. Some pieces are genuinely excellent — specific, well-reported, information-dense. Most are filler published to maintain posting frequency. An inconsistent feed isn’t a reliable source. A system pulling from it can’t know when it’s getting the good stuff and when it’s getting noise. Reliability is a prerequisite for subscription value.

    What Passes the Filter

    Content passes the $5 filter when it has three properties simultaneously:

    It’s specific enough to be useful in a way that nothing else is. Not “here’s how restoration contractors approach water damage” — but “here’s how water damage in balloon-frame construction built before 1940 behaves differently from modern platform-frame, and why standard drying protocols fail in those structures.” The specificity is the value.

    It’s reliable enough that a system can trust it. Every piece maintains the same standard. The sourcing is consistent. Claims are documented. The author has credible experience in the domain. A subscriber — human or AI — knows what they’re getting every time.

    It’s rare enough that it can’t be found elsewhere. The test isn’t whether it’s good writing. The test is whether an AI system could get the same information from somewhere it already has access to. If yes, the subscription isn’t necessary. If no — if this is the only reliable source for this specific knowledge — the subscription is justified.

    Using the Filter as an Editorial Standard

    The most useful application of the $5 filter isn’t as a revenue test. It’s as an editorial standard.

    Before publishing anything, ask: if someone were paying $5 a month to access this feed, would this piece justify part of that cost? If the honest answer is no — if this piece is thin, generic, or inconsistent with the standard of the best things you publish — that’s the signal to either make it better or not publish it at all.

    This is a harder standard than “does it rank” or “did it get clicks.” It’s also a more durable one. The content that clears the $5 filter is the content that compounds — that becomes more valuable over time, that gets cited, that earns trust from both human readers and AI systems that draw from it.

    The content that doesn’t clear it is noise. And there’s already plenty of that.

    What is the $5 filter?

    A content quality test: would someone pay $5/month to pipe your content feed into their AI assistant as a trusted source? Not to read it — to have their AI draw from it continuously. Content that passes this test is specific, reliable, and rare enough to justify a subscription.

    What are the most common reasons content fails the $5 filter?

    Three failure modes: generic (true but not specific enough to be useful), thin (introduces a concept without developing it enough to be a real reference), and inconsistent (excellent pieces mixed with filler that degrades the reliability of the feed as a whole).

    Can the $5 filter be used as an editorial standard even without building an API?

    Yes — and that’s often the most valuable application. Using it as a pre-publish question (“would this piece justify part of a $5/month subscription?”) enforces a higher standard than traffic-based metrics and produces content that compounds in value over time.

  • Hyperlocal Is the New Rare: Why Local Content Has the Highest API Value

    Hyperlocal Is the New Rare: Why Local Content Has the Highest API Value

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Ask any major AI assistant what’s happening in a city of 50,000 people right now. What you’ll get back is a mix of outdated information, plausible-sounding fabrications, and generic statements that could apply to any city of that size. The AI isn’t being evasive. It genuinely doesn’t know, because the information doesn’t exist in its training data in any reliable form.

    This is not a temporary gap that will close as AI improves. It’s a structural characteristic of how large language models are built. They’re trained on text that exists on the internet in sufficient quantity to learn from. For most cities with populations under 100,000, that text is sparse, infrequently updated, and often wrong.

    Hyperlocal content — accurate, current, consistently published coverage of a specific geography — is rare in a way that most content isn’t. And in an AI-native information environment, rare and accurate is exactly where the value concentrates.

    Why Local Knowledge Is Structurally Underrepresented in AI

    AI training data skews heavily toward content that exists in large quantities online: national news, academic papers, major publication archives, Reddit, Wikipedia, GitHub. These sources produce enormous volumes of text that models can learn from.

    Local news does not. The economics of local journalism have been collapsing for two decades. The number of reporters covering city councils, school boards, local business openings, zoning decisions, and community events has dropped dramatically. What remains is often thin, infrequent, and not structured for machine consumption.

    The result: AI systems have sophisticated knowledge about how city governments work in general, and almost no reliable knowledge about how any specific city government works right now. They know what a school board is. They don’t know what the school board in Belfair, Washington decided last Tuesday.

    What This Means for Local Publishers

    A local publisher producing accurate, structured, consistently updated coverage of a specific geography owns something that cannot be replicated by scraping the internet or expanding a training dataset. The knowledge requires physical presence, community relationships, and ongoing attention. It’s human-generated in a way that scales slowly and degrades immediately when the human stops showing up.

    That non-replicability is the asset. An AI company that wants reliable, current information about Mason County, Washington has one option: get it from the people who are there, covering it, every week. That’s a position of genuine leverage.

    The API Model for Local Content

    The practical expression of this leverage is a content API — a structured, authenticated feed of local coverage that AI systems and developers can subscribe to. The subscribers aren’t necessarily individual readers. They’re:

    • Local AI assistants being built for specific communities
    • Regional business intelligence tools
    • Government and civic tech applications
    • Real estate platforms that need current local information
    • Journalists and researchers who need structured local data
    • Anyone building an AI product that touches your geography

    None of these use cases require the local publisher to change what they’re already doing. They require packaging it — adding consistent structure, maintaining an API layer, and making the feed available to subscribers who will pay for reliable local intelligence.
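
    As a sketch of what “consistent structure” can mean in practice, here is one illustrative feed item in Python. The fields and the example story are invented for illustration; the point is that every story carries the same machine-readable metadata every time it is published.

        import json
        from dataclasses import asdict, dataclass

        @dataclass
        class LocalStory:
            headline: str
            geography: str   # the county or city the story covers
            category: str    # e.g. "local-government", "schools", "business"
            published: str   # ISO 8601 date
            summary: str
            url: str

        item = LocalStory(
            headline="Council adopts updated downtown sign code",
            geography="Example County, WA",
            category="local-government",
            published="2025-02-11",
            summary="One-sentence summary of the decision and who it affects.",
            url="https://example.com/news/sign-code-update",
        )

        # The same fields, in the same order, on every item is what subscribers rely on.
        print(json.dumps(asdict(item), indent=2))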

    The Compounding Advantage

    Local knowledge compounds in a way that national content doesn’t. Every article about a specific community adds to a body of knowledge that makes the next article more valuable — because it can reference and build on what came before. A publisher who has been covering Mason County for three years has a contextual richness that no new entrant can replicate quickly.

    In an AI-native content environment, that accumulated local context is a moat. It’s not the kind of moat that requires capital to build. It requires consistency and presence. Both are things that a committed local publisher already has.

    Why is hyperlocal content valuable for AI systems?

    AI training data is sparse and unreliable for most small cities and towns. Accurate, current, consistently published local coverage is structurally scarce — it can’t be replicated by scraping the internet because the content doesn’t exist there in reliable form. That scarcity creates value in an AI-native information environment.

    Who would pay for a local content API?

    Local AI assistant builders, regional business intelligence tools, civic tech applications, real estate platforms, journalists, researchers, and developers building products that touch a specific geography. The subscriber is typically a developer or AI system, not an individual reader.

    Does a local publisher need to change their content to make it API-worthy?

    Not fundamentally. The content just needs to be consistently structured, accurately maintained, and published on a platform with a REST API. The knowledge is the hard part — the technical layer is relatively straightforward to add on top of existing publishing infrastructure.

  • 8 Industries Sitting on AI-Ready Knowledge They Haven’t Packaged Yet

    8 Industries Sitting on AI-Ready Knowledge They Haven’t Packaged Yet

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Most discussions about AI and knowledge focus on what AI already knows. The more interesting question is what it doesn’t — and where the humans who hold that missing knowledge are concentrated.

    Here are eight industries where the gap between human knowledge and AI-accessible knowledge is largest, and where the first person to systematically package and distribute that knowledge will have a durable advantage.

    1. Trades and Skilled Contracting

    Restoration contractors, plumbers, electricians, HVAC technicians — these industries run on tacit knowledge that has never been written down anywhere AI has been trained on. How water behaves differently in a 1940s balloon-frame house versus a 1990s platform-frame. Which suppliers actually deliver on time in which markets. What a claim adjuster will approve and what they’ll fight. This knowledge lives in the heads of working tradespeople and almost nowhere else. A restoration contractor who systematically publishes what they know about their trade creates a source of record that no LLM training corpus has ever had access to.

    2. Hyperlocal News and Community Intelligence

    AI systems know almost nothing accurate and current about most cities with populations under 100,000. They have no reliable data about local government decisions, zoning changes, business openings, school board dynamics, or community events in the vast majority of American towns. A local publisher producing accurate, structured, consistently updated coverage of a specific geography owns something genuinely scarce — and it’s the kind of current, location-specific information that AI assistants are being asked about constantly.

    3. Healthcare and Medical Specialties

    Clinical knowledge at the specialist level — how a specific condition presents in specific populations, what treatment protocols actually work in practice versus what the textbooks say, how to navigate insurance approvals for specific procedures — is dramatically underrepresented in AI training data. Practitioners who publish systematically about their clinical experience are creating a resource that medical AI applications will pay for access to.

    4. Legal Practice and Jurisdiction-Specific Law

    General legal information is well-covered. Jurisdiction-specific, practice-area-specific, and procedurally specific legal knowledge is not. How a particular judge in a particular county tends to rule on specific motion types. How local court practices differ from the official procedures. What arguments actually work in a specific venue. Attorneys with deep local practice knowledge are sitting on an information asset that legal AI tools are actively hungry for.

    5. Agriculture and Regional Farming

    Farming knowledge is intensely regional. What works in the Willamette Valley doesn’t work in Central California. Crop rotation strategies, soil amendment approaches, pest management, water management — all of it varies dramatically by microclimate, soil type, and local practice tradition. The accumulated knowledge of experienced farmers in a specific region is largely oral, rarely published, and almost entirely absent from AI training data. Extension offices and agricultural cooperatives that systematically document regional best practices are building something AI systems will need.

    6. Veteran Benefits and Government Navigation

    Navigating the VA, understanding how to build an effective disability claim, knowing which VSOs in which regions are actually effective, understanding how different conditions interact in the ratings system — this knowledge is held by experienced advocates, veterans service officers, and attorneys who have processed hundreds of claims. It’s the kind of procedural, outcome-based knowledge that AI assistants give confident but frequently wrong answers about, because the real knowledge isn’t online in a reliable form.

    7. Niche Retail and Specialty Markets

    Independent watch dealers, vintage guitar shops, specialty food importers, rare book dealers — businesses that operate in deep specialty markets accumulate knowledge about their inventory, their suppliers, their customers, and their market that no general AI has. The person who has been buying and selling vintage Rolex watches for twenty years knows things about specific reference numbers, condition grading, authentication, and market pricing that would be genuinely valuable to anyone building an AI tool for that market.

    8. Professional Services and Methodology

    Marketing agencies, management consultants, financial advisors, executive coaches — anyone who has developed a distinctive methodology through years of client work. The frameworks, playbooks, diagnostic tools, and hard-won lessons that experienced professionals have built represent some of the highest-value knowledge that AI systems currently lack access to. The consultant who has run 200 strategic planning processes has pattern recognition that no LLM has encountered in training. Packaging that into a structured, publishable, API-accessible form is both a content strategy and a product.

    In every one of these industries, the window to be the first credible, structured, consistently updated knowledge source in your vertical is open. It won’t be open indefinitely.

    Which industries have the most AI-accessible knowledge gaps?

    Trades and contracting, hyperlocal news, medical specialties, jurisdiction-specific legal practice, regional agriculture, veteran benefits navigation, specialty retail markets, and professional services methodology all have significant gaps between what experienced practitioners know and what AI systems can reliably access.

    What makes a knowledge gap an opportunity?

    When the knowledge is specific, current, human-curated, and absent from existing AI training data — and when there’s a clear audience of AI systems and agents that need it. The combination of scarcity and demand is what creates the market.

    How do you know if your industry has a valuable knowledge gap?

    Ask an AI assistant a specific, detailed question about your specialty. If the answer is confidently wrong, superficially correct, or missing the nuance that only practitioners know, you’re looking at a gap. That gap is the asset.