Category: Content Strategy

Content is not blog posts — it is infrastructure. Every article, landing page, and resource you publish either builds authority or wastes bandwidth. We cover the architecture behind content that ranks, converts, and compounds: hub-and-spoke models, pillar pages, content velocity, and the editorial strategies that turn a restoration company's website into the most authoritative source in its market.

Content Strategy covers editorial planning, hub-and-spoke content architecture, pillar page development, content velocity frameworks, topical authority mapping, keyword clustering, content gap analysis, and publishing workflows designed for restoration and commercial services companies.

  • The Knowledge Token Economy: Earning API Access Through What You Know

    The Distillery

    What if access to an API wasn’t purchased — it was earned? Not through a subscription, not through a credit card, but through the value of what you know.

    That is the premise of the knowledge token economy: a system where people fill out forms, answer questionnaires, and complete structured interviews, and the depth and novelty of what they contribute determine how much API access they receive in return. Knowledge in, capability out.

    How the Contribution Loop Works

    The mechanic is straightforward. A person enters the system through a form — static, dynamic, or choose-your-own-adventure style. Their responses are ingested, scored against the existing knowledge base, and a token grant is issued proportional to the contribution’s value. Those tokens translate directly into API calls, rate limit increases, or access to higher-capability endpoints.

    The scoring event is the critical moment. It is not the act of submitting answers that generates tokens — it is the delta. The gap between what the system knew before the submission and what it knows after. A generic answer to a common question scores near zero. A 30-year restoration adjuster explaining exactly how Xactimate line items get disputed in hurricane-affected markets — that scores high. The system gets smarter; the contributor gets access.
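
    As a rough sketch of that loop, with the knowledge base, the coverage measure, and the token exchange rate all stood in by hypothetical placeholders, the mechanic reduces to a few lines of Python:

    ```python
    from dataclasses import dataclass, field

    TOKENS_PER_POINT = 10  # assumed exchange rate: one point of knowledge delta earns 10 API tokens

    @dataclass
    class KnowledgeBase:
        # Hypothetical stand-in: topic -> set of facts already captured.
        facts: dict = field(default_factory=dict)

        def coverage(self, topic: str) -> int:
            return len(self.facts.get(topic, set()))

        def ingest(self, topic: str, new_facts: set) -> None:
            self.facts.setdefault(topic, set()).update(new_facts)

    def process_submission(kb: KnowledgeBase, topic: str, extracted_facts: set) -> int:
        """Score the delta between what the system knew before and after, then grant tokens."""
        before = kb.coverage(topic)
        kb.ingest(topic, extracted_facts)
        delta = kb.coverage(topic) - before   # answers that repeat known facts add roughly zero
        return delta * TOKENS_PER_POINT       # novel, specific contributions earn real access

    # A generic answer scores near zero; a genuinely new claim earns a grant.
    kb = KnowledgeBase({"xactimate-disputes": {"carriers dispute line items"}})
    print(process_submission(kb, "xactimate-disputes", {"carriers dispute line items"}))         # 0
    print(process_submission(kb, "xactimate-disputes", {"hurricane markets see O&P pushback"}))  # 10
    ```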

    Form Types and Knowledge Depth

    Not all forms extract knowledge equally. The format determines the depth ceiling.

    Static forms establish baseline data: industry, credentials, years of experience, geography. They orient the system but rarely produce high-scoring contributions on their own. Their value is in establishing contributor identity and seeding the dynamic layer.

    Dynamic forms branch based on answers. When a contributor demonstrates domain knowledge in one area, the form follows them deeper into that area rather than moving on to the next generic question. A plumber who mentions slab leak detection gets routed into a sequence that extracts everything they know about that specific problem. Someone without that knowledge gets routed elsewhere. The form adapts to the contributor’s actual knowledge surface.

    Choose-your-own-adventure forms give contributors agency over which knowledge threads they follow. This produces the highest-quality contributions because people naturally move toward the areas where they have the most to say. It also produces the most honest signal — a contributor who keeps choosing the shallow path is telling you something about the limits of their expertise.

    The Grading Model

    Three variables determine a contribution’s score:

    Novelty. Does this add something the knowledge base does not already contain? A response that confirms existing knowledge scores low. A response that contradicts, nuances, or extends existing knowledge scores high. The system is not looking for agreement — it is looking for new signal.

    Specificity. Vague answers have low information density. Specific answers — with named processes, real numbers, identified edge cases, and concrete examples — have high information density. “We usually do it within a few days” scores low. “Florida public adjusters typically file the supplemental within 14 days of the initial estimate to stay inside the appraisal demand window” scores high.

    Density. How much usable signal per word? Long answers are not automatically high-scoring. A contributor who gives a two-sentence answer that contains a genuinely novel, specific insight outscores someone who writes three paragraphs of generalities. The system is measuring information content, not volume.
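
    One way those three variables could roll up into a single score (the weights and the word-count proxy for density below are illustrative assumptions, not a published formula):

    ```python
    def grade_contribution(novelty: float, specificity: float, word_count: int, useful_claims: int) -> float:
        """novelty and specificity are rated on a 0 to 1 scale; density is derived from the raw counts."""
        density = min(useful_claims / max(word_count / 50, 1), 1.0)  # usable claims per ~50 words, capped at 1
        # Novelty carries the most weight: confirming what the base already knows scores near zero.
        return round(0.5 * novelty + 0.3 * specificity + 0.2 * density, 2)

    # A two-sentence answer with one genuinely new, specific claim...
    print(grade_contribution(novelty=0.9, specificity=0.8, word_count=40, useful_claims=1))   # 0.89
    # ...outscores three paragraphs of generalities.
    print(grade_contribution(novelty=0.1, specificity=0.2, word_count=400, useful_claims=2))  # 0.16
    ```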

    Token Economics

    Tokens can be structured in multiple ways depending on what the API operator wants to incentivize.

    The simplest model maps tokens directly to API calls: one token, one call. A contributor who scores in the top tier earns enough tokens for meaningful API usage. A contributor who submits low-value responses earns modest access — enough to see the system work, not enough to build on it seriously.

    A tiered model unlocks capability rather than just volume. Low-score contributors get basic endpoint access. Mid-score contributors get higher rate limits and richer data. Top-score contributors get access to premium endpoints, bulk query capabilities, or priority processing. This creates a self-sorting system where domain experts naturally end up with the most powerful access.

    A reputation model layers on top of either approach. Each contributor builds a score over time. Early submissions carry full novelty weight. As a contributor’s personal knowledge surface gets exhausted — as the system learns everything they know about their specialty — their marginal contribution value decreases. This prevents gaming through repetition and rewards contributors who keep bringing genuinely new knowledge to the system.
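
    A sketch of how a tiered grant and a reputation-style decay might combine, assuming a contribution score on a 0 to 1 scale; the tier thresholds, token amounts, and decay rate are placeholders rather than a working policy:

    ```python
    TIERS = [  # (minimum score, tokens granted, capability unlocked)
        (0.8, 5000, "premium endpoints, bulk queries, priority processing"),
        (0.5, 1000, "higher rate limits, richer data"),
        (0.0, 100,  "basic endpoint access"),
    ]

    def grant_tokens(score: float, prior_contributions_on_topic: int) -> tuple[int, str]:
        # Marginal value decays as a contributor's knowledge surface on a topic gets exhausted.
        decay = 0.8 ** prior_contributions_on_topic
        for threshold, tokens, capability in TIERS:
            if score >= threshold:
                return int(tokens * decay), capability
        return 0, "no access"

    print(grant_tokens(0.9, prior_contributions_on_topic=0))  # (5000, 'premium endpoints, ...')
    print(grant_tokens(0.9, prior_contributions_on_topic=5))  # (1638, ...) same expertise, re-covered ground
    ```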

    The Anti-Gaming Layer

    Any token economy will be gamed. People will submit the same high-scoring answer repeatedly, pattern-match to questions they have seen before, or collaborate to flood the system with synthetic responses. The anti-gaming architecture needs to be built in from the start, not retrofitted after the first abuse case.

    Novelty detection penalizes answers that match previous submissions semantically, not just literally. A reworded version of a prior high-scoring answer should score significantly lower. Contributor fingerprinting tracks the knowledge surface each individual has already covered and reduces scoring weight for re-covered ground. Anomaly detection flags contributors whose scoring patterns are statistically improbable — consistently perfect scores across unrelated domains are a signal worth investigating.
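
    A minimal sketch of the novelty-detection piece: a production system would compare embeddings, so the word-overlap similarity below is only a self-contained stand-in, and the penalty curve is an assumption.

    ```python
    def similarity(a: str, b: str) -> float:
        # Stand-in for semantic similarity; a real system would embed both answers and compare vectors.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def novelty_multiplier(answer: str, prior_answers: list[str]) -> float:
        """Return a multiplier in [0, 1]; near-duplicates of earlier submissions score near zero."""
        closest = max((similarity(answer, prior) for prior in prior_answers), default=0.0)
        return round(1.0 - closest, 3)

    prior = ["supplementals are filed within 14 days of the initial estimate"]
    print(novelty_multiplier("supplementals are filed within 14 days of the initial estimate", prior))  # 0.0
    print(novelty_multiplier("adjusters push back on O&P for multi-trade hurricane losses", prior))     # 1.0
    ```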

    The Strategic Frame

    What makes this model different from a survey with a gift card is the compounding dynamic. Each contribution makes the knowledge base more valuable, which makes the API more valuable, which increases the value of token access, which increases the incentive to contribute high-quality knowledge. The system gets smarter and more valuable over time through the contributions of the people who use it.

    The contributors who understand their own knowledge — who can articulate what they know specifically and precisely — end up with the most API access. The system rewards epistemic clarity. That is not a design quirk. It is the point.

  • The Knowledge Exchange Economy: What Businesses Can Trade for Expert Insights

    The Distillery

    Every business has a waiting room problem. Customers sit idle, phones in hand, burning time that nobody captures. The knowledge exchange model flips that equation: offer something tangible — a free oil change, a coffee, a service credit — in return for a structured voice interview with an AI. The conversation gets transcribed, processed, and converted into industry intelligence that compounds over time.

    This is not a survey. It is a transaction — one where both sides walk away with something real.

    The Businesses That Make This Work

    Not every venue is equal. The model performs best where three conditions align: captive time, domain knowledge, and a credible exchange offer.

    Automotive Dealerships and Service Centers

    A customer waiting 90 minutes for a service appointment on a $40,000 vehicle is one of the highest-value interview subjects available. The demographic skews toward homeowners, business operators, and tradespeople — people with active relationships with contractors, insurance companies, and service vendors. A free oil change ($40–$60 value) is a natural, frictionless exchange that fits the existing service relationship.

    The knowledge collected here is high-signal: home maintenance decisions, contractor vetting behavior, brand loyalty drivers, insurance claim experience. And because automotive service is habitual — the same customer returns every 3–6 months — topic rotation allows the same individual to be interviewed on entirely different subjects across visits without fatigue.

    Specialty Trade and Supply Shops

    A person browsing a plumbing supply house has already self-selected as a domain expert. You are not screening for knowledge — it arrives pre-filtered. The same applies to HVAC supply stores, electrical wholesalers, restoration equipment rental shops, and flooring distributors. The knowledge depth available in these environments is exceptional, and the foot traffic, while lower than consumer retail, is densely qualified.

    A discount on next purchase, a free product sample, or a referral credit aligns with the transactional context better than a gift card. The goal is to make the offer feel like a natural extension of the existing vendor relationship, not a detour from it.

    Contractor and Home Service Appointment Queues

    When a restoration contractor, HVAC technician, or roofing company sends a team out for an estimate, there is often a 15–30 minute window before the conversation starts. That window is currently dead time. A tablet-based voice interview with a homeowner — optional, in exchange for a service discount — turns dead time into structured knowledge.

    For restoration networks, this is the highest-priority deployment target. The homeowner knowledge collected here — property condition, vendor relationships, insurance claim navigation, decision-making around major repairs — directly feeds contractor content networks that produce compounding SEO value.

    Coffee Shops and Cafés

    The latte exchange is the cheapest attention buy available. A $6 drink buys 5–8 minutes from a broad demographic cross-section. The problem is variability. Without venue-specific targeting, knowledge quality is unpredictable. A café near a hospital skews toward healthcare workers. One near a job site skews toward tradespeople. Location selection is the quality filter. This model works best as a campaign sprint, not a permanent fixture.

    Waiting Rooms: Medical, Legal, Insurance, Government

    Captive time is abundant in institutional waiting rooms. The problem is emotional state. Someone waiting for a medical appointment or legal consultation is often stressed and guarded. This context produces experiential knowledge — how people navigate complex systems — but it is poorly suited to deep technical intelligence gathering. The exchange offer matters more here than anywhere else.

    The Diminishing Returns Problem

    Every knowledge exchange model eventually hits a ceiling. Three variables determine the return curve:

    Time cost versus knowledge depth. A 3-minute coffee shop interview produces surface awareness. A 15-minute dealership interview produces actionable depth. The exchange value must scale proportionally. The ask and the offer must be in the same weight class.

    Knowledge specificity versus content utility. General consumer sentiment is cheap to collect and cheap to use. Vertical expertise — how a 30-year HVAC technician thinks about refrigerant transitions, or how a jewelry appraiser evaluates estate pieces — is rare and highly monetizable. The exchange reward should reflect the scarcity of the knowledge, not just the time spent.

    Repeat exposure decay. The same person in the same context produces diminishing returns after one or two interviews. Topic rotation is the primary lever for extending the value of a returning interviewee. A homeowner interviewed about contractor relationships in spring can be interviewed about insurance claim history in fall. The person is the same; the knowledge surface is entirely different.

    The Autonomous Pipeline

    For the model to scale beyond a manual operation, the interview-to-content pipeline must run without human intervention at each step. A voice AI handles the interview on a tablet mounted at the venue, following a structured question protocol designed around the specific knowledge domain of that venue type. Transcription happens in real time. The transcript is routed to Claude, which extracts structured knowledge, formats it as a knowledge node, and pushes it to a content pipeline. High-value nodes get flagged for article production. Standard nodes are logged for future use.
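
    In sketch form, with the Claude extraction step replaced by a placeholder and the field names and value threshold treated as assumptions, the routing logic looks something like this:

    ```python
    from dataclasses import dataclass, asdict
    import json

    HIGH_VALUE_THRESHOLD = 0.7  # assumed cutoff for flagging a node for article production

    @dataclass
    class KnowledgeNode:
        venue_type: str
        topic: str
        summary: str
        value_score: float

    def extract_knowledge_node(transcript: str, venue_type: str) -> KnowledgeNode:
        # Placeholder for the LLM extraction call that turns a raw transcript into structured fields.
        return KnowledgeNode(venue_type=venue_type, topic="contractor vetting",
                             summary=transcript[:120], value_score=0.82)

    def route(transcript: str, venue_type: str) -> dict:
        node = extract_knowledge_node(transcript, venue_type)
        destination = "article_production" if node.value_score >= HIGH_VALUE_THRESHOLD else "knowledge_log"
        return {"destination": destination, "node": asdict(node)}

    print(json.dumps(route("Homeowner explains how they shortlisted three roofers...", "dealership"), indent=2))
    ```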

    Consent is captured at interview start — a single tap-to-accept screen that clearly states the knowledge is being collected for content purposes. This covers legal exposure without creating friction that kills compliance rates.

    The Strategic Frame

    What makes this different from a survey or focus group is the output format. Traditional knowledge collection produces reports that sit on drives. This model produces structured, AI-ready knowledge nodes that slot directly into a content production pipeline. Every conversation becomes an asset. Every asset compounds.

    The goal is not to conduct interviews. The goal is to build a system where knowledge flows continuously from the people who have it to the platforms that need it — and everyone involved gets something real in return.

  • The Distillery: Hand-Crafted Batches of Distilled Knowledge, Available as API Feeds


    The Distillery

    Most content on the internet is noise. It exists to rank, to fill space, to signal presence. It is not dense enough to be useful to the people who actually need to know the thing it claims to cover. And it is certainly not dense enough to be valuable as a feed that an AI system pulls from to answer real questions.

    The Distillery is different. It is a named section of Tygart Media where we produce small batches of genuinely high-density knowledge on specific topics — researched from real search demand data, written to a standard where every sentence earns its place, and published in structured form that both humans and AI systems can use.

    Each batch is available as a category API feed. Subscribers get authenticated access to the full batch as structured JSON — updated as new knowledge is added, versioned so auditors and AI systems can cite the exact vintage they’re drawing from.

    What a Batch Is

    A batch is a curated body of knowledge on a specific topic, built from three ingredients: real demand data (what people are actually searching for and what advertisers are paying to reach), primary research (direct engagement with the subject matter, not summarizing what others have written), and editorial discipline (the $5 filter — would someone pay $5 a month to pipe this feed into their AI? If not, it doesn’t ship).

    Each batch has a name, a number, and a version. Batch 001 is the Restoration Carbon Protocol — the only published Scope 3 emissions calculation standard for property restoration work. Batch 005 is the Restoration Industry Knowledge Base — a structured body of operational knowledge for restoration contractors who want to build AI-native systems without starting from scratch.

    Batches are not blog posts. They are not opinion columns. They are not rephrased Wikipedia entries. They are the kind of specific, accurate, hard-earned knowledge that takes real work to produce and that AI systems actively need but largely cannot find in their training data.

    How the API Works

    Every Distillery batch is accessible through the Tygart Content Network API. Subscribers receive an API key at signup. The key unlocks authenticated access to the batch endpoints they’ve subscribed to. Each endpoint returns structured JSON — articles by category, filterable by date and topic, with consistent metadata that AI agents can process directly.

    The response format is designed for machine consumption: clean plain text content, explicit categorization, publication timestamps for recency evaluation, and topic tags that allow agents to assess relevance before processing. The same feed that powers a human reader’s understanding of a topic powers an AI agent’s ability to answer questions about it accurately.

    Rate limits are generous at the $5 community tier — 100 requests per day, sufficient for an AI assistant pulling daily updates. Professional tiers at $50/month offer higher limits, webhook push when new content publishes, and bulk historical pulls for training and fine-tuning use cases.
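
    A sketch of what a subscriber call might look like; the endpoint path, header name, query parameters, and response fields here are illustrative rather than the actual schema:

    ```python
    import requests  # third-party HTTP client

    API_KEY = "your-key-here"         # issued at signup
    BASE = "https://example.com/api"  # placeholder base URL

    resp = requests.get(
        f"{BASE}/batches/rcp/articles",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"since": "2025-01-01", "topic": "scope-3"},  # filterable by date and topic
        timeout=10,
    )
    resp.raise_for_status()

    for article in resp.json()["articles"]:
        # Assumed fields: plain-text content, category, timestamp, and topic tags
        # an agent can use to assess relevance before processing.
        print(article["published_at"], article["category"], article["title"], article["tags"])
    ```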

    Why Information Density Is the Moat

    The content that survives in an AI-mediated information environment is the content that contains something worth extracting. Not something that sounds authoritative — something that actually is. The difference is information density: the ratio of useful, specific, actionable knowledge to total words published.

    Every Distillery batch is held to the same standard: if an AI system pulled from this feed to answer a question in this domain, would the answer be more accurate and more specific than if the AI had relied on its training data alone? If yes, the batch has value. If no, we haven’t done enough work yet.

    This standard is harder to meet than it sounds. It eliminates most of what gets published under the banner of “thought leadership” and “content marketing.” It requires knowing the subject well enough to say things that couldn’t be said by someone who spent an afternoon with a search engine. It is the reason The Distillery produces small batches rather than high volumes.

    Current Batches

    Batch 001 — Restoration Carbon Protocol (RCP)
    The only published Scope 3 ESG emissions calculation standard for property restoration work. Covers all five core restoration job types with actual emission factor tables, complete worked examples, and the 12-point data capture standard. Designed for restoration contractors serving commercial clients with 2027 SB 253 Scope 3 reporting obligations. 23 articles. Updated monthly.

    Batch 002 — The Knowledge Economy API Layer
    The conceptual and practical framework for turning human expertise into machine-consumable, API-distributable knowledge products. For anyone with domain expertise considering how to package and monetize it in an AI-native information environment. 8 articles. Updated as the landscape develops.

    Batch 003 — Mason County Minute
    Current, structured, consistently maintained coverage of Mason County, Washington — local government, business, community, real estate, and public affairs. The only machine-readable hyperlocal intelligence feed for this geography. Updated weekly.

    Batch 004 — Belfair Bugle
    Hyperlocal coverage of Belfair, WA and the North Mason community. Current events, local government, community intelligence. The only structured feed for this geography. Updated weekly.

    Batch 005 — Restoration Industry Knowledge Base (coming)
    Operational knowledge infrastructure for restoration contractors — the 50 knowledge nodes every restoration company should have documented, the AI-native knowledge architecture that replaces manual training, and the integration patterns connecting job management systems to knowledge delivery. In development.

    Batch 006 — AI Agency Playbook (coming)
    The operating methodology behind Tygart Media — how a single operator runs 27+ client sites, deploys AI-native content at scale, and builds knowledge infrastructure rather than content volume. For agency owners and solo operators building AI-native practices. In development.

    Who This Is For

    The Distillery API is for three kinds of subscribers:

    Developers building AI tools who need reliable, current, domain-specific knowledge feeds to ground their applications in accurate information. The Restoration Carbon Protocol feed, for example, gives anyone building an AI assistant accurate restoration-specific ESG data without having to research and curate it themselves.

    Businesses who want AI systems that actually know their industry. A restoration company whose AI assistant draws from the RCP feed knows more about Scope 3 emissions calculation for their job types than any general-purpose AI. A commercial property manager whose AI assistant pulls from the RCP feed can answer contractor ESG questions accurately instead of hallucinating plausible-sounding nonsense.

    Content teams and agencies who want structured, current, reliable source material for their own content production — not to copy, but to ensure accuracy and specificity in their coverage of these domains.

    The Standard We Hold Ourselves To

    Every article in every batch passes one test before it ships: would someone pay $5 a month to pipe this feed into their AI? Not to read it themselves — to have their AI draw from it continuously as a trusted source in this domain.

    If the answer is no — if the content is too generic, too thin, or too derivative to justify a subscription — it doesn’t ship. The batch waits until the knowledge is actually there.

    This makes The Distillery slow. It makes it small. And it makes it worth subscribing to.

  • Build Your Own KnowHow — And Then Go Further


    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    KnowHow is one of the most important things happening in the restoration industry right now. If you’re not familiar with it: it’s an AI-powered platform that takes your company’s operational knowledge — your SOPs, your onboarding materials, your hard-won process documentation — and turns it into an on-demand resource every team member can access from their phone. Your best technician’s knowledge stops walking out the door when they leave. Your new hire in Iowa follows the same protocol as your veteran in Texas. Your managers stop being human FAQ machines.

    It solves a real problem that has cost restoration companies enormous amounts of money in inconsistent work, slow onboarding, and institutional knowledge that evaporates with turnover.

    But KnowHow solves the internal problem. The knowledge stays inside your organization. And there is a second problem — the external one — that nobody has solved yet.

    The Internal Problem vs. The External Problem

    The internal problem is: your people don’t have access to what your company knows when they need it. KnowHow fixes that. The knowledge becomes accessible, searchable, consistent, and deliverable at scale across every location and every shift.

    The external problem is different: your clients, prospects, and contracting authorities have no way to verify that your company knows what it claims to know. They can read your capabilities statement. They can check your certifications. They can call references. But they can’t look inside your organization and confirm that your documented protocols are current, specific, and actually practiced — not just written down for the sake of winning a bid.

    In commercial restoration, that verification gap is expensive. Facility managers, FEMA contracting officers, insurance carriers, and national property management companies are making vendor decisions based on trust signals that are largely unverifiable. The company with the best pitch often wins over the company with the best protocols.

    An external knowledge API changes that dynamic completely.

    What an External Knowledge API Actually Is

    An external knowledge API is a structured, authenticated, publicly accessible feed of your operational knowledge — not your trade secrets, not your pricing, not your internal communications, but your documented protocols, your methodology, your standards, and your verified expertise. Published. Structured. Machine-readable. Available to anyone who needs to evaluate whether your company is the right partner for a complex job.

    Think of it as the difference between telling a client “we follow IICRC S500 water damage protocols” and showing them a live, structured endpoint where they can pull your actual documented water mitigation process — with timestamps that confirm it was updated last month, not in 2019.

    The internal KnowHow platform is the source. The external API is the window — carefully curated, access-controlled, and designed to answer the questions that matter to the people evaluating you.

    Who Cares About Your External Knowledge

    The list is longer than most restoration contractors realize.

    Commercial property managers and facility directors. A national hotel chain or healthcare system evaluating restoration vendors for their approved vendor program needs more than a certificate of insurance and a reference list. They want to know that your protocols are consistent across every job, that your team follows the same process whether the project manager is on-site or not, and that your documentation standards will hold up in a claim. An external knowledge feed — showing your water damage, fire damage, and mold remediation protocols in structured, current form — answers those questions before the conversation even starts.

    FEMA and government contracting. Federal disaster response contracts are awarded to companies that can demonstrate organizational capability at scale. The RFP process rewards documentation. A company that can point to an externally published, structured knowledge base as evidence of their operational maturity is presenting something most competitors don’t have. It’s not just a differentiator — it’s proof of the kind of institutional infrastructure that large government contracts require.

    Insurance carriers and TPAs. Third-party administrators and carrier programs are increasingly using AI tools to evaluate and route claims to preferred vendors. A restoration company whose documented protocols are structured and machine-readable — available for an AI system to pull and verify against claim requirements — is positioned for the way preferred vendor selection is heading, not the way it used to work.

    Commercial real estate and institutional property owners. REITs, hospital systems, university facilities departments, and large corporate real estate portfolios are all moving toward vendor relationships that have verifiable documentation standards. An external knowledge API gives them something they can actually audit — not just a sales presentation.

    How to Build It: The Two-Layer Stack

    The stack that makes this work has two layers, and KnowHow already gives you the first one.

    Layer one — internal capture and organization (KnowHow’s job). Use KnowHow, or an equivalent internal knowledge platform, to capture and organize your operational knowledge. Document your protocols rigorously. Keep them current. Assign ownership so they don’t go stale. The discipline required here is real, but it’s also the discipline that makes your company better operationally regardless of what you do with the knowledge externally. This layer is the foundation.

    Layer two — external publication and API distribution (the next layer). Select the knowledge that is appropriate to share externally — your methodology, your standards, your certifications, your documented approach to specific job types — and publish it in a structured, consistently maintained form. This can be as simple as a well-organized section of your company website with current protocol documentation, or as sophisticated as a full REST API endpoint that clients and AI systems can query directly. The key requirements are structure (consistent format, clear categorization), currency (updated when protocols change, timestamped), and accessibility (easy for a prospect or evaluator to find and verify).

    The gap between layer one and layer two is smaller than it sounds. If you’ve already done the internal documentation work in KnowHow, the editorial work of curating an external-facing version of that knowledge is incremental. You’re not building from scratch — you’re deciding what to show and building the window to show it through.
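
    As an illustration of what one externally published protocol record could look like in structured form (the field names and values are hypothetical, not a standard schema):

    ```python
    import json

    protocol_record = {
        "protocol_id": "water-mitigation-cat3",
        "title": "Category 3 Water Loss Mitigation",
        "job_type": "water_damage",
        "standard_referenced": "IICRC S500",
        "last_reviewed": "2026-01-15",      # currency: timestamped, updated when the protocol changes
        "owner": "Director of Operations",  # assigned ownership keeps the document from going stale
        "steps": [
            "Containment and engineering controls before extraction",
            "Moisture mapping documented per affected assembly",
            "Daily drying logs with psychrometric readings",
        ],
        "permanent_url": "https://example.com/methodology/water-mitigation-cat3",
    }

    print(json.dumps(protocol_record, indent=2))
    ```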

    The Credential That No Certificate Can Replace

    Certifications are static. An IICRC certification tells a client you passed a test. It doesn’t tell them what your company actually does when a technician encounters a Category 3 water loss in a 1960s commercial building with asbestos-containing materials in the subfloor.

    External knowledge does. It shows the specific, documented, currently maintained thinking your company applies to that situation. It’s living proof of operational maturity, not a snapshot from the last time someone studied for an exam.

    In the commercial restoration market, where the jobs are large, the documentation requirements are significant, and the clients are sophisticated, that distinction is worth money. The companies that build this layer now — while most competitors are still treating knowledge as purely internal — will have a credential that can’t be quickly replicated.

    The Practical Starting Point

    You don’t need a full API to start. The minimum viable version of an external knowledge layer is a structured, well-maintained “Our Methodology” section on your website — not a generic “our process” marketing page, but actual documented protocols organized by job type, with clear version dates and enough specificity that an evaluator can see you’ve actually done the work.

    From there, the path to a structured API is incremental: add consistent categorization, ensure each protocol document has a permanent URL, and eventually expose that structure through a queryable endpoint. Each step makes the credential more verifiable and more valuable.

    KnowHow got the industry to take internal knowledge seriously. The companies that figure out how to take the next step — making that knowledge externally verifiable and machine-readable — will have something the market has never seen before in restoration.

    What is the difference between internal and external knowledge in restoration?

    Internal knowledge (what KnowHow manages) is operational documentation accessible to your own team — SOPs, onboarding materials, process guides. External knowledge is a curated version of that same expertise published in a structured, verifiable form for clients, contracting authorities, and AI systems to access and evaluate.

    Why would a restoration company publish its knowledge externally?

    Because commercial clients, FEMA, insurance carriers, and institutional property managers need to verify operational maturity before awarding contracts. A structured, current, machine-readable knowledge base is a stronger credential than certifications or capabilities statements — it shows documented, maintained expertise rather than a static snapshot.

    What is an external knowledge API for a restoration company?

    A structured, authenticated feed of your documented protocols, methodology, and standards — published in a format that clients, evaluators, and AI systems can query directly. It turns your operational knowledge into a verifiable, market-facing credential rather than keeping it purely internal.

    Who specifically benefits from a restoration company’s external knowledge API?

    Commercial facility managers building approved vendor programs, FEMA and government contracting officers evaluating organizational capability, insurance carriers and TPAs using AI tools to route claims to preferred vendors, and institutional property owners who need auditable vendor documentation standards.

    Does a restoration company need KnowHow to build an external knowledge API?

    No — any internal knowledge platform or even rigorous in-house documentation works as the foundation. KnowHow accelerates the internal capture work, which makes the external publication step more realistic. But the two-layer stack works with any internal knowledge infrastructure that produces well-documented, current, organized protocols.

  • The Human Expertise Gap in AI: Why Tacit Knowledge Is the Next Scarce Resource


    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Large language models were trained on text. Enormous quantities of text — more than any human could read in thousands of lifetimes. But text is not knowledge. Text is the residue of knowledge that was visible enough, and important enough, for someone to write down and publish somewhere that a training crawler could find it.

    The vast majority of what experienced humans actually know was never written down. It was learned by doing, transmitted by watching, refined through failure, and held entirely in the heads of people who couldn’t have articulated it systematically even if they wanted to.

    This is the human expertise gap. And it is the defining feature of where AI currently falls short.

    What Tacit Knowledge Actually Is

    Tacit knowledge is the kind you can’t easily explain but reliably apply. A master craftsperson knows when something is right by feel before they can measure it. An experienced clinician senses when something is wrong before the test results confirm it. A veteran contractor knows which subcontractors will actually show up on a Tuesday in November just from having worked with them — knowledge that no review site has ever captured accurately.

    This knowledge exists at every level of every industry. Most of it has never been written down because the people who hold it are too busy using it to document it, because the incentive to document was never strong enough, or because no one ever asked in a form they could answer systematically.

    Why AI Can’t Close This Gap on Its Own

    The naive assumption is that AI will eventually capture tacit knowledge by observing enough human behavior — that more data, more modalities, more sensor inputs will eventually replicate what experienced humans know intuitively.

    This misunderstands the nature of the gap. Tacit knowledge isn’t just undocumented data. It’s judgment that was built through embodied experience — through having made the wrong call and learned from it, through having seen the same situation hundreds of times in slightly different forms, through having relationships that carry context no outsider can access. These are not data problems. They’re experience problems.

    AI can get asymptotically close to replicating some of this. But the closer it gets, the more valuable the verified human source becomes — because the question shifts from “does AI know this at all” to “how do we know the AI’s answer is correct,” and the only reliable answer to that question is “because a human who actually knows verified it.”

    The Window That’s Open Right Now

    There is a specific window in the development of AI where tacit knowledge held by humans is more valuable than it will ever be again. We’re in it now.

    AI systems are capable enough that people trust them with real questions — questions about their health, their legal situation, their business decisions, their trade. But AI systems are not capable enough to be reliably right about the specific, experience-based, local, industry-specific knowledge that those questions often require.

    The gap between trust and accuracy is the market. The people who figure out how to systematically capture, package, and distribute their tacit knowledge — in forms that AI systems can consume and cite — are building the content infrastructure for a post-search information environment.

    The Human Distillery as a Category

    What’s emerging is a new category of knowledge work: the human distillery. A person or organization that takes tacit knowledge held by experienced humans and refines it into something that AI systems can depend on.

    This isn’t ghostwriting. It’s not content marketing. It’s not thought leadership in the LinkedIn sense. It’s systematic extraction — the application of a disciplined process to get tacit knowledge out of human heads, give it structure, publish it at density, and make it available to the AI systems that will increasingly mediate how people get answers to important questions.

    The people who build this infrastructure now — while the gap is widest and the market is least crowded — are positioning themselves at the supply end of the most important information supply chain of the next decade.

    What is the human expertise gap in AI?

    The gap between what AI systems were trained on (text that was published online) and what experienced humans actually know (tacit knowledge built through embodied experience that was never systematically documented). This gap is structural, not temporary — it won’t close simply by training on more data.

    What is tacit knowledge?

    Knowledge you reliably apply but can’t easily articulate — the judgment of an experienced practitioner, the pattern recognition of someone who has seen the same situation hundreds of times, the relationship-based intelligence that no review site has ever captured. It’s built through experience, not text.

    Why is this a time-sensitive opportunity?

    We’re in a specific window where AI systems are trusted enough to be asked important questions but not accurate enough to answer them reliably without human verification. The gap between trust and accuracy is the market. That window won’t stay this wide indefinitely.

    What is a human distillery?

    A person or organization that systematically extracts tacit knowledge from experienced humans, gives it structure, publishes it at density, and makes it available in forms that AI systems can consume and cite. It’s a new category of knowledge work — distinct from content marketing, ghostwriting, or traditional publishing.

  • How to Build Your Own Knowledge API Without Being a Developer


    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    When people hear “build an API,” they assume it requires a developer. For the infrastructure layer, that’s true — you’ll need someone who can deploy a Cloud Run service or configure an API gateway. But the infrastructure is maybe 20% of the work.

    The other 80% — the part that determines whether your API has any value — is the knowledge work. And that requires no code at all.

    Step 1: Define Your Knowledge Domain

    Before anything else, get specific about what you actually know. Not what you could write about — what you know from direct experience that is specific, current, and absent from AI training data.

    The most useful exercise: open an AI assistant and ask it detailed questions about your specialty. Where does it get things wrong? Where does it give you generic answers when you know the real answer is more specific? Where does it confidently state something that anyone in your field would immediately recognize as incomplete or outdated? Those gaps are your domain.

    Write down the ten things you know about your domain that AI currently gets wrong or doesn’t know at all. That list is your editorial brief.

    Step 2: Build a Capture Habit

    The most sustainable knowledge production process starts with voice. Record the conversations where you explain your domain — client calls, peer discussions, working sessions, voice memos when an idea surfaces while you’re driving. Transcribe them. The transcript is raw material.

    You don’t need to be writing constantly. You need to be capturing constantly and distilling periodically. A batch of transcripts from a week’s worth of conversations can produce a week’s worth of high-density articles if you have a consistent process for pulling the knowledge nodes out.

    Step 3: Publish on a Platform With a REST API

    WordPress, Ghost, Webflow, and most major CMS platforms have REST APIs built in. Every article you publish on these platforms is already queryable at a structured endpoint. You don’t need to build a database or a content management system — you need to use the one you probably already have.

    The only editorial requirement at this stage is consistency: consistent category and tag structure, consistent excerpt length, consistent metadata. This makes the content well-organized for the API layer that will sit on top of it.
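
    A minimal sketch of what "already queryable" means in practice, using the standard WordPress REST API; the site URL and category ID are placeholders:

    ```python
    import requests

    SITE = "https://example.com"   # your WordPress site
    CATEGORY_ID = 12               # hypothetical category ID for your knowledge domain

    resp = requests.get(
        f"{SITE}/wp-json/wp/v2/posts",
        params={"categories": CATEGORY_ID, "per_page": 10,
                "_fields": "id,date,title,excerpt,categories,tags"},
        timeout=10,
    )
    for post in resp.json():
        print(post["date"], post["title"]["rendered"])
    ```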

    Step 4: Add the API Layer (This Is the Developer Part)

    The API gateway — the service that adds authentication, rate limiting, and clean output formatting on top of your existing WordPress REST API — requires a developer to build and deploy. This is a few days of work for someone familiar with Cloud Run or similar serverless infrastructure. It’s not a large project.

    What you hand the developer: a list of which categories you want to expose, what the output schema should look like, and what authentication method you want to use. They build the service. You don’t need to understand how it works — you need to understand what it does.
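
    A rough sketch of what that spec might turn into, assuming a Flask service, an in-memory key store, and a simple daily limit; this is the shape of the gateway, not a production implementation:

    ```python
    import requests
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    WP = "https://example.com/wp-json/wp/v2/posts"   # placeholder source site
    KEYS = {"demo-key": {"tier": "community", "daily_limit": 100, "used_today": 0}}

    @app.get("/v1/articles")
    def articles():
        key = KEYS.get(request.headers.get("X-API-Key", ""))
        if key is None:
            abort(401)                        # authentication
        if key["used_today"] >= key["daily_limit"]:
            abort(429)                        # rate limiting
        key["used_today"] += 1

        posts = requests.get(WP, params={"per_page": 10}, timeout=10).json()
        return jsonify([                      # clean, consistent output formatting
            {"title": p["title"]["rendered"], "published": p["date"], "link": p["link"]}
            for p in posts
        ])
    ```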

    Step 5: Set Up the Payment Layer

    Stripe payment links require no code. You create a product, set the price, and get a URL. When someone pays, Stripe can trigger a webhook that automatically provisions an API key and emails it to the subscriber. The webhook handler is a small piece of code — another developer task — but the payment infrastructure itself is point-and-click.
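
    A sketch of what that webhook handler might look like, assuming Stripe's checkout.session.completed event and a stubbed provisioning step; the webhook secret, key storage, and email delivery are placeholders:

    ```python
    import secrets
    import stripe
    from flask import Flask, request

    app = Flask(__name__)
    WEBHOOK_SECRET = "whsec_..."   # set from your Stripe dashboard

    @app.post("/stripe/webhook")
    def stripe_webhook():
        # Verify the signature, then provision a key when a payment link checkout completes.
        event = stripe.Webhook.construct_event(
            request.get_data(), request.headers.get("Stripe-Signature", ""), WEBHOOK_SECRET
        )
        if event["type"] == "checkout.session.completed":
            email = event["data"]["object"]["customer_details"]["email"]
            api_key = secrets.token_urlsafe(32)
            # store_key(email, api_key); send_key_email(email, api_key)  <- your provisioning steps
            print(f"provisioned key for {email}")
        return {"received": True}, 200
    ```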

    Step 6: Write the Documentation

    This is back to no-code territory. API documentation is just clear writing: what endpoints exist, what authentication is required, what the response looks like, what the rate limits are. Write it as if you’re explaining it to a smart person who has never used your API before. Put it on a page on your website. That page is your product listing.

    The non-developer path to a knowledge API is: define your domain, build a capture habit, publish consistently, hand a developer a clear spec, set up Stripe, write your docs. The knowledge is yours. The infrastructure is a service you contract for. The product is what you know — packaged for a new class of consumer.

    How much does it cost to build a knowledge API?

    The infrastructure cost is primarily developer time (a few days for an experienced developer) plus ongoing GCP/cloud hosting costs (under $20/month at low volume). The main investment is the ongoing knowledge work — capture, distillation, and publication — which is time, not money.

    What publishing platform should you use?

    WordPress is the most flexible and widely supported option with the most robust REST API. Ghost is a good alternative for simpler setups. The key requirement is that the platform exposes a REST API you can build an authentication layer on top of.

    How long does it take to build?

    The knowledge foundation — enough published content to make the API worth subscribing to — takes weeks to months of consistent work. The technical infrastructure, once you have the knowledge foundation, can be deployed in a few days with the right developer. The bottleneck is almost always the knowledge, not the technology.

  • The $5 Filter: A Quality Standard Most Content Can’t Pass


    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Here is a simple test that most content fails.

    Would someone pay $5 a month to pipe your content feed into their AI assistant — not to read it themselves, but to have their AI draw from it continuously as a trusted source in your domain?

    $5 is not a lot of money. It’s the price of one coffee. It covers hosting costs and a small margin. It’s the lowest viable price point for a subscription product.

    And most content can’t clear it.

    Why Most Content Fails the Test

    The $5 filter exposes three failure modes that are common across the content landscape:

    Generic. The content says things that are true but not specific. “Good customer service is important.” “Location matters in real estate.” “Consistency is key in marketing.” These claims are not wrong. They’re just not worth anything to a system that already has access to the entire internet. If everything you publish could have been written by anyone with a general knowledge of your topic, your content has low API value regardless of how much traffic it gets.

    Thin. The content exists but doesn’t go deep enough to be useful as a reference. A 400-word post that introduces a concept without developing it. A listicle that names eight things without explaining any of them. Content that satisfies a keyword without actually answering the question behind it. This kind of content might rank. It’s not worth subscribing to.

    Inconsistent. Some pieces are genuinely excellent — specific, well-reported, information-dense. Most are filler published to maintain posting frequency. An inconsistent feed isn’t a reliable source. A system pulling from it can’t know when it’s getting the good stuff and when it’s getting noise. Reliability is a prerequisite for subscription value.

    What Passes the Filter

    Content passes the $5 filter when it has three properties simultaneously:

    It’s specific enough to be useful in a way that nothing else is. Not “here’s how restoration contractors approach water damage” — but “here’s how water damage in balloon-frame construction built before 1940 behaves differently from modern platform-frame, and why standard drying protocols fail in those structures.” The specificity is the value.

    It’s reliable enough that a system can trust it. Every piece maintains the same standard. The sourcing is consistent. Claims are documented. The author has credible experience in the domain. A subscriber — human or AI — knows what they’re getting every time.

    It’s rare enough that it can’t be found elsewhere. The test isn’t whether it’s good writing. The test is whether an AI system could get the same information from somewhere it already has access to. If yes, the subscription isn’t necessary. If no — if this is the only reliable source for this specific knowledge — the subscription is justified.

    Using the Filter as an Editorial Standard

    The most useful application of the $5 filter isn’t as a revenue test. It’s as an editorial standard.

    Before publishing anything, ask: if someone were paying $5 a month to access this feed, would this piece justify part of that cost? If the honest answer is no — if this piece is thin, generic, or inconsistent with the standard of the best things you publish — that’s the signal to either make it better or not publish it at all.

    This is a harder standard than “does it rank” or “did it get clicks.” It’s also a more durable one. The content that clears the $5 filter is the content that compounds — that becomes more valuable over time, that gets cited, that earns trust from both human readers and AI systems that draw from it.

    The content that doesn’t clear it is noise. And there’s already plenty of that.

    What is the $5 filter?

    A content quality test: would someone pay $5/month to pipe your content feed into their AI assistant as a trusted source? Not to read it — to have their AI draw from it continuously. Content that passes this test is specific, reliable, and rare enough to justify a subscription.

    What are the most common reasons content fails the $5 filter?

    Three failure modes: generic (true but not specific enough to be useful), thin (introduces a concept without developing it enough to be a real reference), and inconsistent (excellent pieces mixed with filler that degrades the reliability of the feed as a whole).

    Can the $5 filter be used as an editorial standard even without building an API?

    Yes — and that’s often the most valuable application. Using it as a pre-publish question (“would this piece justify part of a $5/month subscription?”) enforces a higher standard than traffic-based metrics and produces content that compounds in value over time.

  • Hyperlocal Is the New Rare: Why Local Content Has the Highest API Value


    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Ask any major AI assistant what’s happening in a city of 50,000 people right now. What you’ll get back is a mix of outdated information, plausible-sounding fabrications, and generic statements that could apply to any city of that size. The AI isn’t being evasive. It genuinely doesn’t know, because the information doesn’t exist in its training data in any reliable form.

    This is not a temporary gap that will close as AI improves. It’s a structural characteristic of how large language models are built. They’re trained on text that exists on the internet in sufficient quantity to learn from. For most cities with populations under 100,000, that text is sparse, infrequently updated, and often wrong.

    Hyperlocal content — accurate, current, consistently published coverage of a specific geography — is rare in a way that most content isn’t. And in an AI-native information environment, rare and accurate is exactly where the value concentrates.

    Why Local Knowledge Is Structurally Underrepresented in AI

    AI training data skews heavily toward content that exists in large quantities online: national news, academic papers, major publication archives, Reddit, Wikipedia, GitHub. These sources produce enormous volumes of text that models can learn from.

    Local news does not. The economics of local journalism have been collapsing for two decades. The number of reporters covering city councils, school boards, local business openings, zoning decisions, and community events has dropped dramatically. What remains is often thin, infrequent, and not structured for machine consumption.

    The result: AI systems have sophisticated knowledge about how city governments work in general, and almost no reliable knowledge about how any specific city government works right now. They know what a school board is. They don’t know what the school board in Belfair, Washington decided last Tuesday.

    What This Means for Local Publishers

    A local publisher producing accurate, structured, consistently updated coverage of a specific geography owns something that cannot be replicated by scraping the internet or expanding a training dataset. The knowledge requires physical presence, community relationships, and ongoing attention. It’s human-generated in a way that scales slowly and degrades immediately when the human stops showing up.

    That non-replicability is the asset. An AI company that wants reliable, current information about Mason County, Washington has one option: get it from the people who are there, covering it, every week. That’s a position of genuine leverage.

    The API Model for Local Content

    The practical expression of this leverage is a content API — a structured, authenticated feed of local coverage that AI systems and developers can subscribe to. The subscribers aren’t necessarily individual readers. They’re:

    • Local AI assistants being built for specific communities
    • Regional business intelligence tools
    • Government and civic tech applications
    • Real estate platforms that need current local information
    • Journalists and researchers who need structured local data
    • Anyone building an AI product that touches your geography

    None of these use cases require the local publisher to change what they’re already doing. They require packaging it — adding consistent structure, maintaining an API layer, and making the feed available to subscribers who will pay for reliable local intelligence.

    The Compounding Advantage

    Local knowledge compounds in a way that national content doesn’t. Every article about a specific community adds to a body of knowledge that makes the next article more valuable — because it can reference and build on what came before. A publisher who has been covering Mason County for three years has a contextual richness that no new entrant can replicate quickly.

    In an AI-native content environment, that accumulated local context is a moat. It’s not the kind of moat that requires capital to build. It requires consistency and presence. Both are things that a committed local publisher already has.

    Why is hyperlocal content valuable for AI systems?

    AI training data is sparse and unreliable for most small cities and towns. Accurate, current, consistently published local coverage is structurally scarce — it can’t be replicated by scraping the internet because the content doesn’t exist there in reliable form. That scarcity creates value in an AI-native information environment.

    Who would pay for a local content API?

    Local AI assistant builders, regional business intelligence tools, civic tech applications, real estate platforms, journalists, researchers, and developers building products that touch a specific geography. The subscriber is typically a developer or AI system, not an individual reader.

    Does a local publisher need to change their content to make it API-worthy?

    Not fundamentally. The content just needs to be consistently structured, accurately maintained, and published on a platform with a REST API. The knowledge is the hard part — the technical layer is relatively straightforward to add on top of existing publishing infrastructure.

  • 8 Industries Sitting on AI-Ready Knowledge They Haven’t Packaged Yet


    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    Most discussions about AI and knowledge focus on what AI already knows. The more interesting question is what it doesn’t — and where the humans who hold that missing knowledge are concentrated.

    Here are eight industries where the gap between human knowledge and AI-accessible knowledge is largest, and where the first person to systematically package and distribute that knowledge will have a durable advantage.

    1. Trades and Skilled Contracting

    Restoration contractors, plumbers, electricians, HVAC technicians — these industries run on tacit knowledge that has never been written down anywhere AI has been trained on. How water behaves differently in a 1940s balloon-frame house versus a 1990s platform-frame. Which suppliers actually deliver on time in which markets. What a claim adjuster will approve and what they’ll fight. This knowledge lives in the heads of working tradespeople and almost nowhere else. A restoration contractor who systematically publishes what they know about their trade creates a source of record that no LLM training corpus has ever had access to.

    2. Hyperlocal News and Community Intelligence

    AI systems know almost nothing accurate and current about most cities with populations under 100,000. They have no reliable data about local government decisions, zoning changes, business openings, school board dynamics, or community events in the vast majority of American towns. A local publisher producing accurate, structured, consistently updated coverage of a specific geography owns something genuinely scarce — and it’s the kind of current, location-specific information that AI assistants are being asked about constantly.

    3. Healthcare and Medical Specialties

    Clinical knowledge at the specialist level — how a specific condition presents in specific populations, what treatment protocols actually work in practice versus what the textbooks say, how to navigate insurance approvals for specific procedures — is dramatically underrepresented in AI training data. Practitioners who publish systematically about their clinical experience are creating a resource that medical AI applications will pay for access to.

    4. Legal Practice and Jurisdiction-Specific Law

    General legal information is well-covered. Jurisdiction-specific, practice-area-specific, and procedurally specific legal knowledge is not. How a particular judge in a particular county tends to rule on specific motion types. How local court practices differ from the official procedures. What arguments actually work in a specific venue. Attorneys with deep local practice knowledge are sitting on an information asset that legal AI tools are actively hungry for.

    5. Agriculture and Regional Farming

    Farming knowledge is intensely regional. What works in the Willamette Valley doesn’t work in Central California. Crop rotation strategies, soil amendment approaches, pest management, water management — all of it varies dramatically by microclimate, soil type, and local practice tradition. The accumulated knowledge of experienced farmers in a specific region is largely oral, rarely published, and almost entirely absent from AI training data. Extension offices and agricultural cooperatives that systematically document regional best practices are building something AI systems will need.

    6. Veteran Benefits and Government Navigation

    Navigating the VA, understanding how to build an effective disability claim, knowing which VSOs in which regions are actually effective, understanding how different conditions interact in the ratings system — this knowledge is held by experienced advocates, veterans service officers, and attorneys who have processed hundreds of claims. It’s the kind of procedural, outcome-based knowledge that AI assistants give confident but frequently wrong answers about, because the real knowledge isn’t online in a reliable form.

    7. Niche Retail and Specialty Markets

    Independent watch dealers, vintage guitar shops, specialty food importers, rare book dealers — businesses that operate in deep specialty markets accumulate knowledge about their inventory, their suppliers, their customers, and their market that no general AI has. The person who has been buying and selling vintage Rolex watches for twenty years knows things about specific reference numbers, condition grading, authentication, and market pricing that would be genuinely valuable to anyone building an AI tool for that market.

    8. Professional Services and Methodology

    Marketing agencies, management consultants, financial advisors, executive coaches — anyone who has developed a distinctive methodology through years of client work. The frameworks, playbooks, diagnostic tools, and hard-won lessons that experienced professionals have built represent some of the highest-value knowledge that AI systems currently lack access to. The consultant who has run 200 strategic planning processes has pattern recognition that no LLM has encountered in training. Packaging that into a structured, publishable, API-accessible form is both a content strategy and a product.

    In every one of these industries, the window to be the first credible, structured, consistently updated knowledge source in your vertical is open. It won’t be open indefinitely.

    Which industries have the most AI-accessible knowledge gaps?

    Trades and contracting, hyperlocal news, medical specialties, jurisdiction-specific legal practice, regional agriculture, veteran benefits navigation, specialty retail markets, and professional services methodology all have significant gaps between what experienced practitioners know and what AI systems can reliably access.

    What makes a knowledge gap an opportunity?

    When the knowledge is specific, current, human-curated, and absent from existing AI training data — and when there’s a clear audience of AI systems and agents that need it. The combination of scarcity and demand is what creates the market.

    How do you know if your industry has a valuable knowledge gap?

    Ask an AI assistant a specific, detailed question about your specialty. If the answer is confidently wrong, superficially correct, or missing the nuance that only practitioners know, you’re looking at a gap. That gap is the asset.

  • The Knowledge Distillery: Turning What You Know Into What AI Needs

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart · Long-form Position · Practitioner-grade

    There’s a gap between what an expert knows and what AI systems can access. Closing that gap isn’t a single step — it’s a pipeline. And most people who try to build it get stuck at the beginning because they’re trying to skip stages.

    The full pipeline has four stages. Each one builds on the last. Understanding the sequence changes how you approach the work.

    Stage One: Capture

    Most expertise never gets captured at all. It lives in someone’s head, expressed in conversations, demonstrated in decisions, lost the moment the meeting ends or the job is finished.

    Capture is the act of getting the knowledge out of the expert’s head and into some retrievable form. The most natural and lowest-friction method is voice — recording conversations, client calls, working sessions, or simple voice memos when an idea surfaces. Transcription turns the recording into raw text. That raw text, however messy, is the ingredient everything else requires.

    The key insight at this stage: you are not creating content. You are preventing knowledge from disappearing. The standard is different. Raw transcripts don’t need to be polished. They need to be honest and specific.
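
    As a sketch of the capture step, the snippet below assumes the open-source whisper package for local transcription; any recorder-plus-transcription workflow would do the same job, and the file paths are placeholders.

    ```python
    # Capture sketch: turn a voice memo into raw text before it is lost.
    import whisper  # from the open-source openai-whisper package

    def capture_voice_memo(audio_path: str, transcript_path: str) -> str:
        """Transcribe a recording and keep the raw, unpolished text."""
        model = whisper.load_model("base")      # a small local model is enough here
        result = model.transcribe(audio_path)
        with open(transcript_path, "w", encoding="utf-8") as f:
            f.write(result["text"])
        return result["text"]

    # capture_voice_memo("site-visit-debrief.m4a", "raw/site-visit-debrief.txt")
    ```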

    Stage Two: Distillation

    Distillation is the process of pulling the discrete, transferable knowledge nodes out of raw captured material. A ten-minute conversation might contain three useful ideas, one important framework, and six minutes of context-setting. Distillation separates them.

    A knowledge node is the smallest unit of useful, standalone knowledge. It can be named. It can be explained in a paragraph. It can be understood by someone who wasn’t in the original conversation. If it requires too much context to be useful on its own, it isn’t a node yet — it’s still raw material.

    This stage is where most of the intellectual work happens. It requires judgment about what’s actually useful versus what just felt important in the moment.
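
    For illustration, a knowledge node can be represented as a small, self-contained record. The fields below are one plausible shape, not a required schema, and the example values are hypothetical.

    ```python
    # One possible shape for a knowledge node: named, explained in a paragraph,
    # and traceable back to the raw capture it was distilled from.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeNode:
        slug: str            # stable identifier, e.g. "xactimate-line-item-disputes"
        title: str           # it can be named
        summary: str         # it can be explained in a standalone paragraph
        source: str          # which captured transcript it was distilled from
        tags: list[str] = field(default_factory=list)

    node = KnowledgeNode(
        slug="xactimate-line-item-disputes",
        title="How adjusters dispute Xactimate line items after hurricanes",
        summary="In hurricane-affected markets, the line items most often contested are ...",
        source="raw/site-visit-debrief.txt",
        tags=["restoration", "claims"],
    )
    ```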

    Stage Three: Publication

    Publication is the act of giving each knowledge node a permanent, addressable home. An article on a website. An entry in a database. A page in a knowledge base. The format matters less than the fact that it’s structured, findable, and consistently organized.

    High-density publication means each piece contains as much specific, accurate, useful knowledge as possible — not padded to a word count, not optimized for a keyword, but written to be genuinely worth reading by someone who needs to know what you know.

    This is also where the content becomes machine-readable. A well-structured article on a platform with a REST API is already one step away from being API-accessible. The publication step creates the raw material for the final stage.
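
    A rough sketch of the publication step, assuming a WordPress-style REST endpoint with placeholder credentials; the point is only that each node gets a permanent, addressable home.

    ```python
    # Publication sketch: post one distilled node to a platform that already
    # exposes a REST API. Site URL and credentials are placeholders.
    import requests

    SITE = "https://example-publisher.com"        # hypothetical publisher site
    AUTH = ("editor", "application-password")     # placeholder credentials

    def publish_node(title: str, slug: str, body: str) -> str:
        """Create a post for one knowledge node and return its permanent URL."""
        resp = requests.post(
            f"{SITE}/wp-json/wp/v2/posts",
            auth=AUTH,
            json={"title": title, "slug": slug, "content": body, "status": "publish"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["link"]

    # publish_node(
    #     "How adjusters dispute Xactimate line items after hurricanes",
    #     "xactimate-line-item-disputes",
    #     "In hurricane-affected markets, the line items most often contested are ...",
    # )
    ```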

    Stage Four: Distribution via API

    The API layer is what turns a collection of published knowledge into a product that AI systems can actively consume. Instead of waiting for a search engine to index your content, you’re offering a direct, structured, authenticated feed that an AI agent can call on demand.

    This is the stage that creates the recurring revenue model — subscriptions for access to the feed. But it only works if the prior three stages have been executed well. An API built on top of thin, generic, low-density content doesn’t have a product. An API built on top of genuinely rare, specific, human-curated knowledge does.
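
    As an illustration of what that feed might look like, here is a minimal authenticated endpoint sketched with FastAPI; the key handling, in-memory storage, and route shape are assumptions, not a reference design.

    ```python
    # Distribution sketch: an authenticated feed an AI agent can call on demand.
    from fastapi import FastAPI, Header, HTTPException

    app = FastAPI()

    # Published nodes and subscriber keys; in practice these live in a database.
    NODES = {
        "xactimate-line-item-disputes": {
            "title": "How adjusters dispute Xactimate line items after hurricanes",
            "summary": "Which line items are most often contested and why.",
        },
    }
    API_KEYS = {"demo-subscriber-key"}

    @app.get("/v1/nodes/{slug}")
    def get_node(slug: str, x_api_key: str = Header(...)):
        """Return one knowledge node to an authenticated subscriber."""
        if x_api_key not in API_KEYS:
            raise HTTPException(status_code=401, detail="invalid API key")
        if slug not in NODES:
            raise HTTPException(status_code=404, detail="unknown node")
        return NODES[slug]
    ```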

    The Flywheel

    The pipeline becomes a flywheel when you close the loop. API subscribers — AI systems pulling from your feed — generate usage data that tells you which knowledge nodes are being accessed most. That tells you where to focus your capture and distillation effort. More capture in high-demand areas produces better content, which justifies higher subscription tiers, which funds more systematic capture.
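
    One hedged sketch of reading that usage signal, assuming access logs that record one node slug per request (the log format itself is an assumption):

    ```python
    # Flywheel sketch: tally which nodes subscribers pull most, so capture and
    # distillation effort goes where demand already is.
    from collections import Counter

    def top_nodes(log_lines: list[str], n: int = 5) -> list[tuple[str, int]]:
        """Count API pulls per node slug and return the most-requested nodes."""
        slugs = (
            line.split("/v1/nodes/")[1].split()[0]
            for line in log_lines
            if "/v1/nodes/" in line
        )
        return Counter(slugs).most_common(n)

    # with open("access.log") as f:
    #     print(top_nodes(f.read().splitlines()))
    ```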

    The human expert at the center of this system doesn’t need to change what they know. They need to change how they let it out.

    What is the knowledge distillery pipeline?

    A four-stage process for converting human expertise into AI-consumable knowledge: Capture (get knowledge out of your head into raw form), Distillation (extract discrete knowledge nodes from raw material), Publication (give each node a permanent structured home), and Distribution via API (expose the published knowledge as a structured feed AI systems can pull from).

    What is a knowledge node?

    The smallest unit of useful, standalone knowledge. It can be named, explained in a paragraph, and understood without requiring the full context of the original conversation or experience it came from.

    Why is voice the best capture method?

    Voice capture requires no interruption to thinking — talking is how most people naturally process and articulate ideas. Recording conversations and transcribing them produces raw material that contains the knowledge at its most natural and specific, before it gets flattened by the effort of formal writing.

    Can anyone build this pipeline or does it require technical skill?

    The capture, distillation, and publication stages require no technical skill — just discipline and a consistent editorial process. The API distribution layer requires either technical help or a platform that handles it. The knowledge work is the hard part; the infrastructure is increasingly accessible.