Category: Agency Playbook

How we build, scale, and run a digital marketing agency. Behind the scenes, systems, processes.

  • The Cockpit Session Protocol: How to Pre-Stage AI Context for Zero-Warmup Work Sessions

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    Most AI sessions start the same way. The operator opens a conversation and begins re-explaining: what the project is, what happened last session, where things stand, what they’re trying to accomplish today. This re-explanation is invisible overhead. It costs time, it costs context tokens, and it costs the cognitive energy that should go toward actual work.

    The cockpit session pattern eliminates this overhead entirely. The context is pre-staged before the session opens. The operator arrives at a working environment that is already mission-ready — client brief loaded, task queue clear, relevant history surfaced, tools oriented to the problem at hand. The warm-up is done before the session starts.

    The name comes from aviation logic. A pilot doesn’t climb into the cockpit and begin configuring instruments. The pre-flight checklist runs before the seat is taken. By the time the pilot is in position, the environment is ready for work — not for setup. The cockpit session applies the same principle to knowledge work.


    Why This Matters More Than It Looks

    The cost of a cold session start isn’t just the five minutes of re-explanation. It’s the quality degradation that runs through the entire session while the AI is still assembling the picture. Early in a cold session, you’re managing the AI — filling gaps, correcting assumptions, orienting the system. Mid-session, you’re working with the AI. The cockpit pattern collapses that warm-up phase so the session starts at mid-session quality from the first message.

    For a solo operator running multiple business lines, this compounds. If every client session starts cold, every session pays the loading cost. If four clients each require ten minutes of context reconstruction per session, that’s 40 minutes per week of re-explanation (at one session per client) before any work begins — and the work done during re-explanation is lower quality than the work done after context is established.

    There’s a second problem beyond time: decision drift. When every session reconstructs context from what you happen to mention that day, the AI’s understanding of your situation shifts based on what you emphasize. A context that was staged deliberately — including the things you’d otherwise forget to mention — produces more consistent output than a context assembled ad hoc from whatever is top of mind.


    What a Cockpit Session Actually Contains

    A properly staged cockpit has five components. The specifics vary by context — a client site session looks different from a content strategy session looks different from an infrastructure session — but the structure is consistent.

    1. The active brief. What are we working on in this session specifically? Not a general description of the project — the specific problem or output for today. “Publish 12 articles to Partners Restoration and optimize for the custom home builder cluster” is a brief. “Work on Partners Restoration content” is not.

    2. Current state. Where does the project stand right now? What was done in the last session? What is pending? This is the context that prevents re-work and prevents missing dependencies. In the Second Brain, this lives in the client’s Notion page — status fields, last session notes, pending task flags.

    3. Hard constraints. What can’t we do, break, or change in this session? For WordPress work: the page guard rule, which sites use which connection methods, what was explicitly decided in prior sessions that shouldn’t be re-litigated. For content work: which keywords are already covered, which clusters are complete, what the taxonomy looks like. Constraints are the most expensive thing to discover mid-session, so they go in the cockpit.

    4. Priority signal. If this session produces one thing of value, what is it? The single most important output. This prevents sessions that produce ten mediocre things instead of one excellent thing, which is the default failure mode of open-ended AI sessions.

    5. Known failure modes. What has gone wrong in similar sessions before? The GCP/Vertex AI content rule — never write model specifications without live verification — is a known failure mode that belongs in every cockpit where GCP content might be produced. The page guard rule belongs in every WordPress session. Known failure modes in the cockpit prevent known failures in the session.
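
    To make the five components concrete, here is a minimal sketch of a cockpit as a data structure. The field names and the render format are illustrative assumptions, not a prescribed schema; the same structure could just as easily live in a Notion template or a plain text file.

        from dataclasses import dataclass, field

        @dataclass
        class Cockpit:
            """One pre-staged session context. Field names are illustrative."""
            active_brief: str    # the specific output for today, not the project
            current_state: str   # where things stand after the last session
            priority_signal: str # the one thing of value this session must produce
            hard_constraints: list[str] = field(default_factory=list)
            known_failure_modes: list[str] = field(default_factory=list)

            def render(self) -> str:
                """Render the cockpit as a session preamble, pasted as message one."""
                lines = [
                    f"BRIEF: {self.active_brief}",
                    f"STATE: {self.current_state}",
                    "CONSTRAINTS:",
                    *[f"  - {c}" for c in self.hard_constraints],
                    f"PRIORITY: {self.priority_signal}",
                    "KNOWN FAILURE MODES:",
                    *[f"  - {m}" for m in self.known_failure_modes],
                ]
                return "\n".join(lines)

    Pasting the rendered block as the first message of the session is the whole protocol: the session opens with all five components already in context.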


    How the Cockpit Reduces Minimum Viable Executive Function

    This is the piece that connects the cockpit session to the neurodiversity design framework it comes from. Executive function in ADHD is variable, not uniformly low. On a high-executive-function day, a complex multi-step session runs cleanly. On a low-executive-function day, the same session can feel impossible — not because the capability is absent, but because the activation energy required to start is higher than what’s available.

    A cold session has high activation energy. You have to figure out where things stand, decide what to work on, load the relevant context into working memory, orient the AI to the problem, and then begin work. For a low-executive-function day, that sequence can be the entire obstacle.

    A pre-staged cockpit has low activation energy. The state is already loaded. The priority is already identified. The constraints are already in the context. The question isn’t “where do I start” — it’s “do I proceed.” That’s a dramatically smaller decision to make, and it means that low-executive-function days can still be productive days rather than lost ones.

    The infrastructure carries the initiation overhead so the operator’s variable executive function goes further. This is why the cockpit pattern is the single highest-leverage habit in an AI-native operation — not because it saves time, though it does, but because it extends the range of days when useful work can happen at all.


    The Cockpit as Transferable Protocol

    One of the underappreciated properties of the cockpit pattern is that it’s packageable. A cockpit that Will stages for himself runs at Will’s speed because Will knows what to put in it. A cockpit that’s been designed as a repeatable protocol — with a specific template, specific data pulls from the Second Brain, specific constraint checks — can be staged by anyone with access to the system.

    This is the multi-operator scaling moment: when a second person (a developer, a contractor, a hired editor) needs to run a session that produces Will-level output, the cockpit protocol is the bridge. The institutional knowledge that makes Will’s sessions productive is encoded in the cockpit template. The new operator follows the protocol. The session starts at the same quality level.

    Most operations don’t have this. The experienced operator’s sessions are good because of knowledge that lives in their head, not in the system. When they’re unavailable, session quality drops. The cockpit pattern makes session quality a property of the system, not a property of the individual — which is the design goal for any operation that needs to scale beyond one person.


    Frequently Asked Questions

    How long does it take to stage a cockpit?

    For a session type you’ve run before: three to five minutes once the Notion pages and context sources are organized. For a new session type: fifteen to twenty minutes to design the template, then three to five minutes to run it going forward. The upfront design cost is paid once; the recurring benefit is captured every subsequent session.

    What if the pre-staged context is wrong or outdated?

    Correct it at the start of the session and update the source. The cockpit is the starting point, not the oracle. If the Notion page shows stale status, update the status before proceeding. The correction takes thirty seconds and improves the cockpit for next time. Wrong context in the cockpit is a data quality problem — fix it at the source rather than working around it each session.

    Does this work without a Second Brain or Notion?

    A simpler version works anywhere you can store context. A Google Doc with current project state, a notes file with known constraints, a short text file with today’s priority — these produce meaningful improvement over cold sessions even without a full Second Brain architecture. The full version with Notion, claude_delta metadata, and automated context pulls is more powerful, but the core behavior (pre-stage before you start) produces value immediately with whatever you have.


  • Network-Led Sales vs. Cold Outreach: The Structural Difference That Makes the Math Incomparable

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    Cold outreach is a tractable problem. You can model it, optimize it, and predict results within a reasonable range. Contact enough people with a good message, a percentage respond, a percentage of those convert, and your cost per acquisition falls out of the math between those numbers. Scale it up, the math holds. The model is reliable and the ceiling is low.

    Network-led sales is harder to model and harder to build. It requires investment that precedes pipeline by months or years. It requires genuine participation in something for its own sake, not instrumentally. It requires patience that quarterly metrics don’t reward. And when it works, the results are not comparable to cold outreach — not just better, structurally different.

    The Structural Difference

    In cold outreach, every prospect starts at zero. They don’t know you. Your credibility is what you can establish in the first message and the first conversation. The objection at the top of the funnel is “who are you and why should I trust you” — a hard objection to overcome without time and proof.

    In network-led sales, the prospect has context before the conversation starts. They’ve seen your name in the organization they trust. They’ve heard from peers that you’re credible. They may have had a brief interaction at an event that established you as a real person rather than a pitch. The objection at the top of the funnel shifts from “why should I trust you” to “is this the right time” — a fundamentally different and more solvable problem.

    The PE firm trying to conduct industry research by hiring interviewers and making cold calls to restoration contractors gets data quality consistent with cold outreach: filtered, optimistic, what people are comfortable telling a stranger. The person who has been inside the industry’s trust network for three years, who is known to the people they’re talking to as a peer and a contributor, gets data quality consistent with what people tell someone they trust: unfiltered, real, the actual benchmarks and the actual failure modes.

    The same dynamic applies to sales. The pitch that comes cold from an unknown agency gets evaluated on its stated merits alone. The introduction that comes through a trusted peer, in a context the prospect already values, gets evaluated in a frame that assumes credibility. The starting conditions are not comparable.

    The Timeline Problem

    Network-led pipeline is not a Q1 strategy. The relationship that converts to a client in month 18 started at an event in month 3. The contractor who became a client after showing up at six events and having a real conversation at the seventh doesn’t fit in a quarterly pipeline report. They represent the compounding return on a three-year investment in showing up.

    This is why most agencies don’t do it. The payoff horizon is incompatible with quarterly accountability. For a solo operator with a long time horizon and an existing book of business that covers operations, the calculus is different. The network investment builds the distribution that makes the business defensible in year five, not the revenue that justifies the budget in Q3.

    Cold outreach fills the pipeline this quarter. Network-led growth fills it for years without the marginal cost of each new conversation starting at zero. The choice between them is a choice about time horizon, not about which produces better results — over a sufficient time horizon, network-led growth wins on every metric except speed of initial results.


  • Using Network Chapters as Distribution Nodes: The Math Behind Sponsored Network Pipeline

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    A chapter is a room. The room contains people who do business with each other in a specific geography. The room meets regularly, in an environment that builds genuine relationships. The room trusts the organization that convened it.

    From a distribution standpoint, that’s almost an unfair asset.

    Cold outreach to restoration contractors in Phoenix produces results consistent with cold outreach to anyone: under 5% response rate on a good day, conversion rates measured in single digits. An introduction at an RGL Phoenix event — made by a chapter ambassador who the contractor already trusts — produces results consistent with a warm referral from a peer. Same product. Same price. Different relationship context. Dramatically different conversion.

    The Chapter Multiplication Effect

    Seventeen chapters means seventeen geography-specific trust networks, each with its own membership of contractors, adjusters, agents, vendors, and property managers. Each chapter runs multiple events per year. Each event is an opportunity to be introduced, in context, to people who already know the organization that vouched for you.

    The cost of accessing those introductions through traditional sales channels — hiring sales reps, running targeted ads, attending trade shows, building local SEO in seventeen markets — is not comparable. The network does the geographic distribution. The sponsorship buys access to the network’s trust infrastructure at a fraction of the cost of building it independently.

    The Vendor Cascade

    Each restoration company is a node with a vendor ecosystem behind it. The plumber they call for every water damage job. The roofer they sub after fire losses. The HVAC contractor they recommend when the remediation is done. The general contractor they partner with on large rebuilds.

    Every one of those vendors needs what a restoration-focused digital agency provides. And the introduction that produces a new vendor client doesn’t come from cold outreach — it comes from the restoration contractor who says “this is my SEO guy, he understands our industry, you should talk to him.” That introduction is warm by definition. The vendor already trusts the person making it.

    The chapter model turns one restoration client into three to five adjacent opportunities. Seventeen chapters with one to two restoration clients each produce a referral network that compounds. The math isn’t complicated. The patience to let it develop is the hard part.

    Presence Without Travel

    The secondary distribution effect is content. Articles, frameworks, and resources published with RGL positioning reach chapter memberships across all seventeen markets without requiring physical presence in any of them. A post that serves restoration professionals in Phoenix also serves them in Houston, Denver, Charlotte, and Southern California.

    The chapter events create the trust layer. The content maintains presence between events. Combined, the sponsorship produces a distribution footprint that would cost significantly more to replicate through advertising or direct outreach — and produces a qualitatively different kind of visibility, because it’s embedded in a community rather than broadcast at one.


  • Golf as B2B Trust Infrastructure: Why Four Hours on a Course Builds What Meetings Can’t

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    Most B2B networking formats have a fundamental problem: everyone in the room knows they’re there to network. That awareness changes behavior. The pitch antenna goes up. The business card comes out. The conversation is conducted with at least one eye on whether this person is a useful contact.

    Golf solves this problem structurally. The stated purpose of being on a golf course is golf. The conversation that happens alongside it is incidental — which is exactly what makes it not incidental at all.

    What Four Hours Does That Other Formats Can’t

    A trade show interaction is five minutes if it goes well. A coffee meeting is forty-five. A lunch is ninety. A round of golf is four hours, in a setting with no phones, no presentations, no agenda, and a shared activity that provides natural conversation scaffolding without requiring anyone to perform networking.

    The time matters because trust is built through accumulation of low-stakes interactions, not through single high-stakes ones. Four hours of casual, peer-level conversation between a restoration contractor and a property manager produces a different kind of relationship than four forty-five-minute coffee meetings over a year — even though the total time is similar. The continuity, the physical proximity, the shared experience of a bad hole or a good shot, the moment when someone’s guard comes down because they’re focused on a putt — these accumulate into something that scheduled meetings can’t replicate.

    Why It Works Especially Well in the Trades

    In industries where trust determines who gets the call, the quality of the relationship is the product. A property manager with a water loss at 2am is not running a procurement process. They’re calling the person they trust most to handle it correctly. Golf builds the trust layer that makes you that person.

    The restoration industry specifically runs on referral relationships — adjuster to contractor, property manager to contractor, contractor to specialty subcontractor. Every link in that chain is a trust relationship that preceded a business transaction. The contractors who consistently get the best work are not the ones with the best website or the highest review count. They’re the ones whose names come to mind first when someone needs to make a recommendation.

    Golf is the environment where those names get lodged. Not through a pitch — through four hours of being a person someone enjoyed spending time with.

    The Peer-Level Dynamic

    Golf enforces equality in a way that most business environments don’t. On the course, everyone is equally subject to the conditions. The senior adjuster and the junior contractor are having the same experience — same wind, same rough, same pressure on the 18th. This equality of condition produces peer-level conversation that rarely happens in settings where professional hierarchy is visible.

    Peer-level conversation is where trust forms. When someone shares a genuine opinion about a difficult claim, a frustrating TPA policy, or a subcontractor who keeps letting them down — information they’d never share in a formal meeting — the relationship has moved to a level that formal networking cannot produce. That’s the golf infrastructure working.


  • The Sponsor Advantage: How to Build Regional B2B Pipeline Through a Network You Don’t Own

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    I sponsor a golf league.

    Not a tour. Not a country club event. A B2B networking league built around the property damage restoration industry — contractors, adjusters, vendors, consultants, equipment suppliers, TPAs. Seventeen chapters across the country, each running events in their local market, each building the same thing: a room full of people who do business together, on a golf course, without their phones in their hands for four hours.

    I didn’t build it. I didn’t found it. I didn’t hire the chapter ambassadors or negotiate the venues or design the scoring format. Those people did the work of building the organization. What I did was recognize what I was looking at and invest accordingly.

    That distinction — sponsor versus owner — is the entire strategic point. And it’s almost never discussed in the literature about B2B networking, which tends to assume that to benefit from a network you need to run it.

    You don’t. In some situations, you get more from being the most committed non-founder in the room than you would from being the founder. This is one of those situations, and understanding why requires understanding what a sponsored network actually provides versus what organizational ownership provides.


    What the Owner Has That the Sponsor Doesn’t

    The organization’s founder has control. They set the membership criteria, the chapter structure, the event format, the brand standards. They make the decisions about which markets to enter, which sponsors to accept, which directions to grow. They bear the operational overhead — the logistics, the coordination, the member management, the chapters that underperform and need attention.

    Control is valuable. Operational overhead is expensive. For a solo operator running an AI-native content agency, the overhead of running a 17-chapter national networking organization is not compatible with the overhead of running 27 client WordPress sites, building content infrastructure, managing a GCP stack, and doing the writing. The person who built RGL made it their primary vehicle. I couldn’t make it mine without sacrificing what I’ve built elsewhere.

    So I don’t have control. What do I have instead?


    What the Committed Sponsor Has That the Owner Doesn’t

    Credibility without burden. Trust without administration. Presence in every chapter market without the cost of maintaining a presence in every chapter market.

    When a restoration contractor in Phoenix meets me at an RGL event, the context of that meeting is: I’m the person who invested in this thing they’re already part of, in their market, because I believe in what it’s doing. That’s a fundamentally different first impression than cold outreach. It’s even different from a vendor booth at a trade show, where the context is: I paid to have access to this audience.

    Sponsorship inside a trust network signals alignment, not just interest. The people in the room are already there because they chose to participate in something that requires showing up — physically, repeatedly, over time. A sponsor who shares that belief system is perceived as one of them, not as someone who bought access to them.

    The second thing the committed sponsor has: distributed presence. Seventeen chapters run events throughout the year in seventeen markets. Every event is an opportunity for Tygart Media to be in the room — not because I’m traveling to seventeen markets, but because the sponsorship means my name and my work are part of the organization’s identity in each of them. The chapter ambassador in Charlotte is introducing me as a sponsor before I’ve ever been to Charlotte. That’s distribution I couldn’t buy with advertising and couldn’t build with cold outreach.


    The Trust Infrastructure That Golf Specifically Builds

    The vehicle matters. RGL is a golf league, not a trade association or a conference or a LinkedIn group, and the choice of golf is not arbitrary. Golf creates something that almost no other B2B networking format creates: four uninterrupted hours of low-stakes, relationship-building conversation between people who are ostensibly there for something other than business.

    The property manager and the restoration contractor are walking the same fairway, waiting for the same slow group ahead, talking about whatever comes up. The insurance adjuster and the equipment rep are sharing a cart for two hours. None of this is structured. None of it is a pitch. The relationship that forms is peer-level because golf is a peer-level environment — everyone is equally subject to the wind, the rough, and the occasional shank.

    Compare this to the environments where most B2B relationships in the restoration industry form: trade show floors (loud, transactional, everyone scanning badges), vendor lunch programs (one party is clearly the host with an agenda), referral calls (cold or at best lukewarm, purpose-driven from the first sentence), and job sites (one party has positional authority over the other). None of these formats produce the kind of trust that golf produces, because none of them have four hours and no agenda.

    The pattern is consistent: golf relationships convert to business relationships at higher rates than almost any other networking format, particularly in industries where trust determines who gets the call — construction, financial services, professional services, and the trades broadly. In restoration specifically, where a property manager is handing over a damaged building to someone they need to trust not to make it worse, the relationship quality matters enormously. A contractor the PM has played golf with three times is not the same as a contractor who submitted the lowest bid on a cold RFP.


    Chapters as Distribution Nodes

    Here is the math that the Second Brain has been working on since I started taking the RGL sponsorship seriously.

    Each chapter is a node in a trust network that contains: restoration contractors, insurance adjusters, insurance agents, public adjusters, equipment suppliers, specialty subcontractors, TPAs, and property managers. These are exactly the people who need what Tygart Media builds — SEO-optimized WordPress infrastructure, AI-native content pipelines, local search visibility.

    A cold outreach to a restoration contractor in Phoenix gets a response rate consistent with cold outreach to anyone: under 5% on a good day, often much less. An introduction at an RGL Phoenix event — “this is Will, he’s the guy who sponsors the league, he runs digital for restoration companies” — gets a response rate consistent with a warm referral from a trusted peer. The same information, the same product, the same price, presented in two different relationship contexts, produces dramatically different conversion.

    The compounding effect: each contractor client who comes through an RGL chapter introduction has a vendor ecosystem behind them. The plumber they call for every water damage job. The roofer they sub to after fire losses. The HVAC contractor they recommend when the remediation is done. Every one of those vendors needs the same thing — local SEO, a website that works, someone who understands their industry because they’re already inside it. The restoration company owner introduces you because you’re their person. You’re not pitching a cold vendor. You’re getting handed the relationship.

    Seventeen chapters, running multiple events per year each. The math isn’t complicated. The question is whether the distribution infrastructure is being used strategically or just passively.


    Network-Led Sales vs. Cold Outreach: The Structural Difference

    Cold outreach is a numbers game. You contact enough people, a percentage respond, a percentage of those convert. The ratio is predictable and it’s low. The cost per acquisition is high because the conversion rate at the top of the funnel is low. This is the model most agencies run on because it’s scalable and doesn’t require the patience or investment that network-led growth requires.

    Network-led sales is an entirely different model. The funnel starts not at outreach but at relationship. The relationship precedes the sales conversation. When the sales conversation happens — if it needs to happen at all — the context is already favorable. The prospect already knows who you are and why you’re credible. The objection is not “I don’t know you” but “is this the right time” — a much more solvable problem.

    The tradeoff is time and investment. Network-led growth requires consistent presence over time, investment in the network’s success (not just personal extraction from it), and patience for the trust to compound before the pipeline materializes. For someone who wants clients this quarter, it’s too slow. For someone building a durable operation over years, it’s the only model that actually compounds.

    The RGL sponsorship is a three-year investment that is still in early returns. The relationships built in year one convert in year two or three. The contractor who saw my name at six events and then had a conversation over drinks at the seventh is not comparing me to a cold outreach from a competitor — I’m already the default. The comparison set is empty.


    What the Sponsorship Requires to Work

    Passive sponsorship — writing a check and putting your logo on the website — produces brand awareness among people who are passively aware of the organization. That has some value, but not much.

    Active sponsorship — showing up, contributing, becoming genuinely part of the community — produces something different. The sponsorship that builds real pipeline requires the same thing the best sales relationships have always required: genuine investment in the other party’s success before asking for anything.

    For RGL, that means showing up at chapter events when possible. Contributing content that serves the membership — articles, resources, frameworks that help restoration companies build better operations — not content that promotes your services. Introducing members to each other when you see an opportunity. Being the person in the network who gives more than they take, for long enough that the network comes to see you that way.

    This is not a counterintuitive strategy. It’s the oldest sales strategy there is. What makes it work in a sponsored network specifically is that the organization does the community-building work for you. You don’t have to gather the room — the league gathers the room. You show up in the room that already exists and you add value. The infrastructure belongs to someone else. The trust you build inside it belongs to you.


    Frequently Asked Questions

    How do you measure ROI on a sponsorship like this?

    The direct measure is client relationships that originated through RGL introductions. The indirect measure is harder but more important: the inbound reputation that makes cold outreach unnecessary for a growing percentage of new business. Sponsorship ROI is measured in years, not quarters. The mistake is applying quarterly conversion metrics to a relationship investment that operates on a different timeline.

    What’s the difference between sponsoring a network and advertising to it?

    Advertising is transactional — you pay for access to an audience and they see your message with the full awareness that you paid for the access. Sponsorship of a trust network is relational — you invest in the community’s infrastructure and are perceived as a member of it, not a vendor pitching at it. The same people receive both messages differently. The conversion dynamic is not comparable.

    Does this strategy require significant travel and in-person time?

    In-person presence amplifies it significantly but isn’t the only input. The content contribution — articles, frameworks, resources that RGL members find genuinely useful — builds presence in every chapter market without travel. The person who shows up at events AND provides consistent value between events compounds faster than someone doing either alone.

    Can this model be replicated in other industries?

    Yes, with one prerequisite: the network has to actually exist and have genuine trust value. A manufactured networking organization, or one where membership is purely transactional, doesn’t produce the same effect. The RGL works because the golf format builds real relationships and the industry focus means every room is full of people who actually do business together. The model transfers to any field where a genuine trust network exists and where sponsorship access is available — which is most industries, because most genuine trust networks are underwritten.



  • Replacing the Interviewer: What the Human Distillery App Can and Cannot Do

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    The extraction protocol works. The pivot signal lexicon is learnable. The four-layer descent can be taught. The question is whether it can be deployed without a trained human interviewer in the room — and if so, how much of the value survives the translation.

    This is the duplication problem at the center of the Human Distillery business model. Will can run an extraction session. An app cannot run the same session. But an app can run a version of the session — and for a large subset of extraction use cases, the version is sufficient.

    Understanding what transfers and what doesn’t is the whole architectural question.

    What Transfers to an App

    The four-layer question structure is codifiable. A stateful conversational agent — not a chatbot, a system that maintains a running knowledge map of what’s been surfaced and what’s still needed — can execute the question sequences in order, navigate the domain-specific question libraries for a given vertical, and detect the linguistic markers of pivot signals in real time.

    “It’s hard to explain” is detectable by NLP. Hedging patterns are detectable. Energy shifts in voice are detectable by acoustic analysis. Deflection to process — “the policy says…” — is detectable. The app can recognize these signals and adjust its question path, slowing down at tacit knowledge boundaries and applying the correct follow-up from the signal response library.
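
    As a sketch of how the linguistic layer of that detection might work, assuming a simple phrase-marker approach rather than any specific NLP toolkit (the marker lists and signal names here are illustrative):

        import re

        # Illustrative text-detectable pivot markers, keyed by signal type.
        # A production system would add acoustic and timing features; this is
        # the linguistic floor, not the ceiling.
        PIVOT_MARKERS = {
            "verbalization_boundary": [r"hard to explain", r"you just (kind of |sort of )?know"],
            "hedging": [r"generally speaking", r"in most cases", r"\bit depends\b"],
            "process_deflection": [r"the policy (says|is)", r"we're supposed to"],
        }

        def detect_pivot_signals(utterance: str) -> list[str]:
            """Return the pivot signal types whose markers appear in an utterance."""
            return [
                signal for signal, patterns in PIVOT_MARKERS.items()
                if any(re.search(p, utterance, re.IGNORECASE) for p in patterns)
            ]

    A detected signal routes the agent to the matching follow-up in the signal response library; the behavioral rule is to slow down at these moments rather than move on.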

    The processing pipeline from transcript to structured concentrate is fully automatable: chunking by topic boundary, entity extraction, claim isolation, confidence scoring, contradiction flagging across multiple sessions, multi-model distillation rounds. This is where AI earns its keep. A human doing this manually would take days per session. The pipeline does it in minutes.
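
    The stages compose into a single pass. A minimal sketch, where every stage function is a hypothetical placeholder for a model-backed step:

        def build_concentrate(transcript: str) -> dict:
            """Illustrative transcript-to-concentrate pipeline. Every called
            function is a hypothetical placeholder, not a shipped API."""
            chunks = chunk_by_topic(transcript)        # segment at topic boundaries
            entities = extract_entities(chunks)        # nodes for the entity graph
            claims = isolate_claims(chunks)            # one testable claim per record
            scored = score_confidence(claims)          # firsthand vs. secondhand, source count
            checked = flag_contradictions(scored)      # cross-session inconsistency flags
            return distill(entities, checked)          # multi-model distillation rounds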

    Domain-specific question libraries can be built from prior extractions and expanded with each new session. The more sessions the app runs in a given vertical, the richer its question library becomes. This is the compounding effect that makes the app more valuable over time.

    What Doesn’t Transfer

    Three things resist automation in ways that won’t be resolved by better models:

    Micro-hesitation reading. The half-second pause before an answer that signals the subject knows more than they’re about to say. The slight change in phrasing when someone moves from what they’re comfortable saying to what they actually think. These are real-time, embodied, relational signals. A text-based app misses them entirely. A voice app gets closer but still lacks the visual channel that carries a significant portion of this information.

    Protocol abandonment. The decision to stop following the four-layer sequence because the subject just said something unprompted that is more important than anything in the protocol. Expert interviewers make this call constantly. They recognize the thread that, if followed, goes somewhere the protocol would never reach. An app will follow the signal response library. It won’t recognize when the library should be put down.

    Trust calibration. Whether the subject is performing for the recording or actually sharing. This is not detectable from content analysis. It requires the social intelligence to know when to lower the formality, when to match the subject’s energy, when to say something self-deprecating to signal that this is a peer conversation and not an evaluation. Subjects share differently with someone they trust. The app cannot build that trust.

    The Honest Architecture

    The tiered model that emerges from this analysis:

    Tier 1 — App-led extraction. Well-mapped domains with accessible knowledge. The subject is cooperative. The question library is deep. The knowledge being sought is in Layers 1 and 2. The app handles the session. Will reviews the concentrate before delivery.

    Tier 2 — Human-led extraction with app processing. High-stakes sessions. Guarded subjects. Knowledge at the outer edge of verbalization (Layer 3 and 4). Will conducts the session. The app runs the processing pipeline. Will reviews and approves the concentrate.

    Tier 3 — Full human extraction and distillation. Strategic engagements. Subjects who will only speak candidly to a person they know. Knowledge so embedded that it requires real-time relational judgment to surface at all. Will does everything.
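
    The routing between tiers reduces to a small decision rule. A sketch, with the inputs and tier labels as assumptions:

        def route_session(deepest_layer: int, guarded: bool, strategic: bool) -> str:
            """Illustrative tier routing for an incoming extraction request."""
            if strategic:
                return "tier_3_full_human"   # only a known person gets the door open
            if guarded or deepest_layer >= 3:
                return "tier_2_human_led"    # human runs the session, app runs processing
            return "tier_1_app_led"          # app runs the session, human reviews output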

    The business model implication: Tier 1 is volume. Tier 3 is premium. The ratio shifts over time as the app’s question libraries deepen and its signal detection improves. What begins as mostly Tier 2 and 3 eventually becomes mostly Tier 1, with Will’s direct involvement reserved for the sessions where only a human can get the door open.

    The app is not a replacement for the protocol. It’s a multiplier for the protocol — allowing it to run at a scale that a single human operator never could, while preserving the human layer for the cases that actually require it.


  • Books for Bots: What a Knowledge Concentrate Actually Is and How It’s Built

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    A transcript is not a knowledge artifact. Neither is a summary. Both are containers for words. Neither is optimized for the thing that needs to consume them.

    When you capture an expert’s knowledge and then feed the transcript to an AI system, the AI gets the words. It does not get the structure. It does not know which claims are firsthand vs. secondhand. It cannot distinguish a confident assertion from a hedged one. It has no way to chain the decision logic — the “when X, do Y because Z” sequences that constitute the operational core of what the expert knows. It just has a long document full of things that may or may not be true, with no metadata to tell it which is which.

    This is why most knowledge capture projects fail to deliver on their promise. The content is there. The structure that makes it usable isn’t.

    A knowledge concentrate is the alternative. It is the distilled, structured artifact produced by the Human Distillery extraction protocol — smaller than a transcript, denser than any summary, and specifically formatted for the AI systems that will consume it.

    The Five Components of a Knowledge Concentrate

    1. The Entity Graph

    Every named concept, process, role, piece of equipment, regulation, and decision point that surfaces in extraction gets represented as a node. The edges between nodes are typed: causal, conditional, hierarchical, associative. The graph is not a list — it’s a map of relationships, and the relationships are the knowledge.

    An AI system with a list of entities knows vocabulary. An AI system with an entity graph knows how the domain works — how a change in one thing propagates to another, which concepts are upstream of which decisions, which relationships are conditional and which are structural.

    For a water damage restoration operation: the graph connects moisture readings to drying equipment selection to drying time estimates to invoice amounts to adjuster response patterns. None of those connections are in the documentation. All of them are in the head of a senior project manager who has run 400 jobs.
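
    A minimal sketch of that fragment as data, using plain tuples rather than a graph library. The node names come from the example above; the edge typing follows the four categories already listed:

        # Typed edges for the restoration fragment above.
        # Edge types: causal, conditional, hierarchical, associative.
        EDGES = [
            ("moisture_reading", "drying_equipment_selection", "causal"),
            ("drying_equipment_selection", "drying_time_estimate", "causal"),
            ("drying_time_estimate", "invoice_amount", "causal"),
            ("invoice_amount", "adjuster_response_pattern", "conditional"),
        ]

        def downstream_of(node: str) -> list[str]:
            """Everything a change to `node` propagates to, one hop out."""
            return [dst for src, dst, _ in EDGES if src == node]

    The list version of the same content would carry the node names and none of the propagation; the edges are where the knowledge lives.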

    2. Decision Logic

    The most directly usable component of the concentrate. Every when-then-because statement extracted from the session, structured as:

    • Condition: When this situation is present
    • Action: This is what we do
    • Because: This is why (the reasoning, not just the rule)
    • Exceptions: The cases where this breaks down
    • Confidence score: 0.0–1.0, based on how many independent sources confirmed it

    The “because” is what makes this different from a policy. A policy says do Y. A knowledge concentrate says do Y because Z, which means an AI system can recognize when Z is absent and adjust accordingly — rather than applying the rule in cases where the underlying condition that made the rule sensible doesn’t apply.

    The exceptions are equally important. Expert judgment is largely the accumulation of exceptions — the cases where the standard answer is wrong. Capturing those is the whole point of Layer 2 extraction.
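
    As a sketch, the same schema as a record type. The field names follow the bullets above; the example values are invented for illustration:

        from dataclasses import dataclass

        @dataclass
        class DecisionRule:
            condition: str         # when this situation is present
            action: str            # this is what we do
            because: str           # the reasoning, not just the rule
            exceptions: list[str]  # the cases where this breaks down
            confidence: float      # 0.0-1.0, by independent confirmation count

        rule = DecisionRule(
            condition="standing moisture readings in a crawlspace on an older property",
            action="scope the crawlspace before estimating the visible damage",
            because="below-grade moisture drives scope creep the initial inspection misses",
            exceptions=["no crawlspace access", "loss contained to upper floors"],
            confidence=0.67,  # invented: confirmed by two of three independent subjects
        )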

    3. Benchmarks

    Every number that surfaces in extraction: thresholds, timelines, costs, rates, ratios, counts. Stored with context, source count, and variance.

    A benchmark from a single extraction session has low confidence. The same benchmark confirmed by six independent subjects in the same domain and market has high confidence and is ready to be used as ground truth in an AI system’s reasoning. The concentrate tracks the difference.

    This is the component that makes the concentrate valuable as a competitive intelligence product. The numbers in an industry that everyone knows but nobody has published — the real margin thresholds, the actual response time expectations, the price per square foot that experienced operators actually charge vs. what appears in public pricing guides — these exist only in people’s heads. The concentrate captures them with provenance.
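
    A sketch of how that tracking might look in code. The confidence formula is an assumption made up for illustration (the six-source divisor echoes the six-subject threshold above); the concentrate itself only commits to storing context, source count, and variance per number:

        from statistics import mean, pstdev

        def benchmark_record(name: str, values: list[float], context: str) -> dict:
            """Aggregate one benchmark across independent sessions. The confidence
            rule is an illustrative assumption: more sources, less spread, more trust."""
            avg = mean(values)
            spread = (pstdev(values) / avg) if avg else 0.0   # relative variance
            confidence = min(1.0, len(values) / 6) * max(0.0, 1 - spread)
            return {
                "benchmark": name,
                "context": context,
                "value": avg,
                "source_count": len(values),
                "variance": round(spread, 3),
                "confidence": round(confidence, 2),
            }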

    4. Tacit Signatures

    The things that are hard to explain. Captured as best they can be verbalized, with a confidence flag.

    A tacit signature sounds like: “The drywall feels wrong before the moisture meter confirms it.” Or: “You can tell within the first five minutes of a call whether the adjuster is going to be cooperative or difficult, and it’s not anything specific they say.” These are not mysticism. They are pattern recognition operating below the level of conscious articulation — real knowledge that has never been verbalized because no one asked slowly enough.

    The confidence flag on tacit signatures signals to the consuming AI: this is approximate. This is the residue of knowledge the extraction process got close to but couldn’t fully surface. Don’t treat it as ground truth. Treat it as a signal that this is where human judgment is concentrated, and flag it for human review when it’s relevant.

    5. Provenance

    Traceable but anonymized. For every claim in the concentrate: how many independent sources confirmed it, what their roles were, what domain and market the data came from, and whether the claim is individual knowledge or cross-validated pattern.

    Provenance is what makes the concentrate auditable. An AI system that gives an answer based on a knowledge concentrate should be able to say: this answer comes from claim X, which was confirmed by three independent subjects with 10+ years of experience in this domain. That’s a very different epistemic standing than “I was trained on this.”

    The Density Test

    A useful heuristic for evaluating whether you have a transcript, a summary, or a true knowledge concentrate:

    A transcript contains everything that was said. It’s large, raw, and unstructured. An AI can search it but cannot reason from it efficiently.

    A summary contains the main points. It’s smaller. It has lost specificity, exceptions, confidence information, and relationships. It’s optimized for human reading, not AI consumption.

    A knowledge concentrate is smaller than the summary in tokens but larger in information. It contains relationships the summary dropped. It contains confidence scores the summary didn’t capture. It contains decision logic the summary flattened into assertions. An AI system can reason from it, not just retrieve from it.

    If what you have could be produced by someone reading a transcript and taking notes, it’s a summary. A knowledge concentrate requires the extraction protocol — it can only be produced from a session where the tacit layer was deliberately surfaced.


  • The Human Distillery: A Methodology for Extracting Tacit Knowledge for AI Systems

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    Every organization has two kinds of knowledge. The documented kind — processes, policies, SOPs, training materials — lives in manuals and wikis. The other kind lives in people’s heads: the adjustments made without thinking, the thresholds learned from expensive mistakes, the pattern recognition that executes in a second but couldn’t survive a PowerPoint slide.

    The first kind is easy to feed into an AI system. The second kind is what makes the organization actually work. And it almost never gets captured before it walks out the door.

    This gap — between what’s written and what’s known — is where most enterprise AI implementations quietly fail. The system gets the documentation. It never gets the knowledge. The result is an AI that gives the same answer a new employee would give, while the 15-year veteran shakes their head and does it differently.

    The Human Distillery methodology exists to close that gap. It is a structured extraction protocol for converting tacit knowledge into dense, structured artifacts — books for bots — that AI systems can actually use. Not summaries. Not transcripts. Knowledge concentrates: information-rich artifacts that encode relationships, decision logic, and confidence alongside the facts themselves.

    This article is the methodology reference. It covers what tacit knowledge is and why it resists standard capture methods, the four-layer extraction protocol that surfaces it, the pivot signal lexicon that tells you when you’re close, what a knowledge concentrate looks like as a structured artifact, and where human judgment remains irreplaceable in the pipeline.


    Why Standard Methods Don’t Work

    The instinct when trying to capture organizational knowledge is to reach for one of three tools: a survey, an interview, or a documentation request. All three fail at tacit knowledge for the same reason: they ask people what they know. Tacit knowledge is knowledge people don’t know they know. It operates below the level of conscious articulation. You cannot survey it out of someone. You cannot ask them to write it down. You have to create the conditions under which it surfaces — and then recognize it when it does.

    Forms and surveys capture what people think they do. Conversations capture what they actually do and why. The difference between those two things is the entire product.

    A 20-year insurance adjuster asked “what’s your process for evaluating a water damage claim?” will give you the documented version: inspect the loss, review the policy, scope the damage, issue the estimate. This is accurate and useless. Ask them about a claim that went sideways and they will, unprompted, tell you that they always check the crawlspace first on older properties in this zip code because the contractor community there has a pattern of scope creep on foundation moisture that the initial inspection never catches. That’s the knowledge. It lives in the deviation from the process, not the process itself.


    The Four-Layer Descent

    The extraction protocol descends through four distinct layers in sequence. Each layer unlocks the next. Skipping a layer produces thin output. Rushing a layer produces performed output. The full descent, executed correctly, surfaces knowledge the subject didn’t know they were carrying.

    Phase 0: Disarmament

    Before any extraction begins, the status dynamic has to be neutralized. The subject needs to stop performing expertise for an evaluator and start explaining their world to a curious outsider. The difference in what comes out is dramatic.

    The disarmament move: position yourself as someone who genuinely doesn’t know. “I’ve never seen a job like this — walk me through it like I’m shadowing you.” This does two things. It forces explanation of steps the subject considers so obvious they wouldn’t otherwise mention — which is exactly where embedded knowledge concentrates. And it signals that there’s no correct answer being evaluated, which reduces the filtering that kills tacit knowledge capture.

    Open with failure. “Tell me about a job that went sideways” surfaces edge cases, exceptions, and judgment calls that success stories never reveal. People tell the truth in their failure stories. They’re not protecting anything.

    Layer 1: Surface Protocol

    The question: “What’s your process when X happens?”

    What it gets: The documented version. What the subject would write in an SOP. What they’d tell a new hire on day one. Accurate. Insufficient. Necessary baseline.

    Why you need it: The surface protocol establishes the frame. It’s the map. Everything that comes after is about finding where the territory diverges from the map — and those divergences are where the knowledge lives.

    Layer 2: Exception Probing

    The question: “When do you deviate from that?”

    What it gets: The adaptive layer. The judgment calls that experience produces. The cases where the checklist gets ignored because the situation demands something the checklist can’t accommodate. This is the first layer where genuine tacit knowledge begins to surface.

    The follow-up sequence: “And when does that happen?” → “How do you know it’s that situation?” → “What would you have done three years ago that you wouldn’t do now?” Each question peels back one more layer of accumulated judgment.

    Layer 3: Sensory and Somatic

    The question: “How do you know it’s that and not something else?”

    What it gets: Pattern recognition so ingrained it operates below conscious awareness. The knowledge the subject has never verbalized because no one has ever asked them to. This is the hardest layer to surface and the most valuable thing in the concentrate.

    What it sounds like: “The smell is different.” “The drywall feels wrong.” “Something about the way the insurance company rep is phrasing the emails.” These are not vague — they’re ultra-specific to a domain. The job is to slow down at these moments and press: “Describe the smell.” “What does wrong feel like compared to right?” “What in the phrasing specifically?” The subject usually thinks they can’t explain it. They can. They just haven’t been asked slowly enough.

    Layer 4: Counterfactual Pressure

    The question: “What would break if you weren’t here tomorrow?”

    What it gets: The knowledge hierarchy. What actually matters versus what’s ritual. Most organizations don’t know which is which until the person who knows leaves. This layer surfaces the load-bearing knowledge — the things that if absent would produce visible failures, not just suboptimal outcomes.

    The follow-up: “Who else knows that?” The answer is almost always “no one” or “maybe [one person].” That’s the knowledge risk. That’s also the product.


    The Pivot Signal Lexicon

    Proximity to tacit knowledge produces specific signals in conversation. Recognizing them in real time is the skill that separates a good extraction session from a great one. Miss these signals and you stay in Layer 1. Catch them and you descend.

    Signal: “It’s hard to explain…”
    What it means: The subject is about to verbalize something they have never articulated before. This is the most valuable signal in the lexicon.
    The move: Slow everything down. “Try anyway.” Do not fill the silence. Do not offer a simpler question. Wait.

    Signal: “You just kind of know”
    What it means: Layer 3 boundary. The subject is pointing directly at tacit knowledge they don’t know how to surface.
    The move: “Walk me through the last time you just knew. What did you notice first?”

    Signal: Hedging and qualifiers
    What it means: The subject is filtering. They have an answer but aren’t sure it’s acceptable to say. “Generally speaking…” “In most cases…” “It depends…” are all hedges.
    The move: “Off the record — what actually happens?” Or: “What’s the version you’d tell a colleague vs. what you’d put in the manual?”

    Signal: Sudden energy or animation
    What it means: You’ve touched something they care about. The subject’s pace increases, their posture changes, they lean in. This is a live thread to a knowledge cluster.
    The move: Follow it immediately. Drop the protocol. “Tell me more about that.” The protocol can resume. This thread may not come back.

    Signal: Deflection to process
    What it means: The subject is avoiding the judgment layer. When asked what they do, they tell you what the process says to do. Often accompanied by “the policy is…” or “we’re supposed to…”
    The move: “But what do you do when that breaks down?” The emphasis on ‘you’ reframes the question from institutional to personal, which is where the knowledge actually lives.

    Signal: Pausing before a number
    What it means: The subject is calculating from experience, not retrieving from documentation. The pause is the gap between “what the spec says” and “what I know from doing this 200 times.”
    The move: Ask for the number, then: “Where does that come from?” The answer to the second question is often the most valuable thing in the session.

    Signal: Unprompted stories
    What it means: The subject has moved from answering your questions to accessing their own knowledge map. Stories they tell without being asked are almost always pointing at something important.
    The move: Let it run. If the story ends without the embedded knowledge surfacing, ask: “What made that one different from a normal job?”

    The Knowledge Concentrate: What the Output Actually Looks Like

    A transcript is raw. A summary is smaller but barely denser in information. A knowledge concentrate is smaller than either and more information-rich than both — because it encodes relationships, decision logic, and confidence alongside the facts themselves.

    The schema for a knowledge concentrate has five components:

    Entity graph. Every named concept, process, person-role, piece of equipment, and decision point that surfaces in the extraction, mapped as nodes with typed edges between them. Not a list — a graph. The relationships are the knowledge. The entities alone are just vocabulary.

    Decision logic. Every when-then-because statement extracted from the session. “When the moisture readings are above X in a crawlspace with Y flooring type, we always do Z because A.” Structured with confidence scores: is this firsthand knowledge, observed pattern, or secondhand information?

    Benchmarks. Every number that surfaces in extraction — thresholds, timelines, costs, rates, counts — with context, source count, and variance. A benchmark from one interview has low confidence. The same benchmark confirmed across six interviews in the same market has high confidence and is ready to be used as ground truth.

    Tacit signatures. The things that are hard to explain — captured as best they can be verbalized, with a confidence flag that signals to the AI system consuming them: this is approximate. This is the residue of knowledge that the extraction process got close to but couldn’t fully surface. It’s still valuable. It tells the AI where human judgment is concentrated.

    Provenance. Traceable but anonymized. How many sources contributed to each claim. Whether a given piece of knowledge is individual or cross-validated. What industry and market it came from.

    An AI system consuming a knowledge concentrate in this format doesn’t just know facts — it knows which facts to trust, how to chain them into decisions, and where the knowledge is thin enough that human judgment should be called in.
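
    Put together, one entry in a concentrate might look like the following fragment. The keys mirror the five components; every value is invented for illustration, loosely following the crawlspace example from earlier:

        concentrate_entry = {
            "claim": "check the crawlspace first on older properties in this market",
            "decision_logic": {
                "condition": "water loss on a pre-1980 property with a crawlspace",
                "action": "inspect the crawlspace before scoping visible damage",
                "because": "foundation moisture scope creep evades the standard inspection",
                "exceptions": ["no crawlspace access"],
            },
            "entities": ["crawlspace", "scope_creep", "initial_inspection"],
            "benchmarks": [],          # none surfaced for this claim
            "tacit_signature": None,   # fully verbalized; no approximate residue
            "provenance": {
                "source_count": 3,
                "roles": ["senior adjuster", "project manager", "estimator"],
                "market": "anonymized",
                "cross_validated": True,
            },
            "confidence": 0.8,
        }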


    What the App Can Do and What It Can’t

    The four-layer protocol and the pivot signal lexicon can be partially codified. A stateful conversational agent — not a chatbot, a genuinely stateful system that maintains a running knowledge map of what’s been surfaced and what’s still needed — can execute the question sequences, detect linguistic pivot signals, navigate domain-specific question libraries, and run the processing pipeline from transcript to structured concentrate.

    What it cannot do is the thing that makes the difference between a good extraction and a complete one:

    It cannot read the half-second of hesitation before an answer that signals the subject knows more than they’re about to say. It cannot decide, in the middle of an unprompted story, that this tangent is the most important thing in the session and the protocol should be abandoned to follow it. It cannot calibrate trust — cannot sense whether the subject is performing for the recording or actually sharing, and adjust accordingly. It cannot distinguish a valuable tangent from genuine noise in real time.

    These are not gaps that better models will close. They are inherently relational and embodied. They require a human who is genuinely present in the conversation, not processing a transcript of it.

    The honest architecture for a distillery operation is therefore tiered. The app handles extraction volume — the sessions where the knowledge is relatively accessible, the domain is well-mapped, and the question library is sufficient. The human handles the sessions where the stakes are highest, the subject is guarded, or the knowledge being sought is at the outer edge of what can be verbalized. And the human is always the quality gate on the final concentrate, regardless of which path produced it.
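
    Stated as code, the triage is almost trivial. A hypothetical sketch, with the inputs and thresholds as placeholders:

```python
def route_session(stakes: str, subject_guarded: bool, knowledge_depth: str) -> str:
    """Route an extraction session to the app or a human interviewer.

    stakes: "low" | "high" -- the cost of a shallow or wrong concentrate
    knowledge_depth: "accessible" | "edge" -- how verbalizable the target knowledge is
    """
    if stakes == "high" or subject_guarded or knowledge_depth == "edge":
        return "human"
    return "app"  # well-mapped domain, accessible knowledge: the volume path
```

    Whichever path runs the session, the human review of the final concentrate is not optional.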


    Why This Works in Any Industry

    Tacit knowledge is not a property of any particular field. It is a property of human expertise at depth. Wherever humans have been doing something long enough to develop judgment that exceeds documentation — which is everywhere — the distillery protocol applies.

    The domain changes the question library. The pivot signals are universal. The four-layer structure works in restoration, in legal practice, in medicine, in financial services, in manufacturing, in competitive sports coaching, in culinary production. Any field where experience produces something that training cannot replicate is a field where a knowledge concentrate has value.

    The buyers are the organizations trying to make that knowledge portable. The AI system that needs to give the same answer a 20-year veteran would give. The consultant whose insights live only in their head. The franchise trying to replicate the judgment of its best operators across 400 locations. The company that just lost its most important employee and is only now discovering what they actually knew.

    The product is not content. It is not a report. It is a structured knowledge artifact that makes someone else’s irreplaceable expertise replicable — at least partially, at least for the cases the documentation currently handles worst.

    That’s the distillery. Extract. Distill. Deploy.


    Frequently Asked Questions

    How long does a single extraction session take?

    A full four-layer descent with one subject takes 60–90 minutes. Rushing below 45 minutes consistently produces shallow output: the session ends before Layer 3 is reached. Three to five sessions with different subjects in the same domain produce a concentrate with enough cross-validation to have meaningful confidence scores on the decision logic and benchmarks.

    What industries is this most applicable to?

    Any industry where experience produces judgment that documentation can’t replicate. The highest-value applications are in fields with expensive mistakes (medical, legal, engineering), fields with long apprenticeship periods (skilled trades, finance, consulting), and fields where the knowledge is currently locked in one or two people (most small and mid-size businesses).

    How is this different from a McKinsey-style knowledge management engagement?

    Traditional knowledge management captures process documentation — what should happen. The distillery protocol captures judgment documentation — what actually happens, and why, and when the standard answer is wrong. The output is structured for AI consumption, not human reading. The concentrate is designed to be queried, not read.

    What happens to the concentrate after it’s produced?

    The concentrate is delivered to the client for ingestion into their AI infrastructure — as a RAG knowledge base, as fine-tuning data, as a reference layer for their AI assistant, or as structured context for their customer-facing AI systems. The format is designed to be immediately usable without further transformation. The provenance metadata ensures the client knows which claims to trust at what confidence level.
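
    As a sketch of what “immediately usable” can mean in practice, each decision-logic entry can be flattened into a retrieval document that carries its confidence and provenance as metadata. The field names follow the hypothetical schema sketched earlier:

```python
def to_rag_documents(concentrate: dict) -> list[dict]:
    """Flatten decision logic into documents an embedding store can ingest."""
    docs = []
    for i, rule in enumerate(concentrate.get("decision_logic", [])):
        docs.append({
            "id": f"rule-{i}",
            "text": f"When {rule['when']}, {rule['then']}, because {rule['because']}.",
            "metadata": {  # the retrieval layer can filter or rank on these
                "confidence": rule["confidence"],
                "source_count": rule.get("source_count", 1),
            },
        })
    return docs
```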

    Can the extraction protocol be deployed without a trained human interviewer?

    Partially. A well-built stateful conversational agent can execute the question sequences, detect linguistic pivot signals, and run the processing pipeline. What it cannot do is the real-time relational judgment that surfaces the deepest knowledge: the hesitation reading, the trust calibration, the decision to abandon the protocol and follow an unexpected thread. For accessible knowledge in well-mapped domains, the app is sufficient. For the knowledge at the outer edge of what can be verbalized, the human remains in the loop.


  • Four-Layer Data Architecture: Building Around Behaviors, Not Tools

    Four-Layer Data Architecture: Building Around Behaviors, Not Tools

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    The instinct, when building a complex operation, is to find one tool that can hold everything. One source of truth. One dashboard. One system of record for all data types.

    This instinct is wrong, and it produces exactly the kind of system it’s trying to avoid: a single tool that does everything poorly, a migration project that costs more than the original implementation, and a team that has learned to distrust the data because the tool was never designed for the behaviors it was forced to support.

    The behavior-first alternative for data architecture doesn’t start with “what tool can hold everything.” It starts with: what are the distinct behaviors this data needs to support, and which tool is genuinely best suited for each one?

    The Four Data Behaviors

    In a multi-site AI-native content operation, four distinct data behaviors emerge:

    Machine-generated operational data needs to be written and read by automated systems at high speed. Batch job results, embedding vectors, image processing logs, Cloud Run execution histories. No human looks at this data directly. It needs to be fast, cheap, and structured for programmatic access. GCP serves this behavior — Firestore for structured operational state, Cloud Storage for large artifacts, BigQuery for analytical queries across the full dataset.

    Human-actionable signals need to be displayed clearly enough that a person can take action without wading through noise. Site health alerts, content gaps, client status changes, task assignments. This data needs to be readable, filterable, and connected to the people who need to act on it. Notion serves this behavior — not because it’s the most powerful database, but because it’s the most human-readable one, with views that can surface exactly the signal each role needs.

    Published content needs to be delivered to web visitors and search engines at performance standards those audiences require. WordPress serves this behavior. It was designed for it. The mistake is asking WordPress to also serve as the storage layer for unpublished content, the analytics layer for content performance, or the task management layer for content production. It wasn’t designed for those behaviors and it’s not good at them.

    Files and documents need to be stored, versioned, and shared across tools and collaborators. Google Drive serves this behavior. Skills, SOPs, brand guidelines, exported data — anything that exists as a file rather than as structured data belongs in Drive, not in a database trying to handle file attachments as a secondary feature.
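
    A sketch of the separation in code: the machine state lands in Firestore, and only a human-actionable signal is pushed to Notion. The collection name, database ID, and property names are placeholders, not a real configuration:

```python
import requests
from google.cloud import firestore

def record_batch_result(job_id: str, stats: dict, notion_token: str, db_id: str) -> None:
    # Machine-generated operational data: written where machines read it.
    firestore.Client().collection("batch_jobs").document(job_id).set(stats)

    # Human-actionable signal: surfaced in Notion only when action is needed.
    if stats.get("failed", 0) > 0:
        requests.post(
            "https://api.notion.com/v1/pages",
            headers={
                "Authorization": f"Bearer {notion_token}",
                "Notion-Version": "2022-06-28",
                "Content-Type": "application/json",
            },
            json={
                "parent": {"database_id": db_id},
                "properties": {
                    "Name": {"title": [{"text": {
                        "content": f"Batch {job_id}: {stats['failed']} failures"
                    }}]},
                },
            },
            timeout=30,
        )
```

    No human ever opens the Firestore document, and Notion never sees the thousand rows that succeeded.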

    Why Separation Produces Better Systems

    A four-layer architecture feels like more complexity than a single-tool approach. In practice it produces less complexity, because each tool is operating within its design constraints instead of being stretched beyond them.

    The signal-to-noise problem in most dashboards comes from forcing machine-generated data and human-actionable signals into the same view. The machine data overwhelms the human signals. The solution is usually “better filtering” — which is the wrong answer. The right answer is storing machine data where machines can read it and surfacing human signals where humans can act on them.

    The performance problem in most content operations comes from asking WordPress to be a content management system when it’s a content delivery system. The content that belongs in a CMS — drafts, revisions, briefs, research notes — should be in Notion. The content that belongs in a CDS — published articles, page templates, media files — should be in WordPress. When you separate these, both tools perform their actual function better.
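
    The handoff between the two can be as thin as one publish call. A hypothetical sketch over the WordPress REST API, with the URL, credentials, and fields as placeholders:

```python
import requests

def publish_draft(site_url: str, user: str, app_password: str, title: str, html: str) -> int:
    """Push an approved draft (authored and held in Notion) to WordPress as a live post."""
    resp = requests.post(
        f"{site_url}/wp-json/wp/v2/posts",
        auth=(user, app_password),  # WordPress application password
        json={"title": title, "content": html, "status": "publish"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # WordPress holds only the published artifact
```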

    The data loss problem in most operations comes from treating the most convenient tool as the system of record. When content lives only in WordPress, a site failure is a data failure. When operational state lives only in a Cloud Run service, a deployment change is a state failure. The four-layer architecture ensures that each data type has a permanent home in the tool designed to hold it — and that the tools interact through APIs rather than through manual migration.


  • ADHD and AI-Native Operations: Designing Around the Behavior, Not Against It

    ADHD and AI-Native Operations: Designing Around the Behavior, Not Against It

    Tygart Media Strategy
    Volume Ⅰ · Issue 04 · Quarterly Position
    By Will Tygart
    Long-form Position
    Practitioner-grade

    The conventional wisdom about ADHD and work is built around a simple premise: the ADHD brain is deficient in the behaviors that work requires, and management strategies exist to compensate for those deficiencies. More structure. Better schedules. Accountability systems. Tools designed to impose the consistency the brain doesn’t generate naturally.

    This is tool-first thinking applied to a human brain. And like most tool-first thinking, it produces systems that fight the behavior instead of serving it.

    The behavior-first alternative asks a different question: what does the ADHD brain actually do, at its best, and what system design would allow it to do more of that?

    What the ADHD Brain Actually Does

    Three behaviors characterize high-functioning ADHD cognition when the environment supports them:

    Hyperfocus. Sustained, intense concentration that arrives unbidden and runs at extraordinary depth for an unpredictable duration. Not concentration on demand — concentration that seizes the operator when a problem activates the interest system. The output of a hyperfocus session is disproportionate to the time invested, and the quality often exceeds what deliberate, scheduled work produces.

    Interest-based attention routing. The ADHD attention system allocates based on interest, novelty, urgency, or challenge — not importance. High-interest work gets exceptional focus. Low-interest work gets almost none. This is not a failure of will. It’s a feature of a different attentional architecture.

    Cross-domain pattern recognition. Rapid context-switching, which looks like distractibility in sequential-task environments, produces something valuable in environments that reward synthesis: the ability to connect observations across unrelated domains and identify patterns that single-domain experts miss.

    The System That Serves These Behaviors

    An AI-native operation designed around these behaviors looks different from a conventional productivity system:

    For hyperfocus: The system captures whatever the hyperfocus session produces — immediately, in full, without requiring the operator to organize it mid-session. The Second Brain stores the output. The cockpit session for the next day picks up the thread. The non-linearity of hyperfocus (jumping between connected insights, building in spirals) becomes productive because the AI can hold the full context of the spiral across sessions.

    For interest-based attention: Low-interest, deterministic work routes to automated pipelines. Haiku runs taxonomy fixes at scale. Cloud Run handles scheduled publishing. Batch jobs process a hundred posts while the operator is doing something that has activated their interest system. The attention that would have been coerced onto low-interest work is freed for the high-interest work where ADHD attention genuinely excels.

    For pattern recognition: The cross-domain synthesis that ADHD cognition produces naturally — connecting a restoration industry CRM insight to an AI architecture principle to a neurodiversity research finding — is exactly what generates the novel frameworks that constitute a knowledge operation’s core asset. This isn’t compensated for. It’s the product.
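
    A sketch of the kind of routing described above for interest-based attention, assuming the Anthropic Python SDK; the model string, prompt, and category handling are illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def fix_taxonomy(title: str, excerpt: str, categories: list[str]) -> str:
    """Ask a small, cheap model to pick the right category for one post."""
    message = client.messages.create(
        model="claude-3-haiku-20240307",  # illustrative small-model choice
        max_tokens=20,
        messages=[{
            "role": "user",
            "content": (
                f"Pick exactly one category from {categories} for this post.\n"
                f"Title: {title}\nExcerpt: {excerpt}\n"
                "Reply with the category name only."
            ),
        }],
    )
    return message.content[0].text.strip()
```

    Run across a hundred posts in a batch job, this costs pennies and zero operator attention.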

    The Architecture Principle

    The systems that emerged from designing around ADHD constraints are not ADHD-specific. They are better systems. External working memory (the Second Brain) outperforms internal working memory for complex multi-client operations regardless of neurology. Routing low-interest, deterministic work to automation is better for any operator. Pre-staged context reduces friction for everyone.

    The ADHD constraints forced designs that a neurotypical operator would also benefit from — because the constraints that neurodivergence makes extreme are present in milder form in everyone. The behavior-first design process, applied to an ADHD brain, produced infrastructure. The same process, applied to any operation, produces the same result: systems that serve the actual behavior, compound over time, and don’t require the operator to fight their own cognition to function.