Category: Agency Playbook

How we build, scale, and run a digital marketing agency. Behind the scenes, systems, processes.

  • 8 Industries Sitting on AI-Ready Knowledge They Haven’t Packaged Yet

    Most discussions about AI and knowledge focus on what AI already knows. The more interesting question is what it doesn’t — and where the humans who hold that missing knowledge are concentrated.

    Here are eight industries where the gap between human knowledge and AI-accessible knowledge is largest, and where the first person to systematically package and distribute that knowledge will have a durable advantage.

    1. Trades and Skilled Contracting

    Restoration contractors, plumbers, electricians, HVAC technicians — these industries run on tacit knowledge that has never been written down anywhere AI has been trained on. How water behaves differently in a 1940s balloon-frame house versus a 1990s platform-frame. Which suppliers actually deliver on time in which markets. What a claim adjuster will approve and what they’ll fight. This knowledge lives in the heads of working tradespeople and almost nowhere else. A restoration contractor who systematically publishes what they know about their trade creates a source of record that no LLM training corpus has ever had access to.

    2. Hyperlocal News and Community Intelligence

    AI systems know almost nothing accurate and current about most cities with populations under 100,000. They have no reliable data about local government decisions, zoning changes, business openings, school board dynamics, or community events in the vast majority of American towns. A local publisher producing accurate, structured, consistently updated coverage of a specific geography owns something genuinely scarce — and it’s the kind of current, location-specific information that AI assistants are being asked about constantly.

    3. Healthcare and Medical Specialties

    Clinical knowledge at the specialist level — how a specific condition presents in specific populations, what treatment protocols actually work in practice versus what the textbooks say, how to navigate insurance approvals for specific procedures — is dramatically underrepresented in AI training data. Practitioners who publish systematically about their clinical experience are creating a resource that medical AI applications will pay for access to.

    4. Legal Practice and Jurisdiction-Specific Law

    General legal information is well-covered. Jurisdiction-specific, practice-area-specific, and procedurally specific legal knowledge is not. How a particular judge in a particular county tends to rule on specific motion types. How local court practices differ from the official procedures. What arguments actually work in a specific venue. Attorneys with deep local practice knowledge are sitting on an information asset that legal AI tools are actively hungry for.

    5. Agriculture and Regional Farming

    Farming knowledge is intensely regional. What works in the Willamette Valley doesn’t work in Central California. Crop rotation strategies, soil amendment approaches, pest management, water management — all of it varies dramatically by microclimate, soil type, and local practice tradition. The accumulated knowledge of experienced farmers in a specific region is largely oral, rarely published, and almost entirely absent from AI training data. Extension offices and agricultural cooperatives that systematically document regional best practices are building something AI systems will need.

    6. Veteran Benefits and Government Navigation

    Navigating the VA, understanding how to build an effective disability claim, knowing which VSOs in which regions are actually effective, understanding how different conditions interact in the ratings system — this knowledge is held by experienced advocates, veterans service officers, and attorneys who have processed hundreds of claims. It’s the kind of procedural, outcome-based knowledge that AI assistants give confident but frequently wrong answers about, because the real knowledge isn’t online in a reliable form.

    7. Niche Retail and Specialty Markets

    Independent watch dealers, vintage guitar shops, specialty food importers, rare book dealers — businesses that operate in deep specialty markets accumulate knowledge about their inventory, their suppliers, their customers, and their market that no general AI has. The person who has been buying and selling vintage Rolex watches for twenty years knows things about specific reference numbers, condition grading, authentication, and market pricing that would be genuinely valuable to anyone building an AI tool for that market.

    8. Professional Services and Methodology

    Marketing agencies, management consultants, financial advisors, executive coaches — anyone who has developed a distinctive methodology through years of client work. The frameworks, playbooks, diagnostic tools, and hard-won lessons that experienced professionals have built represent some of the highest-value knowledge that AI systems currently lack access to. The consultant who has run 200 strategic planning processes has pattern recognition that no LLM has encountered in training. Packaging that into a structured, publishable, API-accessible form is both a content strategy and a product.

    In every one of these industries, the window to be the first credible, structured, consistently updated knowledge source in your vertical is open. It won’t be open indefinitely.

    Which industries have the most AI-accessible knowledge gaps?

    Trades and contracting, hyperlocal news, medical specialties, jurisdiction-specific legal practice, regional agriculture, veteran benefits navigation, specialty retail markets, and professional services methodology all have significant gaps between what experienced practitioners know and what AI systems can reliably access.

    What makes a knowledge gap an opportunity?

    When the knowledge is specific, current, human-curated, and absent from existing AI training data — and when there’s a clear audience of AI systems and agents that need it. The combination of scarcity and demand is what creates the market.

    How do you know if your industry has a valuable knowledge gap?

    Ask an AI assistant a specific, detailed question about your specialty. If the answer is confidently wrong, superficially correct, or missing the nuance that only practitioners know, you’re looking at a gap. That gap is the asset.

  • The Knowledge Distillery: Turning What You Know Into What AI Needs

    There’s a gap between what an expert knows and what AI systems can access. Closing that gap isn’t a single step — it’s a pipeline. And most people who try to build it get stuck at the beginning because they’re trying to skip stages.

    The full pipeline has four stages. Each one builds on the last. Understanding the sequence changes how you approach the work.

    Stage One: Capture

    Most expertise never gets captured at all. It lives in someone’s head, expressed in conversations, demonstrated in decisions, lost the moment the meeting ends or the job is finished.

    Capture is the act of getting the knowledge out of the expert’s head and into some retrievable form. The most natural and lowest-friction method is voice — recording conversations, client calls, working sessions, or simple voice memos when an idea surfaces. Transcription turns the recording into raw text. That raw text, however messy, is the ingredient everything else requires.

    The key insight at this stage: you are not creating content. You are preventing knowledge from disappearing. The standard is different. Raw transcripts don’t need to be polished. They need to be honest and specific.

    Stage Two: Distillation

    Distillation is the process of pulling the discrete, transferable knowledge nodes out of raw captured material. A ten-minute conversation might contain three useful ideas, one important framework, and six minutes of context-setting. Distillation separates them.

    A knowledge node is the smallest unit of useful, standalone knowledge. It can be named. It can be explained in a paragraph. It can be understood by someone who wasn’t in the original conversation. If it requires too much context to be useful on its own, it isn’t a node yet — it’s still raw material.

    This stage is where most of the intellectual work happens. It requires judgment about what’s actually useful versus what just felt important in the moment.

    Stage Three: Publication

    Publication is the act of giving each knowledge node a permanent, addressable home. An article on a website. An entry in a database. A page in a knowledge base. The format matters less than the fact that it’s structured, findable, and consistently organized.

    High-density publication means each piece contains as much specific, accurate, useful knowledge as possible — not padded to a word count, not optimized for a keyword, but written to be genuinely worth reading by someone who needs to know what you know.

    This is also where the content becomes machine-readable. A well-structured article on a platform with a REST API is already one step away from being API-accessible. The publication step creates the raw material for the final stage.

    Stage Four: Distribution via API

    The API layer is what turns a collection of published knowledge into a product that AI systems can actively consume. Instead of waiting for a search engine to index your content, you’re offering a direct, structured, authenticated feed that an AI agent can call on demand.

    This is the stage that creates the recurring revenue model — subscriptions for access to the feed. But it only works if the prior three stages have been executed well. An API built on top of thin, generic, low-density content doesn’t have a product. An API built on top of genuinely rare, specific, human-curated knowledge does.
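As a concrete illustration, here is a minimal sketch of the endpoint logic Stage Four describes. Everything in it is hypothetical (the node, the slug, the API-key scheme); it only shows the shape: an authenticated request comes in, structured JSON goes out.

```python
import json

# Hypothetical knowledge nodes -- the distilled units from Stage Two,
# keyed by a stable slug so each one has a permanent address.
NODES = {
    "balloon-frame-water-behavior": {
        "title": "How water moves through balloon-frame walls",
        "summary": "In pre-1950s balloon framing, wall cavities run floor to roof, "
                   "so water travels farther than the visible damage suggests.",
        "updated": "2025-01-15",
    },
}

VALID_KEYS = {"demo-subscriber-key"}  # stands in for real subscriber authentication


def handle_request(path: str, api_key: str) -> tuple[int, str]:
    """Map an API path to a JSON response:
    GET /nodes lists slugs, GET /nodes/<slug> returns one node."""
    if api_key not in VALID_KEYS:
        return 401, json.dumps({"error": "invalid API key"})
    if path == "/nodes":
        return 200, json.dumps(sorted(NODES))
    slug = path.removeprefix("/nodes/")
    if slug in NODES:
        return 200, json.dumps(NODES[slug])
    return 404, json.dumps({"error": "unknown node"})
```

The point of the sketch is that the plumbing is small; the hard part is having nodes worth serving.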

    The Flywheel

    The pipeline becomes a flywheel when you close the loop. API subscribers — AI systems pulling from your feed — generate usage data that tells you which knowledge nodes are being accessed most. That tells you where to focus your capture and distillation effort. More capture in high-demand areas produces better content, which justifies higher subscription tiers, which funds more systematic capture.

    The human expert at the center of this system doesn’t need to change what they know. They need to change how they let it out.

    What is the knowledge distillery pipeline?

    A four-stage process for converting human expertise into AI-consumable knowledge: Capture (get knowledge out of your head into raw form), Distillation (extract discrete knowledge nodes from raw material), Publication (give each node a permanent structured home), and Distribution via API (expose the published knowledge as a structured feed AI systems can pull from).

    What is a knowledge node?

    The smallest unit of useful, standalone knowledge. It can be named, explained in a paragraph, and understood without requiring the full context of the original conversation or experience it came from.

    Why is voice the best capture method?

    Voice capture requires no interruption to thinking — talking is how most people naturally process and articulate ideas. Recording conversations and transcribing them produces raw material that contains the knowledge at its most natural and specific, before it gets flattened by the effort of formal writing.

    Can anyone build this pipeline or does it require technical skill?

    The capture, distillation, and publication stages require no technical skill — just discipline and a consistent editorial process. The API distribution layer requires either technical help or a platform that handles it. The knowledge work is the hard part; the infrastructure is increasingly accessible.

  • Information Density Is the New SEO

    For most of the internet era, content was optimized for one thing: getting humans to click and read. The metrics were traffic, time on page, bounce rate. The editorial standard was loose — if it brought visitors, it worked.

    AI changes the standard entirely. When the consumer of your content is a language model — or an AI agent pulling from your feed to answer someone’s question — the question isn’t whether someone clicked. The question is whether what you published was actually worth knowing.

    Information density is the new SEO. And it’s a much harder standard to meet.

    What Information Density Actually Means

    Information density is the ratio of useful, specific, actionable knowledge to total words published. A 2,000-word article that contains 200 words of actual substance and 1,800 words of padding has low information density regardless of how well it ranks.

    High information density looks like: specific facts, precise terminology, named entities, concrete examples, actual numbers, documented processes, and claims that a reader couldn’t easily find anywhere else. Every sentence either advances the reader’s understanding or it doesn’t belong.
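A crude sketch of the idea in code. The ratio itself is simple arithmetic (the 2,000-word article with 200 substance words above scores 0.1); deciding what counts as substance is editorial judgment, so the `looks_specific` heuristic below is only a toy stand-in, not a real measure.

```python
def information_density(substance_words: int, total_words: int) -> float:
    """Density as defined above: useful words over total words published."""
    return substance_words / total_words


def looks_specific(sentence: str) -> bool:
    """Toy heuristic: a sentence with a number or a capitalized name past
    the first word at least *might* carry a specific, checkable claim."""
    s = sentence.strip()
    has_number = any(ch.isdigit() for ch in s)
    has_name = any(ch.isupper() for ch in s[1:])
    return has_number or has_name
```

Running real editorial judgment through a function like this would be absurd; the value is only in making the ratio concrete enough to argue about.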

    This isn’t a new editorial standard. Good writers have always known it. What’s new is that AI makes it economically measurable in a way it never was before.

    The $5 Filter

    Here’s a useful test: would someone pay $5 a month to pipe your content feed into their AI assistant?

    Not to read it themselves — to have their AI draw from it continuously as a trusted source of information in your domain.

    If the answer is no, it’s worth asking why. Usually it’s one of three things: the content is too generic (nothing you’re saying is unavailable elsewhere), too thin (not enough specific knowledge per article), or too inconsistent (some pieces are excellent and most are filler).

    Each of those is fixable. But they require a different editorial process than the one that optimizes for traffic volume.

    How AI Evaluates Content Differently Than Humans

    A human reading an article will forgive thin sections if the headline was interesting or the introduction was engaging. They’re reading for a feeling as much as for information.

    An AI pulling from a content feed is doing something closer to extraction. It’s looking for claims it can use, facts it can cite, frameworks it can apply. Filler paragraphs don’t hurt it — they just don’t help. But if a source consistently produces content with low extraction value, AI systems learn to weight it less.

    The publications and creators that win in an AI-mediated information environment are the ones where every piece contains something genuinely worth extracting. That’s a different editorial culture than “publish frequently and optimize for keywords.”

    The Practical Shift

    Publishing fewer pieces with higher density outperforms publishing more pieces with lower density in an AI-native content environment. This runs counter to the volume-first content playbook that dominated the SEO era.

    The shift in practice looks like: more reporting, less summarizing. More specific numbers, fewer generalizations. More named examples, fewer abstract claims. More documented methodology, less opinion dressed as expertise.

    None of this is complicated. It’s just a higher standard — one that the AI consumption layer is now enforcing whether you’re ready for it or not.

    What is information density in content?

    Information density is the ratio of useful, specific, actionable knowledge to total words published. High-density content contains specific facts, precise terminology, concrete examples, and claims a reader couldn’t easily find elsewhere. Low-density content is padded with filler that doesn’t advance understanding.

    Why does information density matter more now?

    AI systems consume content differently than humans. They extract claims, facts, and frameworks — and learn to weight sources by how reliably useful those extractions are. High-density sources get weighted higher; low-density sources get ignored regardless of traffic volume.

    How do you increase information density?

    More reporting, less summarizing. Specific numbers instead of generalizations. Named examples instead of abstract claims. Documented methodology instead of opinion. Every sentence should either advance the reader’s understanding or be cut.

    Is publishing less content the right strategy?

    In an AI-native content environment, fewer high-density pieces outperform more low-density pieces. Volume-first strategies optimized for keyword traffic are increasingly misaligned with how AI systems evaluate and weight content sources.

  • Your Expertise Is an API Waiting to Be Built

    Every person with genuine expertise is sitting on something AI systems desperately want and largely cannot find: accurate, specific, hard-won knowledge about how things actually work in the real world.

    The problem isn’t that the knowledge doesn’t exist. It’s that it hasn’t been packaged in a form that machines can consume.

    That gap — between what you know and what AI can access — is a business opportunity. And the people who figure out how to close it first are building something that didn’t exist five years ago: a knowledge API.

    What an API Actually Is (For Non-Developers)

    An API is just a structured way for one system to ask another system for information. When an AI assistant looks something up, it’s making API calls — hitting endpoints that return data in a predictable format.

    Right now, those endpoints mostly return publicly available internet data. Generic. Often outdated. Frequently wrong about anything that requires local, industry-specific, or human-curated knowledge.

    A knowledge API is different. It’s a structured feed of your specific expertise — your frameworks, your observations, your community’s accumulated intelligence — formatted so AI systems can pull from it directly. Instead of an AI guessing what a restoration contractor in Long Island would know about mold remediation, it calls your endpoint and gets the real answer.

    The Three Types of Knowledge That Have API Value

    Not all knowledge translates equally. The highest-value knowledge APIs share three characteristics:

    Specificity. Generic knowledge is already in the training data. What’s missing is specific knowledge — the kind that only comes from being in a particular place, industry, or community for a long time. A plumber who’s worked exclusively in older Chicago brownstones knows things about cast iron pipe behavior that no AI has ever been trained on. That specificity is the asset.

    Recency. LLMs have knowledge cutoffs. Local news from last week, updated regulations, new product releases, recent market shifts — anything time-sensitive is a gap. If you’re producing accurate, current information in a specific domain, you have something AI systems can’t replicate from their training data.

    Human curation. The internet has enormous quantities of information about most topics. What it lacks is a trustworthy human who has filtered that information, applied judgment, and produced something reliable. Curated knowledge — where a credible person has done the work of separating signal from noise — has a value premium that raw data doesn’t.

    What “Packaging” Your Knowledge Actually Means

    Building a knowledge API doesn’t require writing code. It requires a different editorial discipline.

    The content you publish needs to be information-dense, consistently structured, and specific enough that an AI pulling from it actually gets something it couldn’t get elsewhere. That means writing with facts, not filler. It means naming things precisely. It means being the source of record for your domain, not just a voice in the conversation about it.

    The technical layer — the actual API that exposes this content to AI systems — can be built on top of almost any publishing platform that has a REST API. WordPress already has one. Most major CMS platforms do. The knowledge is the hard part. The plumbing, by comparison, is straightforward.
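For example, WordPress's built-in REST API serves published posts as structured JSON at `/wp-json/wp/v2/posts`. Here is a sketch of flattening that shape into a minimal feed an agent could consume; the field names (`title.rendered`, `link`, `date`) are WordPress's real ones, while the sample post itself is invented for illustration.

```python
import json

# A trimmed example of what GET https://example.com/wp-json/wp/v2/posts
# returns -- real WordPress field names, invented content.
sample_response = json.dumps([
    {
        "id": 101,
        "date": "2025-01-15T09:00:00",
        "link": "https://example.com/mold-remediation-basements/",
        "title": {"rendered": "Mold remediation in older basements"},
        "content": {"rendered": "<p>After coastal flooding, the first 48 hours...</p>"},
    }
])


def to_feed_items(raw: str) -> list[dict]:
    """Flatten the WordPress post shape into the minimal fields a feed needs."""
    return [
        {"title": p["title"]["rendered"], "url": p["link"], "published": p["date"]}
        for p in json.loads(raw)
    ]
```

In practice the raw string would come from an HTTP call to the live endpoint; the transformation step is the whole "plumbing" layer.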

    The Business Model

    The model is simple: charge a subscription for API access. The price point that works for community-tier access is low — $5 to $20 per month — because the value isn’t in any single piece of content. It’s in the continuous, structured feed of reliable, specific information that an AI system can depend on.

    For professional tiers — higher rate limits, webhook delivery when new content publishes, bulk historical pulls — $50 to $200 per month is defensible if the knowledge is genuinely scarce and genuinely reliable.

    The question isn’t whether the technology is complicated enough to charge for. The question is whether the knowledge is scarce enough. If it is, the API is just the delivery mechanism for something people would pay for anyway.

    Where to Start

    The starting point is an honest audit: what do you know that AI systems don’t have reliable access to? Not what you think you could write about — what you actually know, from direct experience, that is specific, current, and human-curated in a way that no scraper has captured.

    That knowledge, systematically published and structured for machine consumption, is your API. You already have the hard part. The rest is packaging.

    What is a knowledge API?

    A knowledge API is a structured feed of specific expertise — industry knowledge, local information, curated intelligence — formatted so AI systems can pull from it directly rather than relying on generic training data.

    Do you need to be a developer to build a knowledge API?

    No. Most publishing platforms already have REST APIs built in. The knowledge is the hard part. The technical layer that exposes it to AI systems can be built on top of existing infrastructure with relatively little engineering work.

    What makes knowledge valuable as an API?

    Specificity, recency, and human curation. Generic, outdated, or unverified information is already in AI training data. What’s missing — and therefore valuable — is specific knowledge from direct experience, current information that postdates training cutoffs, and content that a credible human has curated and verified.

    What should a knowledge API cost?

    Community-tier access typically works at $5–20/month. Professional tiers with higher rate limits and push delivery can command $50–200/month. The price is justified by knowledge scarcity, not technical complexity.

  • Your SEO Work Is Subsidizing Your Google Ads (Here’s the Mechanism)

    There’s a common misconception among local service businesses that SEO and Google Ads are completely separate efforts. Google keeps the organic results and the paid results in separate legal buckets — advertisers can’t pay to influence organic rankings, and organic performance doesn’t directly move ad spend.

    But that’s not the full picture. There’s a mechanism called Quality Score, and it sits squarely at the intersection of SEO work and what you actually pay per click. Understanding it changes how you think about both investments.

    What Quality Score Is and Why It Controls Your Ad Costs

    Every time your Google ad competes in an auction, Google calculates an Ad Rank for your ad. Ad Rank determines where your ad appears and how much you pay. The formula is roughly: Ad Rank = Your Bid × Quality Score.

    Quality Score is rated on a scale of 1 to 10 and is built from three components:

    • Expected click-through rate — how likely people are to click your ad based on historical performance
    • Ad relevance — how closely your ad matches the intent behind the search
    • Landing page experience — how relevant, useful, and fast your landing page is for people who click

    The cost impact of this score is not subtle. A Quality Score of 10 earns a 50% discount on your cost per click compared to the average score of 5. A Quality Score of 1 costs 400% more per click than that same average. That means two businesses bidding the same amount on the same keyword can pay wildly different prices — entirely based on the quality of their pages and ads.

    Where SEO Directly Feeds Quality Score

    The landing page experience component is where SEO work and ad costs converge. Google evaluates your landing page for the same things it evaluates any page for organic ranking: content relevance, page speed, mobile usability, and how well the page answers the intent behind the search.

    Pages that rank well organically tend to score higher as ad landing pages — not coincidentally, but because the underlying signals are the same. A fast, well-structured, keyword-relevant page that Google trusts enough to rank organically is also a page Google rates highly for landing page experience in the ad auction.

    The inverse is also true. If your landing page is slow, thin, or mismatched to the search intent of the keyword you’re bidding on, your Quality Score suffers — and you pay more for every click, regardless of your bid.

    What This Looks Like in Real Numbers

    Consider two plumbers bidding $3.00 on “emergency plumber near me.”

    Plumber A has a well-optimized landing page — fast load time, clear service description, strong reviews visible on the page, location-specific content. Quality Score: 8. Their effective CPC after Google’s discount: roughly $1.89.

    Plumber B has a slow homepage with generic content and no location-specific information. Quality Score: 3. Their effective CPC with Google’s penalty: roughly $5.00 — and their ad may not even show as often.

    Same keyword. Same bid. One is paying more than 2.5x as much per click, and getting worse placement to boot.
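The arithmetic in this example can be sketched in a few lines. The multiplier table below is an assumption built from the figures quoted above (the 50% discount at a score of 10, the 400% penalty at 1, average at 5) plus widely reported industry estimates for the scores in between; these are not official Google numbers.

```python
# Approximate CPC multipliers by Quality Score: 5 is average,
# 10 is half price, 1 is five times the average.
CPC_MULTIPLIER = {
    10: 0.50, 9: 0.56, 8: 0.625, 7: 0.71, 6: 0.83,
    5: 1.00, 4: 1.25, 3: 1.67, 2: 2.50, 1: 5.00,
}


def effective_cpc(bid: float, quality_score: int) -> float:
    """Estimated cost per click after the Quality Score adjustment."""
    return bid * CPC_MULTIPLIER[quality_score]


# The two plumbers above, both bidding $3.00:
plumber_a = effective_cpc(3.00, 8)  # close to the rough $1.89 above
plumber_b = effective_cpc(3.00, 3)  # close to the rough $5.00 above
```

Swapping in your own bids and scores makes the "bad website tax" visible as a number rather than an abstraction.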

    Google Business Profile: The Local Layer

    For local service businesses, Google Business Profile adds another dimension. GBP doesn’t directly lower your Search Ad costs — but it governs your visibility in the Local Pack and Google Maps, which appear above or alongside paid results for most local searches.

    A strong, active GBP with recent reviews, accurate categories, and consistent NAP information (name, address, phone number matching your website) reinforces Google’s confidence in your business as a legitimate local entity. That confidence flows into how Google evaluates your overall web presence — which feeds back into the quality signals that affect your ad performance.

    More practically: a business with strong local organic visibility and a dominant Local Pack presence often needs to bid less aggressively on branded and local terms because they’re already capturing clicks organically. The paid budget stretches further because it’s not doing all the work alone.

    The Practical Implication for Local Service Businesses

    If you’re running Google Ads and your SEO is weak, you are paying a penalty on every click — every day, invisibly, without any line item on your invoice that says “bad website tax.” It just shows up as a higher CPC and a lower return on ad spend.

    Conversely, every dollar spent improving your landing pages — making them faster, more relevant, more locally specific, better structured — is a dollar that reduces your ad costs going forward. SEO investment isn’t just playing the long organic game. It’s actively subsidizing your paid performance in the near term through Quality Score.

    For local service businesses running Google Ads, the highest-leverage move is often not increasing ad spend — it’s improving the pages the ads point to. The bid savings alone frequently exceed the cost of the optimization work.

    Three Things to Audit Right Now

    1. Check your Quality Scores. In Google Ads, go to Campaigns → Keywords and add the Quality Score column. Any keyword at 5 or below is costing you extra money on every click. Identify the worst offenders.
    2. Match landing pages to ad intent. Every ad group should point to a page that directly matches what the ad promises. Sending traffic to your homepage from a specific service keyword is one of the most common Quality Score killers.
    3. Audit page speed on mobile. Google’s landing page experience evaluation weights mobile performance heavily. A page that loads in 4+ seconds on mobile is dragging your Quality Score down regardless of how good the content is.

    Does SEO directly affect Google Ads performance?

    Not directly through rankings, but yes through Quality Score. The landing page experience component of Quality Score rewards the same things SEO rewards — fast, relevant, well-structured pages. Pages that rank well organically tend to score higher as ad landing pages, which lowers your cost per click.

    What is Quality Score and why does it matter?

    Quality Score is Google’s 1-10 rating of your ad’s expected click-through rate, ad relevance, and landing page experience. It directly affects how much you pay per click — a score of 10 earns a 50% CPC discount, while a score of 1 costs 400% more than average. Two businesses with the same bid can pay drastically different prices based on Quality Score alone.

    Does Google Business Profile affect Google Ads costs?

    Not directly for standard Search Ads. But a strong GBP builds local organic visibility and entity trust that reinforces the quality signals Google uses to evaluate your overall web presence. For Local Search Ads specifically, GBP data is used directly for ad placement in the Local Pack.

    What’s the fastest way to improve Quality Score for a local service business?

    Match your landing pages to the specific intent of each ad group — don’t send all traffic to your homepage. Improve mobile page speed. Add location-specific content that matches what people in your service area are searching for. These three changes address all three Quality Score components simultaneously.

    Is it better to increase ad budget or improve landing pages?

    For most local service businesses with Quality Scores below 7, improving landing pages delivers better ROI than increasing budget. Every Quality Score point improvement reduces your CPC, meaning the same budget buys more clicks — and those clicks convert better because the page is more relevant.

  • You’re Already Creating Content. You’re Just Not Capturing It.

    My partner Stefani hit record on her phone during a conversation we were having over coffee. She wasn’t writing a blog post. She wasn’t preparing a presentation. She was just thinking out loud about a client situation — how to explain a complex system to someone who needed it simple — and she wanted to get the words down before they disappeared.

    She emailed me the transcript that afternoon.

    By end of day, that conversation had become six published articles, six scheduled LinkedIn posts, and a set of knowledge nodes logged into our operating system — each one capturing a distinct idea that had surfaced naturally in a ten-minute exchange between two people thinking out loud.

    The ingredient was a voice memo. The process took a conversation that was already happening and made sure it didn’t disappear.

    The Problem Isn’t That You Don’t Have Enough to Say

    Most business owners I talk to feel like they don’t create enough content. They know they should be publishing more, sharing more, building more visibility. But when they sit down to write something, it feels hard. The blank page. The pressure to make it good. The time it takes.

    Here’s what I’ve come to believe: the problem isn’t output. The problem is capture.

    You are already creating content constantly. Every client conversation where you explain something clearly. Every time you talk through a decision with a partner or a team member. Every frustrated observation you make in the car on the way home from a job site. Every question a prospect asks that you answer so well they lean forward in their chair.

    That’s all content. That’s all knowledge. And almost all of it disappears the moment the conversation ends.

    Why Talking Is the Natural Input Layer

    The reason most note-taking systems fail is that note-taking interrupts thinking. The moment you stop to write something down, you break the flow of the idea. So people don’t do it. The thinking happens, it’s good, and then it’s gone.

    Talking doesn’t interrupt thinking. Talking is thinking, for most people. It’s how ideas get pressure-tested, refined, and articulated. The best version of an idea is often the one that comes out in a good conversation — not the one that gets written in isolation later.

    Which means if you can capture the conversation, you’ve captured the thinking at its best. Not a summary. Not notes. The actual thought, in your actual voice, as it was happening.

    The Reframe That Changes Everything

    You are not creating content. You are capturing what you already made.

    That reframe matters because it removes the performance pressure. You don’t have to be clever or polished or prepared. You just have to be willing to record the conversations that are already happening — the ones where you’re explaining your craft, thinking through a problem, or working something out with someone who pushes back in useful ways.

    The transcript of that conversation is the raw ingredient. Everything that comes after — the articles, the posts, the internal documentation — is distillation. Pulling out what’s there and giving it a form that other people can use.

    What This Looks Like in Practice

    The simplest version of this system has three parts:

    1. Record conversations worth keeping. Not every conversation — just the ones where something real is being worked out. Client calls where you explain something clearly. Partner conversations where an idea clicks. Voice memos when you’re driving and something occurs to you. The bar is low: if it felt like a good thought, it’s worth capturing.
    2. Get the transcript. Most phones transcribe automatically now. Email it to yourself. Drop it into a folder. The transcript doesn’t need to be clean — raw, stream-of-consciousness transcripts often contain the best material precisely because the thinking wasn’t performed for an audience.
    3. Distill it. This is where the knowledge nodes emerge. Read through the transcript and ask: what are the distinct ideas here? Not the whole conversation — the discrete, transferable concepts that could stand on their own. Name them. Write a short version of each. Now you have content, internal documentation, and a record of how your thinking has developed.
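    If you like to keep the distillation step concrete, it can be as simple as a text file and a naming convention. Here's a minimal sketch of a knowledge-node log: the `NODE:` marker convention and the field names are illustrative assumptions, not a standard — the point is only that a "node" is a named idea with a short summary and a source.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeNode:
        """One discrete, transferable idea pulled from a transcript."""
        name: str                 # short handle for the idea
        summary: str              # the "short version" of the idea
        source: str               # which conversation it came from
        tags: list[str] = field(default_factory=list)

    def distill(transcript_path: str, source_label: str) -> list[KnowledgeNode]:
        """Walk a raw transcript and collect manually marked ideas.

        Convention (an assumption, not a standard): while skimming,
        prefix a keeper line with 'NODE:' followed by 'name | summary'.
        """
        nodes = []
        with open(transcript_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line.startswith("NODE:"):
                    name, _, summary = line[5:].partition("|")
                    nodes.append(KnowledgeNode(
                        name=name.strip(),
                        summary=summary.strip(),
                        source=source_label,
                    ))
        return nodes
    ```

    The tooling is deliberately boring. The habit — skim, mark, name — is the system; the file format barely matters.
    
    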

    The Compound Effect Over Time

    The part that most people underestimate is what this builds over time.

    Every distilled conversation adds to a growing body of captured knowledge. Your frameworks. Your methodologies. The specific language you’ve developed for explaining what you do. The patterns you’ve noticed across clients. The hard-won lessons from mistakes.

    Most business owners carry all of this in their heads. It lives and dies with them. It can’t be trained on, delegated from, or built upon because it was never written down. It’s invisible expertise — genuinely valuable, completely uncaptured.

    The voice-first capture habit changes that. Slowly, conversation by conversation, your knowledge base grows. Not because you sat down to build a knowledge base — but because you stopped letting good thinking disappear.

    The Lowest Friction Version

    You don’t need a system. You need a habit with almost no friction:

    Before a conversation you expect to be generative — a client call, a strategy session, a working lunch — hit record. Use your phone’s native voice memo app, or any transcription tool you already have. Tell the other person if it feels right. Most people don’t mind, and some are flattered.

    After, spend five minutes skimming the transcript. Pull out anything that felt sharp. Drop it somewhere — a note, an email to yourself, a folder. That’s it. The distillation can happen later, in batches, when you have help or time.

    The bar for what counts as worth capturing is lower than you think. An offhand explanation that clicked. A way of framing a problem that was new. A question you answered well. These are the raw materials of everything — your content, your training materials, your positioning, your pitch. They’re already in the conversations you’re already having.

    You’re just not catching them yet.

    What is voice-first knowledge capture?

    Voice-first knowledge capture is the practice of recording conversations — client calls, partner discussions, voice memos — and using the transcripts as the raw material for content, documentation, and internal knowledge. It treats talking as the natural input layer for knowledge creation.

    Why is a voice memo better than taking notes?

    Note-taking interrupts thinking. Talking doesn’t. The best version of an idea often surfaces in conversation — when you’re explaining something to someone, being pushed back on, or working through a problem in real time. A transcript captures that thinking at its peak, in your actual voice.

    What do you do with a conversation transcript?

    Read through it and pull out the discrete, transferable ideas — the knowledge nodes. Each one can become a piece of content, a section of internal documentation, or an entry in a knowledge base. The transcript is the raw ingredient; distillation is the process of giving those ideas a usable form.

    How much time does this take?

    The capture itself takes no additional time — you’re recording conversations that are already happening. The distillation can be done in batches and takes as little as five minutes per conversation for a first pass. The system compounds over time without requiring significant ongoing effort.

    Do you need special tools for this?

    No. A phone’s native voice memo app and any transcription tool (many are built into phones and email clients now) are sufficient to start. The system doesn’t require new software — it requires a new habit around the conversations you’re already having.

  • Notion-Deep, Surface-Simple: How to Build Knowledge Systems That Actually Get Used

    There’s a useful architecture for how to hold complex knowledge inside an organization while keeping it accessible to the people who need to act on it.

    Call it Notion-Deep, Surface-Simple: build the internal knowledge structure as deep as you want, then surface it in the voice and format of whoever needs to use it.

    The Core Idea

    Most knowledge management systems fail in one of two directions.

    The first failure: they optimize for depth and comprehensiveness at the expense of usability. The system knows everything, but nobody can navigate it. It becomes the internal equivalent of a technical manual that everyone agrees is accurate and nobody reads.

    The second failure: they optimize for simplicity at the expense of utility. The output is clean and accessible, but the underlying knowledge is shallow. When edge cases show up — and they always do — the system has no answer.

    Notion-Deep, Surface-Simple resolves this by treating depth and accessibility as separate layers with separate jobs, rather than as tradeoffs against each other.

    What the Deep Layer Does

    The deep layer — think of it as the Notion workspace, the knowledge base, the internal documentation — is where you hold everything. It doesn’t compress. It doesn’t simplify. It doesn’t optimize for any particular audience.

    This layer holds the full process documentation. The exception cases. The history of why decisions were made. The technical architecture. The client-specific context that only your team knows. The frameworks that took years to develop. All of it goes here, as deep as it needs to go.

    The standard for this layer is completeness and retrievability — not readability for a general audience.

    What the Surface Layer Does

    The surface layer is not a simplified version of the deep layer. It’s a translation of it — rendered in the specific voice, vocabulary, and complexity level of whoever needs to act on it.

    The translation is the work. You pull from the deep layer exactly what’s needed for a specific person to make a specific decision or take a specific action. You render it in their language. You strip everything else.

    A prospect presentation pulls from the deep layer but speaks in the prospect’s language. A client onboarding document pulls from the deep layer but speaks in operational terms the client’s team actually uses. A quick brief for a new team member pulls from the deep layer but surfaces only the context they need to start.

    The depth doesn’t disappear. It’s available when the conversation earns it. But the default output is calibrated, not comprehensive.

    Why This Architecture Works

    When depth and accessibility are treated as tradeoffs, you’re always sacrificing one for the other. Every time you simplify, you lose fidelity. Every time you add depth, you lose accessibility.

    When they’re treated as separate layers, neither has to compromise. The deep layer stays complete. The surface layer stays accessible. The intelligence is in the translation — knowing what to pull, what to leave in, and how to render it for who’s in front of you.

    This also means the system scales. As the deep layer grows, the surface layer doesn’t have to get more complex. It just draws from a richer source. The translation skill remains constant even as the underlying knowledge compounds.

    How to Build This in Practice

    The starting point is a clear separation of intent. When you’re adding something to your knowledge base — documentation, process notes, client history, research — you’re feeding the deep layer. Don’t self-censor for a hypothetical reader. Put in everything that’s true and useful.

    When you’re building an output — a proposal, a client update, a training document, a content piece — you’re working the surface layer. Start from the deep layer as your source. Then translate deliberately: who is this for, what do they need to know, and in what voice will it land?

    Over time, the habit becomes automatic. The deep layer becomes the intelligence layer. The surface layer becomes the communication layer. And the translation between them — which is where most of the real thinking happens — becomes the core competency.
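    The two-layer split is easy to see in miniature. Here's a sketch — the entry fields, audiences, and wording are illustrative assumptions, not a prescribed schema. The deep entry holds everything; each surface function pulls only what its reader needs and renders it in their register.

    ```python
    # Deep layer: one complete record. Never simplified, never audience-tuned.
    deep_entry = {
        "topic": "client onboarding",
        "full_process": ["intake form", "kickoff call", "access audit", "90-day plan"],
        "exceptions": "Enterprise clients skip the intake form; legal reviews access first.",
        "history": "Kickoff moved before the audit in 2023 after two stalled starts.",
    }

    # Surface layer: one translation per audience.
    def surface_for_client(entry: dict) -> str:
        # The client needs the steps, in plain sequence. No history, no exceptions.
        steps = " then ".join(entry["full_process"])
        return f"Here's what onboarding looks like on your side: {steps}."

    def surface_for_new_hire(entry: dict) -> str:
        # The new hire needs the steps plus the edge cases they'll actually hit.
        return (
            f"Onboarding runs: {', '.join(entry['full_process'])}. "
            f"Watch for exceptions: {entry['exceptions']}"
        )
    ```

    Notice that neither surface function compresses the deep entry — each selects from it. Adding a third audience means adding a third translation, not restructuring the source.
    
    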

    What does Notion-Deep, Surface-Simple mean?

    It’s a knowledge architecture principle: build your internal knowledge base as deep and comprehensive as you need, then surface outputs from it in the specific voice and format of whoever needs to act on the information. Depth and accessibility are separate layers, not tradeoffs.

    What’s the difference between simplifying and translating?

    Simplifying removes information. Translating renders the same information in a different register. The goal is translation — pulling the right pieces from the deep layer and expressing them in the receiver’s language, without losing the underlying substance.

    Why do most knowledge systems fail?

    They optimize for either depth or accessibility, treating them as competing priorities. The result is either a comprehensive system nobody navigates or an accessible system that can’t handle edge cases.

    How does this scale as the knowledge base grows?

    As the deep layer grows richer, the surface layer draws from a better source without becoming more complex itself. The translation skill stays constant even as the underlying knowledge compounds over time.

  • Input/Output Symmetry: Return the Answer in the Voice It Was Asked

    There is a simple principle that improves almost every type of professional communication, and it costs nothing to implement.

    Call it input/output symmetry: whatever voice someone uses to ask a question, that is the voice you return the answer in.

    What Input/Output Symmetry Means

    When someone asks you something, they give you a signal. The signal is not just the question itself — it’s the way they asked it. The vocabulary they chose. The complexity level they assumed. The tone they used. The length of their message.

    Input/output symmetry says: honor that signal in your response.

    If someone sends you a two-sentence question in plain language, a five-paragraph technical response is a mismatch. Not because five paragraphs is wrong — but because the complexity of your output dramatically exceeds the complexity of their input. That asymmetry creates friction. It says, implicitly, that you didn’t fully receive what they sent.

    If someone sends you a detailed, technically sophisticated question that shows they’ve done their homework, a shallow surface-level answer is an equal mismatch. It signals that you underestimated them.

    Symmetry is the standard. Match the register. Match the depth. Match the voice.

    This Isn’t Just a Sales Principle

    Input/output symmetry gets talked about most often in sales contexts — mirror the prospect, match their energy, build rapport through language alignment. All of that is real.

    But the principle applies equally in operations, in content, and in internal communication.

    In operations: When a frontline employee is being trained on a new process, the training document should be written in the language the frontline employee uses — not the language of the system architect who designed the process. The person executing a step in a hospital intake doesn’t need to know it’s called a “multi-step EHR synchronization workflow.” They need to know: go to that computer, open that folder, put it in the file.

    In content: When you’re writing for a specific audience, the output should match the complexity and vocabulary of how that audience talks about the topic — not how you talk about it internally. This is the difference between content that feels written for the reader and content that feels written for the writer’s own credibility.

    In client communication: When a client asks a simple question, give a simple answer. When a client asks a complex question, give a complex answer. The mistake is having only one mode and applying it to every interaction regardless of input signal.

    The Common Failure Mode

    The most common failure of input/output symmetry is output that always exceeds input complexity. This is the “I give them too much back” pattern.

    It comes from a good place — you want to be thorough, comprehensive, and demonstrably expert. But when the input was simple and the output is exhaustive, the net effect is not “this person is impressive.” The net effect is “this person doesn’t listen.”

    The fix is not to give less. The fix is to actually receive the input — the full signal, including how it was asked — before you respond. Let that signal dictate the register of your output.

    A Practical Test

    Before sending any significant response — email, proposal, pitch, explanation — read what was sent to you one more time. Ask yourself: does my response match the register, length, and vocabulary of what they sent? If the answer is no, that’s your edit.

    You don’t have to simplify the underlying work. You have to calibrate the delivery. The sophistication is still there. The architecture is still there. It’s just rendered in a form that matches the receiver.

    What is input/output symmetry?

    Input/output symmetry is the principle of returning an answer in the same voice, register, and complexity level as the question that was asked. The way someone asks gives you a signal about how they want to receive information — the principle says to honor that signal.

    Is this just about sales communication?

    No. Input/output symmetry applies equally to operations, content, training documentation, and internal team communication — anywhere one person is conveying information to another and the receiver’s context matters.

    What’s the most common failure of this principle?

    Output that consistently exceeds input complexity. Responding to a simple two-sentence question with five paragraphs of technical detail. It signals that you didn’t fully receive what was sent.

    How do you apply this in practice?

    Before responding, re-read what was sent. Ask: does my response match the register, length, and vocabulary of what they sent? If not, calibrate before you send.

  • Universal Language vs. Company Language: Two Vocabulary Layers Every Communicator Needs

    There are two distinct vocabulary layers that govern how people communicate inside any industry, and most content and communication work conflates them.

    Understanding the difference — and building both deliberately — is one of the highest-leverage things you can do to make your communication feel native rather than imported.

    Layer One: Universal Industry Language

    Universal industry language is the shared vocabulary that travels consistently across every company in a vertical. It’s the terminology that practitioners use without defining it, because everyone who works in that field already knows what it means.

    In healthcare: the “face sheet” is the document that summarizes a patient’s information at the top of a chart. Every hospital calls it that. You don’t explain it — you just use it.

    In property restoration: “Resto” and “Dehu” are shorthand for specific categories of work. In retail: MOD means manager on duty. In logistics: ETA, FTL, LTL are assumed knowledge.

    This layer is learnable. It lives in trade publications, certification materials, job descriptions, and any content written by and for industry practitioners. Build a glossary of universal industry terms before you write a word of content for a new vertical, and your work immediately reads as insider rather than outsider.

    Layer Two: Company Language

    Company language is the internal dialect that develops within a specific organization. It doesn’t transfer across companies, even within the same industry. It’s shaped by team culture, internal tools, historical decisions, and sometimes just the way one influential person at the company talked about something early on.

    This is the vocabulary that shows up in internal Slack channels, in how a team describes their own workflow, in the nicknames that get attached to products or processes or recurring situations. It often never makes it into any official documentation. You learn it by listening, by reading the company’s own content carefully, and sometimes by just asking.

    A prospect might refer to their CRM as “the system.” Their onboarding process might be internally called something that has nothing to do with what it’s officially named. Their main product line might have an internal nickname that their sales team uses but their marketing team doesn’t.

    When you use their language back at them, the effect is immediate. It signals that you paid attention. It creates a sense that you are already on their team, not pitching from outside it.

    Why Most Communication Work Stops at Layer One

    Layer one is the obvious layer. You can research it. You can build a glossary from public sources. It’s systematic and scalable.

    Layer two requires proximity. It requires listening before speaking. It requires time with the actual humans at the company, not just their external-facing content. Most content and outreach workflows don’t have a step for this — not because it isn’t valuable, but because it’s harder to systematize.

    The opportunity is there precisely because most people skip it.

    How to Build Both Layers Before You Write

    For layer one: read trade publications, certification materials, and forum conversations in the target vertical. Flag every term used without definition. Build a reference glossary before any content is written.

    For layer two: read the company’s blog posts, case studies, job postings, and leadership team’s LinkedIn content. Look for language that’s idiosyncratic — terms or framings that don’t appear in competitors’ content. If you have access to the prospect directly, listen carefully in early conversations for words they use consistently. Use those words back.
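    You can rough out the layer-two hunt mechanically before the careful reading. The sketch below flags words that show up often in a company's own content but rarely in a general industry corpus — candidates for company language. The frequency-ratio heuristic and the word-length cutoff are assumptions for illustration; this surfaces candidates, it doesn't replace listening.

    ```python
    from collections import Counter
    import re

    def candidate_company_terms(company_text: str,
                                industry_text: str,
                                top_n: int = 10) -> list[str]:
        """Rank words common in a company's content but rare in
        general industry content. A crude frequency ratio, not NLP.
        """
        def freqs(text: str) -> Counter:
            # Lowercase word tokens of 3+ letters; crude but enough for a first pass.
            return Counter(re.findall(r"[a-z]{3,}", text.lower()))

        company, industry = freqs(company_text), freqs(industry_text)
        scored = {
            word: count / (industry.get(word, 0) + 1)  # +1 smoothing for unseen words
            for word, count in company.items()
        }
        return sorted(scored, key=scored.get, reverse=True)[:top_n]
    ```

    Feed it a company's blog posts against a pile of competitor content, and idiosyncratic terms float to the top of the list. Whether a flagged word is genuinely company language or just noise is still a human judgment.
    
    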

    Together, these two layers give you something most communicators don’t have: a vocabulary that feels native at both the industry level and the individual company level. That combination creates the feeling — even if the prospect can’t articulate why — that you understand them specifically, not just their category.

    What is universal industry language?

    Universal industry language is shared terminology that travels consistently across all companies in a vertical — terms every practitioner knows without needing a definition. Examples include “face sheet” in healthcare or “Resto” in restoration.

    What is company language?

    Company language is the internal dialect that develops within a specific organization — nicknames, shorthand, and internal framing that doesn’t transfer across companies, even in the same industry.

    Why does using a company’s own language matter?

    When you use a prospect’s or client’s specific language back at them, it signals that you listened before you spoke. It creates the feeling that you’re already on their team rather than pitching from outside it.

    How do you research company-specific language?

    Read their blog, case studies, job postings, and leadership team’s LinkedIn content. Look for terms that appear consistently but don’t show up in competitors’ content. In direct conversations, listen for words they use repeatedly and use those words back.

  • The Complexity Dial: Finding the Register Where Expertise Meets Accessibility

    There’s a specific tension every expert faces when communicating their work. It’s not about whether you know enough. It’s about where you set the dial.

    Go too technical: the work isn’t approachable. The prospect can’t see themselves using it. The client feels like they need a translator just to follow the conversation. They disengage — not because they’re not smart, but because the cost of staying engaged is too high.

    Go too simple: the work doesn’t appear valuable. You’ve hidden the sophistication that earns the premium. The prospect sees a commodity. They wonder if they could just do this themselves.

    The complexity dial is real. And finding the right setting isn’t instinct — it’s a learnable skill.

    Why the Default Is Always Too Technical

    Experts default toward complexity for a reason that feels rational: you want people to understand what you built. You’ve invested in the architecture, the system, the methodology. You want credit for it.

    The problem is that credit for complexity doesn’t come from complexity itself. It comes from the outcome the complexity produces. And outcomes are most legible when they’re explained simply.

    When someone asks you what you do, they are not asking for the architecture. They are asking for the result. “I build AI-powered content systems that rank on Google” is more credible to a non-technical buyer than a description of the pipeline that produces it — even though the pipeline is impressive, and even though you should absolutely understand and be able to speak to it when the moment calls for it.

    How to Find the Right Setting

    The right complexity setting is not a fixed point. It moves based on who you’re talking to, what stage of the relationship you’re in, and what decision you’re trying to help them make.

    A useful calibration question: what is the one thing this person needs to understand to move forward?

    Not the ten things. Not everything you know. The one thing. That’s your anchor. Build your explanation from that point outward, adding complexity only as far as is necessary to make that one thing credible and actionable.

    Another useful signal: listen for when someone stops asking follow-up questions. In a live conversation, the questions stop either because they understand or because they’ve given up. Your job is to read which one it is. Silence after complexity is usually disengagement, not comprehension.

    The Two-Version Rule

    For anything you communicate regularly — your services, your process, your results — it’s worth building two versions deliberately:

    The technical version is for peers, for audits, for documentation, for conversations where the other person has signaled they want to go deep. It doesn’t simplify. It’s accurate and complete.

    The accessible version is for first conversations, for clients who are focused on outcomes, for anyone who hasn’t yet signaled they want the technical version. It doesn’t dumb things down. It leads with the result, earns the trust, and holds the technical detail in reserve.

    The mistake is using only one. The expert who only has the technical version loses approachable audiences. The expert who only has the accessible version never earns sophisticated ones.

    What This Looks Like in Real Work

    A client asks: “What do you actually do for SEO?”

    Technical version answer: “We run a full AEO/GEO content pipeline with schema injection, entity saturation, internal link graph optimization, and structured FAQ blocks targeting featured snippets and AI overview placement.”

    Accessible version answer: “We make sure that when someone searches for what you do, Google shows your site — and shows it in a way that answers their question directly, so they click.”

    Both are accurate. Only one is appropriate for the first conversation with a prospect who runs a restoration company and has never thought about AEO in their life. The technical version comes later — after the trust is built, after they’ve asked to understand more, after the relationship has earned it.

    What is the complexity dial in communication?

    The complexity dial refers to the register of technical depth you use when explaining your work. Too technical and you lose approachability. Too simple and you sacrifice perceived value. The right setting depends on who you’re talking to and what decision they need to make.

    Why do experts default to overly technical communication?

    Experts default toward complexity because they want credit for what they built. But credit comes from the outcome, not the architecture. Outcomes are most legible when explained simply.

    How do you find the right complexity level?

    Ask: what is the one thing this person needs to understand to move forward? Build your explanation from that anchor, adding complexity only as far as necessary to make it credible and actionable.

    Should you always simplify your communication?

    No. The goal is calibration, not permanent simplification. Build both a technical version and an accessible version of your key messages, and deploy each when the audience has signaled which one they need.