Tacit Knowledge Extraction: Why the Behavior Comes Before the AI System

Tygart Media Strategy
Volume Ⅰ · Issue 04 · Quarterly Position
By Will Tygart
Long-form Position
Practitioner-grade

Every organization has two kinds of knowledge. The first kind is documented: processes, policies, training materials, SOPs. The second kind is tacit: the adjustments people make without thinking, the thresholds they’ve learned from experience, the judgment calls they can execute in seconds but couldn’t explain in a meeting.

The documented knowledge is easy to feed into an AI system. The tacit knowledge is what makes the organization actually work — and it’s almost never in a format that AI can use.

The gap between these two knowledge types is where most enterprise AI implementations fail. Companies feed their AI the documentation and wonder why it can’t give the same answers a 10-year veteran would give. The answer is that the 10-year veteran isn’t running on the documentation. They’re running on the tacit layer — and nobody captured it.

What Tacit Knowledge Extraction Actually Requires

You cannot extract tacit knowledge through forms, surveys, or documentation requests. Tacit knowledge is, by definition, knowledge the holder cannot fully articulate on their own; surfacing it takes a skilled interviewer pulling it out. The behavior that surfaces it is specific: a conversational sequence that descends through four distinct layers.

Layer 1 — Surface protocol: “What’s your process when X happens?” This gets the documented version — what people think they do, what they’d write in an SOP. Necessary baseline but not the target.

Layer 2 — Exception probing: “When do you deviate from that?” This surfaces the adaptive layer — the judgment calls that experience produces. The deviations are where tacit knowledge lives.

Layer 3 — Sensory and somatic: “How do you know it’s that specific problem and not something else?” This is the hardest layer to surface and the most valuable. It captures knowledge that the holder has never verbalized — pattern recognition so ingrained it operates below conscious awareness.

Layer 4 — Counterfactual pressure: “What would break if you weren’t here tomorrow?” This surfaces the knowledge hierarchy — what actually matters versus what’s ritual. Most organizations don’t know which is which until the person with the knowledge leaves.
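The four-layer sequence above can be sketched as a reusable interview guide. The layer names come from this article; the question templates and the `build_guide` helper are illustrative assumptions, not a prescribed tool:

```python
# The four extraction layers as (name, question template) pairs.
# Templates are hypothetical phrasings; only Layer 1 and Layer 3
# vary with the situation being probed.
LAYERS = [
    ("surface_protocol", "What's your process when {situation} happens?"),
    ("exception_probing", "When do you deviate from that process?"),
    ("sensory_somatic", "How do you know it's {situation} and not something else?"),
    ("counterfactual", "What would break if you weren't here tomorrow?"),
]

def build_guide(situation: str) -> list[dict]:
    """Expand the four-layer sequence into concrete questions for one topic."""
    return [
        {"layer": name, "question": template.format(situation=situation)}
        for name, template in LAYERS
    ]

for step in build_guide("a supplier misses a shipment"):
    print(f"[{step['layer']}] {step['question']}")
```

The point of encoding the sequence is discipline: every extraction conversation descends through all four layers in order, rather than stopping at the documented version.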

The Behavior Determines the Tool Stack

Once this extraction behavior is understood, the tool selection for the AI system becomes clear. You need: a way to capture the conversation at high fidelity, a way to convert the transcript into structured knowledge artifacts, a storage layer that preserves the knowledge in a format AI systems can query, and an embedding layer that makes the knowledge semantically searchable.

These are four distinct behaviors served by four distinct tools. The extraction conversation is a human behavior — no tool replaces it. The structuring is where AI earns its keep: running the transcript through multiple models with different attack angles, identifying the tacit signatures embedded in the language, organizing the output into the knowledge concentrate schema. The storage is a database decision. The embedding layer is a vector store.
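The shape of that pipeline can be sketched end to end. Every stage below is a stand-in: in a real system the structuring pass would run multiple models over the transcript, the storage layer would be a database, and the embedding layer a vector store with a semantic model. Here each stage is a toy stdlib implementation so the division of labor between the four tools is visible:

```python
import math
from collections import Counter

def structure(transcript: str) -> list[str]:
    """Stand-in for the AI structuring pass: split the transcript into
    candidate knowledge statements (a real pass would use LLMs)."""
    return [s.strip() for s in transcript.split(".") if s.strip()]

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; a real system uses a semantic model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class KnowledgeStore:
    """Storage plus embedding layer: keeps artifacts queryable by similarity."""
    def __init__(self):
        self.artifacts: list[tuple[str, Counter]] = []

    def add(self, statement: str) -> None:
        self.artifacts.append((statement, embed(statement)))

    def query(self, question: str, k: int = 1) -> list[str]:
        q = embed(question)
        ranked = sorted(self.artifacts, key=lambda a: cosine(q, a[1]), reverse=True)
        return [s for s, _ in ranked[:k]]

store = KnowledgeStore()
transcript = ("If the pump whines, check the intake first. "
              "Never restock on Fridays. Call the vendor before escalating.")
for stmt in structure(transcript):
    store.add(stmt)
print(store.query("what do I check when the pump whines"))
```

Swapping any stage for a production equivalent changes the implementation, not the architecture: capture feeds structuring, structuring feeds storage, storage is made searchable by the embedding layer.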

None of these tool choices could have been made intelligently without first understanding the extraction behavior. The behavior is the constraint that makes the tool selection tractable.

The Minimum Viable Experiment

For any organization that wants to capture its tacit knowledge layer before it walks out the door: four extraction conversations, transcribed and run through a three-model distillation round, produce a knowledge artifact dense enough to answer questions that the documentation cannot. The experiment takes a week and costs almost nothing. The cost of not doing it shows up when the person who holds the knowledge leaves and the organization discovers, for the first time, how much was never written down.
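A distillation round of the kind described above can be sketched as a consensus filter. The three extractor functions here are stand-ins for three different models with different attack angles, and the quorum rule is an illustrative assumption: statements that at least two models agree on survive into the artifact.

```python
from collections import Counter

def distill(transcript: str, extractors, quorum: int = 2) -> list[str]:
    """Run each extractor over the transcript; keep statements that
    reach the agreement quorum across models."""
    votes = Counter()
    for extract in extractors:
        for stmt in set(extract(transcript)):
            votes[stmt] += 1
    return [s for s, n in votes.items() if n >= quorum]

# Illustrative stand-ins for three models with different attack angles.
def model_a(t):  # hunts for hard rules
    return [s.strip() for s in t.split(".") if "never" in s.lower()]

def model_b(t):  # keeps every statement
    return [s.strip() for s in t.split(".") if s.strip()]

def model_c(t):  # keeps only substantive statements
    return [s.strip() for s in t.split(".") if len(s.split()) > 3]

artifact = distill("Never restock on Fridays. Ask Dana first.",
                   [model_a, model_b, model_c])
print(artifact)
```

The quorum is what makes the round a distillation rather than a union: single-model extractions are treated as noise until another model, coming at the transcript from a different angle, confirms them.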

