Large language models were trained on text. Enormous quantities of text — more than any human could read in thousands of lifetimes. But text is not knowledge. Text is the residue of knowledge that was visible enough, and important enough, for someone to write down and publish somewhere that a training crawler could find it.
The vast majority of what experienced humans actually know was never written down. It was learned by doing, transmitted by watching, refined through failure, and held entirely in the heads of people who couldn’t have articulated it systematically even if they wanted to.
This is the human expertise gap, and it defines where AI currently falls short.
What Tacit Knowledge Actually Is
Tacit knowledge is the kind you can’t easily explain but reliably apply. A master craftsperson knows when something is right by feel before they can measure it. An experienced clinician senses when something is wrong before the test results confirm it. A veteran contractor knows which subcontractors will actually show up on a Tuesday in November just from having worked with them — knowledge that no review site has ever captured accurately.
This knowledge exists at every level of every industry. Most of it has never been written down because the people who hold it are too busy using it to document it, because the incentive to document was never strong enough, or because no one ever asked in a form they could answer systematically.
Why AI Can’t Close This Gap on Its Own
The naive assumption is that AI will eventually capture tacit knowledge by observing enough human behavior — that more data, more modalities, more sensor inputs will eventually replicate what experienced humans know intuitively.
This misunderstands the nature of the gap. Tacit knowledge isn’t just undocumented data. It’s judgment that was built through embodied experience — through having made the wrong call and learned from it, through having seen the same situation hundreds of times in slightly different forms, through having relationships that carry context no outsider can access. These are not data problems. They’re experience problems.
AI can get asymptotically close to replicating some of this. But the closer it gets, the more valuable the verified human source becomes — because the question shifts from “does AI know this at all” to “how do we know the AI’s answer is correct,” and the only reliable answer to that question is “because a human who actually knows verified it.”
The Window That’s Open Right Now
There is a specific window in the development of AI where tacit knowledge held by humans is more valuable than it will ever be again. We’re in it now.
AI systems are capable enough that people trust them with real questions — questions about their health, their legal situation, their business decisions, their trade. But AI systems are not capable enough to be reliably right about the specific, experience-based, local, industry-specific knowledge that those questions often require.
The gap between trust and accuracy is the market. The people who figure out how to systematically capture, package, and distribute their tacit knowledge — in forms that AI systems can consume and cite — are building the content infrastructure for a post-search information environment.
The Human Distillery as a Category
What’s emerging is a new category of knowledge work: the human distillery. A person or organization that takes tacit knowledge held by experienced humans and refines it into something that AI systems can depend on.
This isn’t ghostwriting. It’s not content marketing. It’s not thought leadership in the LinkedIn sense. It’s systematic extraction — the application of a disciplined process to get tacit knowledge out of human heads, give it structure, publish it at density, and make it available to the AI systems that will increasingly mediate how people get answers to important questions.
The people who build this infrastructure now — while the gap is widest and the market is least crowded — are positioning themselves at the supply end of the most important information supply chain of the next decade.
Frequently Asked Questions

What is the human expertise gap in AI?
The gap between what AI systems were trained on (text that was published online) and what experienced humans actually know (tacit knowledge built through embodied experience that was never systematically documented). This gap is structural, not temporary — it won’t close simply by training on more data.
What is tacit knowledge?
Knowledge you reliably apply but can’t easily articulate — the judgment of an experienced practitioner, the pattern recognition of someone who has seen the same situation hundreds of times, the relationship-based intelligence that no review site has ever captured. It’s built through experience, not text.
Why is this a time-sensitive opportunity?
We’re in a specific window where AI systems are trusted enough to be asked important questions but not accurate enough to answer them reliably without human verification. The gap between trust and accuracy is the market. That window won’t stay this wide indefinitely.
What is a human distillery?
A person or organization that systematically extracts tacit knowledge from experienced humans, gives it structure, publishes it at density, and makes it available in forms that AI systems can consume and cite. It’s a new category of knowledge work — distinct from content marketing, ghostwriting, or traditional publishing.