The three variables that determine whether a knowledge contribution earns API tokens — novelty, specificity, and density — are the same three variables that determine whether a piece of content compounds or evaporates.
This is not a coincidence. It is the same underlying problem: how do you measure whether a unit of information actually adds something to what already exists?
Most content fails the test. Not because it is badly written, but because it does not clear the delta threshold. It confirms what readers already know, it gestures at specifics without landing them, and it spreads thin across a lot of words. By the metrics of a knowledge contribution scoring system, it would earn near-zero tokens. By the metrics of search and AI systems, it performs accordingly.
Novelty: The Content Delta Problem
In a knowledge token system, novelty is measured as the gap between what the knowledge base contained before a submission and what it contains after. The same logic applies to content. The question is not whether your article covers a topic — it is whether it moves the conversation forward on that topic.
Most content on any given subject is paraphrase. Someone reads the top three ranking articles, recombines the information in a slightly different order, and publishes. The delta is near zero. The knowledge base — the collective body of what is publicly known about the topic — does not change. Neither does the reader’s understanding.
High-novelty content introduces a framework that did not exist before, surfaces a counterintuitive finding, documents a process that has never been written down, or names a pattern that practitioners recognize but no one has articulated. It changes what a reader knows, not just what they have read. That is the delta. That is what scores.
Specificity: The Precision Test
In the knowledge token system, specificity separates high-scoring from low-scoring contributions. A vague answer — “we usually handle it within a few days” — scores low. A precise answer with named processes, real numbers, and identified edge cases scores high.
Content works the same way. “Restoration contractors should document damage thoroughly” is a zero-specificity statement. Every reader already knows this and leaves no smarter than they arrived. “Restoration contractors should photograph structural damage at minimum three angles — wide, mid, and close — and timestamp each image before touching anything, because public adjusters use photo metadata to establish pre-mitigation condition in supplement disputes” is a specific statement. It contains a named process, a reason, and a downstream consequence. A reader learns something they can act on.
Specificity is also the primary differentiator between content that gets cited by AI systems and content that does not. Language models are not looking for topic coverage — they are looking for the most precise, actionable answer to a question. Vague content does not get cited. Specific content does. The knowledge token scoring model and the AI citation model are measuring the same thing.
Density: Signal Per Word
The third variable in knowledge contribution scoring is density — how much usable signal per word. A two-sentence answer that contains a genuinely novel, specific insight outscores a three-paragraph answer full of generalities.
Most content has low density by design. The SEO paradigm of the last decade rewarded length, and writers learned to stretch. Introductory paragraphs that restate the headline. Transitions that summarize what was just said. Conclusions that recap the article. None of this adds signal. It adds word count.
High-density content treats the reader’s attention as the scarce resource it is. Every sentence either introduces new information, sharpens a previous point, or provides a concrete example that makes an abstraction actionable. Nothing restates. Nothing pads. The piece ends when the information ends, not when a word count target is hit.
This is increasingly what AI systems reward as well. Google’s helpful content guidance, AI Overview citation behavior, and Perplexity’s source selection all trend toward density over volume. The piece that says the most useful thing in the fewest words wins. Not the piece that covers the topic most thoroughly in the most words.
Building Content Like a Knowledge Contributor
If you applied knowledge contribution scoring to your content before publishing, what would change?
The pre-publish question becomes: what does a reader know after finishing this that they did not know before? If the answer is “roughly the same things, expressed slightly differently,” the piece fails the novelty test and should not be published in its current form. If the answer is “they now understand specifically how X works, with a concrete example they can apply,” it passes.
The editorial discipline this creates is uncomfortable. It eliminates a lot of content that feels productive to write. Topic coverage for its own sake. Articles that establish presence on a keyword without earning it through actual insight. Content that fills a calendar slot without filling a knowledge gap.
What it produces instead is a smaller body of work with significantly higher per-piece value. Each article functions like a high-scoring contribution: it adds to the collective knowledge base in a measurable way, earns citations from AI systems that are looking for exactly this kind of precise, novel information, and compounds over time because it contains something that was not available before it was written.
The Practical Application
Before writing any piece, run it through the three-variable test:
Novelty check: Search the topic. Read the top five results. Write down one thing your piece will contain that none of them do. If you cannot identify one thing, stop. You do not have a piece yet — you have a summary of existing pieces.
Specificity check: Find every general statement in your outline and ask what the specific version of that statement is. “Contractors should document damage” becomes “contractors should document damage with timestamped photos from three angles before touching anything.” If you cannot make it specific, you do not know it specifically enough to write about it yet.
Density check: After drafting, read every sentence and ask whether it adds new information or restates existing information. Delete everything that restates. If the piece collapses without the restatements, the underlying structure is held together by padding rather than by ideas.
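The three checks above can be sketched as a toy pre-publish gate. This is a minimal illustration, not a real scoring system: the `Draft` fields and thresholds are assumptions standing in for counts you would gather from a manual read of the draft and the top-ranking results.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    novel_claims: int        # claims found in this draft but in none of the top results
    vague_statements: int    # general statements not yet made specific
    restating_sentences: int # sentences that restate rather than add information

def passes_prepublish_test(d: Draft) -> bool:
    """Apply the three checks from the article as hard gates."""
    novelty = d.novel_claims >= 1          # at least one thing the top five results lack
    specificity = d.vague_statements == 0  # every general statement made specific
    density = d.restating_sentences == 0   # nothing survives that merely restates
    return novelty and specificity and density
```

Treating each check as a hard gate rather than a weighted score mirrors the argument: a piece with zero novel claims is a summary, no matter how dense, and padding is deleted rather than averaged away.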
A piece that passes all three tests earns its place. It would score high in a knowledge token system. It will perform accordingly in search, in AI citation, and in the minds of readers who finish it knowing something they did not know before.
That is the only metric that compounds.