There’s a simple version of the AI-in-organizations problem, and it’s wrong: you build the system, give it access to the right data, write a thorough system prompt, and it operates in your organizational context. The prompt is the context. The context is the prompt.
This framing is everywhere. It’s also the reason most organizational AI deployments produce work that is technically correct and somehow off.
The context that matters — the context that determines whether a decision lands right, whether a draft feels aligned, whether a flagged opportunity is genuinely actionable — is not stored anywhere. It lives between people.
Every organization operates on a layer of standing assumptions that nobody explicitly maintains and nobody could fully articulate on request. Not values, not principles, not priorities — something below those. The interpretive substrate that makes the documented values mean anything.
When someone joins a team and violates one of these assumptions — proposes the wrong thing in the wrong meeting, pushes a decision that is technically within their authority but somehow not theirs to make, surfaces a priority the organization agreed to de-emphasize without announcing it — everyone feels it. The violator usually doesn’t. The substance was fine. Something else was wrong.
That something else is the context AI systems don’t have.
Documentation can encode explicit knowledge. It cannot encode the community that makes the documentation mean anything.
A system prompt can say “this organization prioritizes speed over perfection.” What it cannot encode is whether that norm has actually been consistent for the last six months, or whether leadership has been quietly walking it back after three bad launches, or whether it applies to customer-facing work but not internal infrastructure, or whether the one person whose approval you need is the one exception to the norm.
The standing assumptions are not stored. They are enacted. They show up in what gets committed to and what sits in the inbox for thirty days.
Watch a team’s queue long enough and you can read the context. Not from the items themselves — from the pattern of what moves and what doesn’t. Stalled items tell you which commitments have real backing and which are aspirational. Rapid movement in one lane tells you where the actual authority is concentrated. The gap between what the organization says it prioritizes and what it actually processes is a map of the standing assumptions it hasn’t named.
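To make that concrete: a minimal sketch of reading movement rather than content. Every name in it is hypothetical; it assumes nothing more than a queue of items, each with a lane label and a last-moved date.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class Item:
    lane: str          # hypothetical grouping: team, priority label, workstream
    last_moved: date   # last time the item changed state at all

def read_the_queue(items: list[Item], today: date) -> dict[str, float]:
    """Median days-since-movement per lane.

    High numbers mark commitments without real backing; low numbers
    mark where the actual authority is concentrated. The lanes are
    the reader's assumption, not a property of the data.
    """
    stalls: dict[str, list[int]] = defaultdict(list)
    for item in items:
        stalls[item.lane].append((today - item.last_moved).days)
    return {lane: median(days) for lane, days in stalls.items()}
```

The metric itself is disposable. The point is that the signal lives in what moves and what doesn’t, not in what the items say about themselves.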
A single operator can solve this. They can read the board, feel the friction, and say: the predicate is wrong. The item needs to be reframed before it moves. They can do this because they hold the context in their own head, accumulated over months, updated daily.
A team cannot do this as easily. The context is distributed. Each person holds part of it. The standing assumptions live in the gaps between what anyone would say individually. Ask the team to write down why something has been stalled for thirty days and you’ll get five different answers, each of which is partially true, none of which is sufficient.
The naive solution is documentation. Write the standing assumptions down. Build a better system prompt. Give the AI more context.
This helps at the margins. It doesn’t solve the problem.
Documentation of standing assumptions produces a different artifact — a curated version of the context, shaped by whoever did the writing, frozen at the moment of writing, immediately in tension with the organizational reality it was supposed to encode. It becomes a reference document. The context moves on. The document does not.
The less naive solution — the one organizations rarely take — is to treat context as an ongoing artifact rather than a static one. Not a document but a practice. Something that gets updated not when someone decides to update it, but when a decision is made that the prior version couldn’t have predicted.
Every time a team makes a decision that would have surprised an outside observer, that decision contains information about the organizational context. The surprise is the data. The question is whether anyone captures it — not as documentation but as signal, living in the same system as the work itself.
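What capture-as-signal could look like, as a sketch: a plain decision log checked against hand-named assumptions. Every identifier here is invented for illustration; nothing in it corresponds to a real system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Assumption:
    text: str                     # the standing assumption, as someone dared to name it
    last_confirmed: datetime
    contradictions: list[str] = field(default_factory=list)

@dataclass
class ContextLog:
    """Hypothetical capture layer: every decision is logged against the
    assumption it touches. A consistent decision confirms the assumption;
    a contradicting one is the signal. The surprise is the data."""
    assumptions: dict[str, Assumption]

    def record_decision(self, decision: str, touches: str, consistent: bool) -> None:
        assumption = self.assumptions[touches]
        if consistent:
            assumption.last_confirmed = datetime.now()   # the assumption still holds
        else:
            assumption.contradictions.append(decision)   # flag it for renegotiation
```

Note what stays human in this sketch: judging whether a decision actually contradicts an assumption. The system only turns that judgment into a recordable event instead of a hallway observation.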
This is not how most organizational AI deployments are built. They treat context as given — encoded once, referenced forward. The system prompt goes stale six weeks in and nobody notices because the outputs are still technically correct. The work product is fine. The alignment is drifting.
A system that can only read your context is a tool. A system that reads the gaps between your documented context and your actual decisions is starting to understand something harder to name.
The implication isn’t that AI systems need more access. More access to documented context doesn’t help if the relevant context isn’t documented. The implication is that organizational deployment requires a different architecture: one where the context layer is treated as a first-class input that needs active maintenance, and where the signal for updating it is not a calendar prompt but a decision that contradicts the prior version.
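In code-shaped terms, and only as a sketch under the stated assumptions, the difference looks roughly like this: the context is versioned, and the only thing permitted to advance the version is a decision the current version could not have predicted. No name below refers to any real API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextVersion:
    version: int
    assumptions: tuple[str, ...]   # the named standing assumptions, frozen per version

def next_version(current: ContextVersion,
                 decision: str,
                 predicted_by_current: bool) -> ContextVersion:
    """There is no calendar-driven refresh here: the version advances
    only when a real decision contradicts the current one. Deciding
    'predicted_by_current' is human work; the architecture just makes
    the update path exist."""
    if predicted_by_current:
        return current   # the context still holds; no churn
    return ContextVersion(
        version=current.version + 1,
        assumptions=current.assumptions + (f"revised after: {decision}",),
    )
```

A calendar-driven refresh would be easier to build. It would also reduce context maintenance to the documentation problem again: updated when someone decides to update it, not when a decision proves it wrong.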
This is harder to build than a thorough system prompt. It requires the organization to treat its own implicit knowledge as an artifact worth maintaining — which means surfacing it, which requires the uncomfortable process of naming standing assumptions that everyone was benefiting from not naming.
The systems that work at organizational scale will have solved this. Not by encoding context better but by treating context as a process rather than a state.
Prior pieces in this series have addressed the individual operator: memory as infrastructure, capture versus commitment, the discipline of waiting. Those all assumed a single person holding the context in their own head, updated daily, acted on personally.
The team changes the shape of the problem. Not because teams are harder — though they are — but because the context is no longer located anywhere. It exists only in the aggregate of how the team behaves, and that aggregate is not readable from any single vantage point, including the AI’s.
The context lives between people. You cannot put it in the prompt. The first step is admitting that.
The second step — what an organization can actually do about it — is less clean than any framework suggests, and probably requires a different piece.
