Nobody Made This Decision

The most interesting organizational failures share a structure. Nobody was wrong. Every decision that contributed to the outcome was locally correct — defensible, even good. The damage was done in the space between decisions, in the gaps between the partial contexts each party was operating from.

That is a different problem from the one most accountability systems are built to address.


The standard model of organizational accountability is a backward trace. Something went wrong. Work back from it: who made the call? What did they know? Was the call reasonable given what they knew? The model assumes most failures have a responsible party: someone who had sufficient context to have known better, or who made a call that violated the information they held.

This model handles a lot of real failures correctly. It is not wrong. It just misses an entire category.

The category where every party had incomplete context. Every party made the reasonable call given what they held. And the aggregate was wrong in a way that was not visible from any single vantage point.

Call it the distributed blindspot. It is not a gap in any individual’s knowledge — it is the gap between their partial views. Nobody owned it because nobody could see it. It was not a failure of judgment. It was a failure of structure.


The Pattern

This happens constantly. Three teams each make rational decisions about a shared situation, each unaware of what the other two are doing. A project stalls because four people are each waiting on the others, each under a different assumption about who is blocking whom. A strategy runs for two years on an implicit assumption everyone believes someone else confirmed.

The damage does not show up on anyone’s record. Nobody made a wrong call. The wrong outcome happened because the right calls never aggregated into a coherent view.

Article 37 argued that the context relevant to organizational AI deployment is not documented anywhere — it lives between people as standing assumptions, enacted through decisions, readable only in the pattern of what moves and what stalls. Documentation of this layer produces a curated version that is already wrong before it is finished.

What follows from that is the second step Article 37 declined to take: when decisions are made from multiple instances of partial context, not just held by individuals but acted on simultaneously across distributed nodes, the resulting blindspot isn’t in any individual’s view. It’s in the aggregate. And the aggregate, in most organizations, belongs to nobody.


Why AI Makes This Worse Before It Makes It Better

The standard AI deployment is a single system with a context window, serving one operator. That is already a partial-context problem. The system knows what it has been shown, reasons correctly within that, and the gaps between what it was shown and what is actually true constitute the risk surface.

But increasingly, the real deployment picture is multiple instances, multiple agents, multiple systems — each operating from partial and non-overlapping context. Each correct on its own terms. The aggregate, unowned.

This is not a retrieval problem. Giving every instance access to every document does not solve it. The context that matters most was never documented — it is enacted, not stored. Put ten well-configured agents into an organization that has not solved its distributed-blindspot problem and you have ten faster generators of locally correct, collectively incoherent output.

The system cannot tell you that the context it was given is one of several partial views of the same situation, all of them incomplete, none of them flagged as such. It can only reason from what it holds.
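
A toy sketch makes the structure concrete. Every name and fact here is invented:

```python
from dataclasses import dataclass

@dataclass
class View:
    owner: str
    facts: dict  # the slice of the situation this instance was shown

views = [
    View("eng",     {"deadline": "Q3", "scope": "full"}),
    View("product", {"deadline": "Q2", "scope": "full"}),
    View("sales",   {"deadline": "Q2", "scope": "reduced"}),
]

for v in views:
    # Locally correct: each commits to exactly what its view supports.
    print(f"{v.owner} commits to {v.facts['scope']} scope by {v.facts['deadline']}")

# Nothing above is wrong. The contradiction (two deadlines, two scopes)
# exists only in the union of the views, and no instance holds the union.
# Nothing in the View type even records that its facts are partial.
```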

Most of the people building multi-agent systems are deeply focused on what each agent can see and do. Almost none of them are asking who owns the aggregate, or whether the aggregate can be owned at all.


The Accountability Gap

Here is the structural failure the distributed blindspot produces: standard accountability doesn’t attach to it.

You can hold someone accountable for a bad decision. You cannot hold anyone accountable for a structural gap — because no single person created it, no single person could have fixed it alone, and the harm doesn’t trace back to a decision. It traces back to the absence of a process that would have forced aggregation.

The absence of a process is not a decision. It is, in most organizations, a default. And that default is increasingly expensive as the speed of locally correct decisions accelerates.

The failure doesn’t announce itself. It looks, from the inside, like a series of reasonable moves. Everyone involved can account for their own actions. The gap between those accounts is where the problem lives — and gaps don’t go in anyone’s ledger.


What Aggregate Ownership Actually Requires

The fix is not more documentation. Not faster communication. Not better individual accountability. Those address individual-context failures. They do not address structural gaps.

What addresses structural gaps is explicit aggregate ownership — someone or something whose function is not to make the local decisions but to ask whether the local decisions cohere. Not an auditor checking individual calls against individual information, but an auditor checking whether the individually correct calls added up to the intended outcome.
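
A minimal sketch of the distinction, with everything in it hypothetical (the Decision shape, the predicate names, the checks), because the point is the signatures rather than any implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    actor: str
    call: str
    context: dict  # the partial view the actor decided from

def audit_individual(d: Decision) -> bool:
    # The standard model: was this call reasonable given its own context?
    # In a distributed-blindspot failure, every decision passes.
    return d.call in d.context.get("defensible_calls", [d.call])

def audit_aggregate(decisions: list[Decision],
                    coheres: Callable[[list[Decision]], bool]) -> bool:
    # The missing function: do the calls, taken together, still add up
    # to the intended outcome? It needs the whole set, plus a predicate
    # that no individual context contains.
    return coheres(decisions)
```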

This is a different function. In human organizations, the closest approximation is usually whoever has spoken to enough parties to notice when three locally correct decisions are in quiet contradiction. Their value is not knowing more in any individual domain. It is holding more simultaneous partial contexts and noticing the collision — before the collision produces an outcome nobody will be able to explain.

That function is hard to hire for, hard to retain, and almost impossible to delegate. It cannot be systematized easily because the collisions it is looking for are not predictable from any single context window. The skill is peripheral, not focal: staying attuned to the edges of what each party is assuming the others know.

Most multi-agent AI systems have no equivalent of this function at all.


The Uncomfortable Version

Aggregate ownership may be impossible above a certain scale.

Every context-aggregation mechanism I have observed has a bandwidth problem. Whoever holds the aggregate, person or system, can only hold so much of it. The more distributed the operation, the more partial contexts there are to synthesize, and the faster the aggregate degrades. Not through failure but through the genuine impossibility of the job at sufficient scale.

If that is true, it changes the design question fundamentally. It is no longer: how do we achieve aggregate coherence? It is: how do we build systems that tolerate distributed incoherence gracefully — detecting it faster, recovering from it more cheaply, making it visible before it becomes load-bearing?
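
One sketch of what detecting it faster could mean, under the large assumption that each instance can emit a structured summary of what it currently takes as given:

```python
import itertools

def divergences(assumptions_by_instance: dict[str, dict]) -> list[tuple]:
    # Pairwise diff of instance assumptions: O(n^2) in instances but O(1)
    # in aggregate state, since nobody ever holds the whole picture.
    found = []
    for (a, a_view), (b, b_view) in itertools.combinations(
            assumptions_by_instance.items(), 2):
        for key in a_view.keys() & b_view.keys():
            if a_view[key] != b_view[key]:
                found.append((key, a, a_view[key], b, b_view[key]))
    return found

# The monitor flags; it does not resolve. Resolution stays a local
# decision. Visibility is the structural fix.
for key, a, av, b, bv in divergences({
    "agent_a": {"deadline": "Q3", "owner": "eng"},
    "agent_b": {"deadline": "Q2", "owner": "eng"},
}):
    print(f"divergence on {key!r}: {a}={av!r} vs {b}={bv!r}")
```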

Those are different engineering problems. They require accepting that some degree of distributed blindspot is structural and permanent rather than a defect to be engineered away. Most of the systems being built right now — organizational and technical — are not designed from that premise. They are designed from the premise that the right process will eventually close the gap.

The gap does not close. It moves.

And in a system where every instance is reasoning faster than ever, with more confidence than ever, on context that remains as partial as it ever was — the gap moves faster too.
