Every AI model tier costs a different amount per token, produces output at a different quality level, and runs at a different speed. Running everything through the most powerful model you have access to isn’t a strategy — it’s a default. And defaults are expensive.
Model routing is the discipline of intentionally assigning the right model tier to the right task based on what the task actually requires. It’s not about using cheaper models for important work. It’s about recognizing that most work doesn’t need the most capable model, and that using a lighter model for that work frees your most capable model for the tasks where its capabilities genuinely matter.
The operators who get the most out of AI infrastructure are not the ones running the most powerful models. They’re the ones who know exactly which model to use for each type of work — and have that routing systematized so it happens automatically rather than being decided fresh on every task.
The Three-Tier Model
The current Claude family maps cleanly to three operational tiers, each suited to a different category of work.
Haiku — the volume tier. Fast, cheap, and capable of tasks that require pattern recognition, classification, and structured output without deep reasoning. The right model for taxonomy assignment, SEO meta generation, schema JSON-LD, social post drafts, AEO FAQ generation, internal link identification, and any task where you need the same operation repeated many times across a large dataset. Haiku is where batch operations live. When you’re processing a hundred posts for meta description updates or generating tag assignments across an entire site, Haiku is the model you reach for — not because quality doesn’t matter, but because Haiku is genuinely capable of these tasks and running them through Sonnet or Opus would be both slower and significantly more expensive without producing meaningfully better results.
Sonnet — the production tier. The workhorse. Capable of nuanced reasoning, long-form drafting, and the kind of editorial judgment that separates useful content from generic output. The right model for content briefs, GEO rewrites, thin content expansion, flagship social posts that need real voice, and the article drafts that feed the content pipeline. Sonnet handles the majority of actual content production work — it’s the model that runs most sessions and most pipelines. When you need something that reads like a human wrote it with genuine thought applied, Sonnet is the default choice.
Opus — the strategy tier. Reserved for work where depth of reasoning is the primary value. Long-form articles that require original synthesis, live client strategy sessions where you’re working through a complex problem in real time, and any situation where you’re making decisions that will cascade through multiple downstream systems. Opus is not for volume. It’s for the tasks where running a cheaper model would produce an output that looks similar but misses the connections, nuances, or strategic implications that make the difference between advice that’s directionally right and advice that’s actually useful.
The Routing Rules in Practice
The routing framework isn’t abstract — it maps specific task types to specific model tiers with enough precision that sessions can apply it without deliberation on each individual task.
Haiku handles: taxonomy and tag assignment, SEO title and meta description generation, schema JSON-LD generation, social post creation from existing article content, AEO FAQ blocks, internal link opportunity identification, post classification and categorization, and any extraction or formatting task applied across more than ten items.
Sonnet handles: article drafting from briefs, GEO and AEO optimization passes on existing content, content brief creation, persona-targeted variant generation, thin content expansion, editorial social posts that require voice and judgment, and the majority of single-session content production work.
Opus handles: long-form pillar articles that require original synthesis across multiple sources, live strategy sessions with clients or within complex multi-system planning work, architectural decisions about content or technical systems, and any task where the output will directly inform other significant decisions.
The dividing line between Sonnet and Opus is usually this: if the task requires judgment about what matters — not just execution of a clear brief — Opus earns its cost premium. If the task has a clear structure and Sonnet can execute it well, escalating to Opus produces marginal improvement for a significant cost increase.
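The routing rules above can be systematized as a simple lookup so sessions don’t deliberate per task. A minimal sketch — the task-type names are illustrative labels, not an established taxonomy, and the tier strings are placeholders rather than real model identifiers:

```python
# Tier labels are placeholders; swap in real model IDs in practice.
HAIKU, SONNET, OPUS = "haiku", "sonnet", "opus"

# Routing table distilled from the rules above (task names are illustrative).
ROUTES = {
    "taxonomy_assignment": HAIKU,
    "seo_meta": HAIKU,
    "schema_jsonld": HAIKU,
    "social_from_article": HAIKU,
    "aeo_faq": HAIKU,
    "internal_links": HAIKU,
    "article_draft": SONNET,
    "geo_optimization": SONNET,
    "content_brief": SONNET,
    "thin_content_expansion": SONNET,
    "pillar_article": OPUS,
    "live_strategy": OPUS,
    "architecture_decision": OPUS,
}

def route(task_type: str, default: str = SONNET) -> str:
    """Return the tier for a task type; unknown tasks fall back to the
    middle tier, matching the 'default to Sonnet when uncertain' rule."""
    return ROUTES.get(task_type, default)
```

Because the table is data rather than logic, refining the framework (moving a task category up a tier after observing failures) is a one-line change.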
The Batch API Rule
Separate from model selection is the question of whether to run tasks synchronously or in batch. The Batch API applies to any operation that meets three conditions: more than twenty items to process, not time-sensitive, and a format or classification task that produces deterministic-enough output that you can verify results after the fact rather than in real time.
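Those three conditions can be encoded as a predicate a pipeline checks before dispatching work. A sketch with illustrative field names — the threshold of twenty items comes from the rule above:

```python
from dataclasses import dataclass

@dataclass
class Job:
    item_count: int          # how many items the operation touches
    time_sensitive: bool     # is the output needed in a live session?
    verifiable_after: bool   # can results be checked after the fact?

def qualifies_for_batch(job: Job) -> bool:
    # All three conditions must hold; failing any one means run synchronously.
    return (
        job.item_count > 20
        and not job.time_sensitive
        and job.verifiable_after
    )
```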
The Batch API cuts token costs roughly in half on qualifying operations. The tradeoff is latency — batch jobs run on a delay rather than returning results immediately. For the right task category, this is a pure win: you pay less, the work gets done, and the latency doesn’t matter because the output wasn’t needed in real time anyway. For the wrong category — anything where you’re making decisions in a live session based on the output — batch is the wrong tool regardless of cost.
Taxonomy normalization across a large site is the canonical batch use case. You’re not making live decisions based on the output. The task is highly repetitive. The result is verifiable. The volume is high enough that the cost difference is meaningful. Run it in batch, verify results afterward, and move on.
The Token Limit Routing Rule
There’s a third routing decision that most operators don’t think about explicitly: what to do when a session hits a context limit mid-task. The instinctive response is to start a new session with the same model. The better response is often to drop to a smaller model.
When a Sonnet session runs out of context on a task, the task that triggered the limit is usually a constrained, well-defined operation — exactly the kind of thing Haiku handles well. Switching to Haiku for that specific operation, completing it, and returning to Sonnet for the continuation is more efficient than restarting the full session. Context limits aren’t a capability failure — they’re a resource constraint, and a smaller model with a fresh context window can often fit through the gap the larger model couldn’t navigate and complete the task cleanly.
This is the counterintuitive version of model routing: sometimes the right model for a task is determined not by the task’s complexity but by the state of the session when the task arrives.
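The fallback pattern can be sketched as a thin wrapper. This is a sketch, not a real SDK interface: `ContextLimitExceeded` and the `run(task, model)` callable are hypothetical stand-ins for however your tooling executes a task and signals a context overflow:

```python
class ContextLimitExceeded(Exception):
    """Hypothetical signal that a session hit its context limit."""

def run_with_fallback(task, run, primary="sonnet", fallback="haiku"):
    """Try the task on the primary tier; on a context-limit failure,
    retry that specific operation on the smaller tier, which starts
    with a fresh (empty) context window of its own."""
    try:
        return run(task, primary)
    except ContextLimitExceeded:
        # Resource constraint, not capability failure: a smaller model
        # with a fresh window can often complete the constrained task.
        return run(task, fallback)
```

The key design point is that the fallback only covers the single operation that overflowed — the surrounding session continues on the original tier.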
The Cost Architecture of a Content Operation
Model routing at the operation level — not just the task level — determines what a content operation actually costs to run at scale.
A single article through the full pipeline touches multiple model tiers. The brief comes from Sonnet. The taxonomy assignment goes to Haiku. The article draft is Sonnet. The SEO meta is Haiku. The GEO optimization pass is Sonnet. The schema JSON-LD is Haiku. The quality gate scan is Haiku. The final publish verification is trivial — no model needed, just a curl call.
That pipeline uses Haiku for roughly half its operations by count, even though the output is a fully optimized article. The expensive model tier — Sonnet — runs for the creative and editorial work where its capabilities matter. Haiku runs for the structured, repetitive work where it’s genuinely sufficient. The result is an article that costs a fraction of what it would cost to run every stage through Sonnet, with no meaningful quality difference in the output.
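The economics of that split can be made concrete with a back-of-envelope sketch. The stage list mirrors the pipeline above (the final publish verification is omitted because it uses no model); the per-stage cost units are illustrative placeholders, not real prices — only the rough Haiku-to-Sonnet ratio matters:

```python
# Per-stage tier assignments from the pipeline described above.
PIPELINE = [
    ("brief", "sonnet"),
    ("taxonomy", "haiku"),
    ("draft", "sonnet"),
    ("seo_meta", "haiku"),
    ("geo_pass", "sonnet"),
    ("schema_jsonld", "haiku"),
    ("quality_gate", "haiku"),
]

# Placeholder relative cost units — NOT real prices, purely illustrative.
COST = {"haiku": 1, "sonnet": 12}

routed = sum(COST[tier] for _, tier in PIPELINE)
all_sonnet = COST["sonnet"] * len(PIPELINE)
```

With these placeholder ratios, the routed pipeline costs less than half of running every stage through Sonnet, even though Sonnet still handles every creative stage.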
Multiply that across a twenty-article content swarm, or an ongoing operation managing a portfolio of sites, and the routing decisions made at the pipeline level determine whether the economics of AI-native content production are sustainable or not. Running everything through the most capable model isn’t just expensive — it makes scale impossible. Routing correctly is what makes scale practical.
When to Override the Routing Rules
Routing frameworks are defaults, not laws. There are situations where the right answer is to override the default tier upward — and being able to recognize them is as important as having the routing rules in the first place.
Override to a higher tier when: the task appears simple but the context makes it consequential (a brief that seems like a standard format task but will drive a month of content production), when you’re working with a client directly and the output will be read immediately (live sessions always get the appropriate tier regardless of task type), or when you’ve run a task through a lighter model and the output reveals that the task had more complexity than the routing rule anticipated.
The routing framework is a starting point that gets refined by observation. When Haiku produces output that’s consistently good enough for a task category, the routing rule holds. When it produces output that requires significant correction, that’s a signal to move the task category up a tier. The framework learns from its own failure modes — but only if the operator is paying attention to where the defaults break down.
Frequently Asked Questions About AI Model Routing
Is model routing worth the operational complexity?
For single-task users running occasional sessions, no — the default to a capable model is fine. For operators running content pipelines across multiple sites with high task volume, yes — the cost difference at scale is substantial, and the operational complexity of a routing framework is lower than it appears once the rules are systematized into pipeline architecture.
How do you know when a task is genuinely Haiku-appropriate vs. Sonnet-appropriate?
The test is whether the task requires judgment about what the right answer is, or execution of a clear structure. Haiku excels at the latter. If you can write a complete specification of what the output should look like before the model runs — format, constraints, criteria — it’s likely Haiku-appropriate. If the value comes from the model deciding what matters and making editorial choices, it needs Sonnet at minimum.
What about using non-Claude models for specific tasks?
The routing logic applies across model families, not just within Claude tiers. For image generation, Vertex AI Imagen tiers serve the same function — Fast for batch, Standard for default, Ultra for hero images. For specific tasks where another model has a demonstrated capability advantage, routing to that model is the right call. The principle is the same: match the model to what the task actually requires, not to what’s most convenient to run everything through.
Does model routing apply to agent orchestration?
Yes, and it’s especially important there. In a multi-agent system, the orchestrator that plans and delegates work benefits most from the highest-capability model because its output determines what every downstream agent does. The agents executing specific sub-tasks can often run on lighter models because they’re executing clear instructions rather than making judgment calls about what to do. Opus orchestrates, Haiku executes, Sonnet handles the middle layer where judgment and execution are both required.
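That orchestration split can be expressed as a role-to-tier mapping — a sketch with illustrative role names and placeholder tier strings:

```python
def tier_for_agent(role: str) -> str:
    """Map an agent's role in a multi-agent system to a model tier:
    the planner gets the deepest model because its errors cascade to
    every downstream agent; pure executors run on the lightest tier."""
    return {
        "orchestrator": "opus",   # plans and delegates work
        "executor": "haiku",      # carries out clear instructions
        "reviewer": "sonnet",     # judgment plus execution
    }.get(role, "sonnet")         # unknown roles get the middle-tier hedge
```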
How do you handle tasks where you’re not sure which tier is right?
Default to Sonnet for ambiguous cases. Haiku is the right downgrade when you have confidence a task is purely structural. Opus is the right upgrade when you have evidence that Sonnet’s output isn’t capturing the depth the task requires. Running something through Sonnet when Haiku would have sufficed costs money. Running something through Haiku when Sonnet was needed costs correction time. For most operators, the cost of correction time exceeds the cost of the token difference — which means when genuinely uncertain, the middle tier is the right hedge.