Tag: Claude on a Budget

Practical strategies to reduce Claude API and subscription costs without reducing output quality. Model routing, prompt caching, batch API, second brain architecture, and OpenRouter tactics.

  • Per-Model Content Shaping: Write Less, Get Cited More by Claude, ChatGPT, and Perplexity

    The phrase “optimize for AI search” is almost always wrong. There is no single AI search behavior. Claude, ChatGPT, and Perplexity each have distinct citation patterns — different content structures they reward, different page types they concentrate on, different signals they weight. Writing one undifferentiated article and hoping it gets cited across all three is the same mistake as writing one undifferentiated web page and hoping it ranks for every keyword. This cluster article covers the per-model citation playbook, built from GA4 data and the multi-model roundtable methodology in the Tygart Media Knowledge Lab.

    This is the final cluster in the Claude on a Budget series. For the token economics that make targeted content cheaper to produce, see Output Compression Discipline and Prompt Caching.

    The Three Citation Profiles

    Claude (Anthropic): Concentrates heavily. GA4 data from sites in the Knowledge Lab shows Claude sending approximately 54.5% of its AI referral traffic to just 2 pages per site. It rewards content that is entity-dense, structurally authoritative, and written with speakable precision — defined terms, explicit relationships between concepts, factual density over narrative padding. Claude users tend to be technical and high-intent; the model reflects that by citing content that answers with precision rather than coverage. Approximately 90% of content on a typical site is invisible to Claude — it surfaces a small authoritative set and ignores the rest.

    ChatGPT (OpenAI): Spreads references broadly. Where Claude concentrates on 2 pages, ChatGPT may reference 8-12 across the same site. It rewards breadth, recency, and natural-language accessibility. Content structured like a knowledgeable friend explaining something clearly — without jargon walls — performs well. ChatGPT users skew toward general-purpose questions; the model cites content that covers the question conversationally without assuming deep domain expertise.

    Perplexity: Research-flavored. It rewards sourced claims, comparative tables, explicit statistics, and content that reads like a researched brief rather than an opinion piece or narrative. Perplexity users are actively in research mode; the model surfaces content that looks like it did the research so the user does not have to. Citation-rich, data-dense, table-formatted content punches above its traffic weight in Perplexity referrals.

    The Per-Model Content Shape

    Element | Claude | ChatGPT | Perplexity
    Density target | High — entity-rich, precise | Medium — accessible, broad | High — sourced, comparative
    Best structure | Defined terms, explicit relationships, OASF | Conversational headers, FAQ blocks | Tables, stat callouts, comparison matrices
    Ideal length | 1,500-2,500 words with tight structure | 800-1,500 words, readable flow | 1,000-2,000 words with data anchors
    Citation trigger | Authoritative entity coverage | Query-matching accessible answer | Sourced comparative data

    The Multi-Model Roundtable Methodology

    The Tygart Media Knowledge Lab documents a specific workflow for content research that leverages multiple models’ citation profiles rather than fighting them. The pattern: route the initial research brief to a free or cheap model (Gemini Flash via OpenRouter, or Llama 3 free tier) for broad source gathering. Pass the source list to Claude for entity extraction and authoritative synthesis. Use the Claude-synthesized brief as the foundation for the final article draft. The output is content that is naturally entity-dense from Claude’s synthesis pass while covering enough ground to catch ChatGPT’s broader citation net.

    The token economics matter here: the expensive synthesis pass (Claude Sonnet or Haiku) operates on a pre-filtered source set, not raw web content. Input tokens are lower because a cheaper model did the broad sweep. Claude’s output is higher-density because it is synthesizing structured inputs rather than processing noise. This is the OpenRouter multi-model pipeline in content production form.
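
    The pattern reduces to a two-pass sketch like the one below. It assumes an OpenRouter key and OpenRouter's OpenAI-compatible API; the model slug, prompts, and token caps are illustrative, not a fixed recommendation.

    import anthropic
    from openai import OpenAI
    
    # OpenRouter speaks the OpenAI API shape; key and slug are placeholders
    openrouter = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")
    claude = anthropic.Anthropic()
    
    topic = "prompt caching economics"
    
    # Pass 1: broad source sweep on a cheap/free model
    sweep = openrouter.chat.completions.create(
        model="google/gemini-flash-1.5",  # illustrative slug
        messages=[{"role": "user", "content": f"List 10 strong sources on '{topic}' with one-line summaries."}],
    )
    source_list = sweep.choices[0].message.content
    
    # Pass 2: Claude synthesizes the pre-filtered sources into an entity-dense brief
    brief = claude.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1200,
        messages=[{"role": "user", "content": f"Extract the key entities and write an authoritative research brief from these sources:\n\n{source_list}"}],
    )
    print(brief.content[0].text)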

    Writing for Claude Citation Specifically

    If your primary goal is Claude citation — high-intent technical traffic, B2B contexts, developer audiences — the content discipline is: define every entity explicitly at first mention, state relationships between concepts directly (“X enables Y because Z”), use speakable sentence structures (subject-verb-object, no buried clauses), include a structured FAQ or definition block, and remove padding. Claude’s citation concentration on 2 pages per site means your best-performing page for Claude referrals will get the bulk of the traffic — invest in making that page entity-complete rather than spreading thin coverage across many pages.

    Writing for Perplexity Citation

    Perplexity citation optimization is the most actionable of the three because the signal is explicit: include comparative tables with real numbers, cite sources inline (even if just attributing claims to specific organizations or studies), use headers that read like research questions, and lead sections with data points rather than narrative. The content in this series — pricing tables, API code examples, usage statistics — is structured for Perplexity citation by design. Every table is a potential Perplexity extraction point.

    The Budget Connection

    Per-model content shaping is a budget strategy, not just a citation strategy. Writing one highly targeted, entity-dense 2,000-word article for Claude citation is cheaper to produce — fewer tokens, tighter output discipline — and more effective than producing three generic 1,500-word articles hoping one gets cited. Concentration over coverage: the same principle Claude uses to cite content, applied to content production itself. The output compression discipline from Cluster 6 makes this article type cheaper to generate. Dense, targeted content is both cheaper to produce with Claude and more likely to be cited by Claude. The budget and the citation strategy converge.

    The Full Claude on a Budget System

    This series has covered seven levers that compound: cold-start elimination via second brain, model routing by task tier, OpenRouter free model integration, Batch API for async 50% discount, prompt caching for 90% off repeated context, output compression discipline, and per-model citation shaping. None of these require negotiating with Anthropic’s pricing team. All of them are available today via the API. Applied together, they represent the difference between paying retail for Claude and operating it at professional efficiency — which, for most teams, means the same Claude capability at 40-70% of the sticker cost.

    Return to the full guide: Claude on a Budget: Complete Guide →

  • Output Compression Discipline: Concentrated Slices vs Full Meals

    Most Claude cost analyses focus on input tokens — the knowledge you send in. The underappreciated lever is output compression. Claude is trained to be thorough. Left unconstrained, it produces full meals: preambles, recaps, hedges, transition sentences, closing summaries. All of those tokens cost money, and most of them are unnecessary. Output discipline — getting Claude to deliver concentrated slices instead of full meals — is often the highest-leverage cost reduction available without changing models or switching to async.

    This is part of the Claude on a Budget series. For input-side compression, see The Cold-Start Problem. For pricing mechanics, see Prompt Caching.

    The Default Verbosity Problem

    Ask Claude to “summarize this document” without constraints and you will get: an opening sentence restating the task, a multi-paragraph summary, a bullet-point recap of the summary, and a closing note about what was not covered. The actual information density — insight per token — is low. You paid for 800 tokens of output and needed 150. Multiply across thousands of API calls and you have built a significant cost leak from default model behavior, not from bad prompts.

    The Output Compression Toolkit

    1. Explicit word and token caps in the prompt. “Respond in 150 words or fewer” is the single most effective instruction for reducing output tokens. Claude respects tight limits. “Be concise” does not work reliably. “150 words maximum” does. For JSON outputs: “Respond with only valid JSON, no markdown fences, no explanation.” Every word of instruction about format is recovered 10x in output reduction across repeated calls.
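
    A minimal sketch of the cap in practice (the prompt wording is from this section; document_text is a placeholder for your own source text):

    import anthropic
    
    client = anthropic.Anthropic()
    document_text = "paste or load your source text here"  # placeholder
    
    # "150 words maximum" is the load-bearing instruction; max_tokens is the backstop
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": "Summarize this document in 150 words maximum. No preamble.\n\n" + document_text,
        }],
    )
    print(response.content[0].text)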

    2. Structured output schemas. When you need structured data, define the exact JSON schema. Claude stops generating prose and fills fields. You get exactly what you specified and nothing more. The token reduction versus free-form responses is typically 40-70% for equivalent information content.

    # Free-form -- verbose, unpredictable length
    prompt_verbose = "Summarize the key points of this article and their implications."
    
    # Structured -- tight, predictable, cheaper
    prompt_structured = """Extract from this article:
    {"headline": "string", "key_points": ["string", "string", "string"], "sentiment": "positive|neutral|negative"}
    Respond with valid JSON only. No explanation."""

    3. Role-based compression priming. System prompt framing shapes output length. “You are a precise technical writer who values brevity. Never restate the task. Deliver the answer directly.” produces consistently shorter outputs than a neutral system prompt. This is prompt engineering for token economics, not just quality.
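
    As a sketch, with the system wording lifted straight from above (client as in the earlier example):

    # Brevity priming in the system prompt; shapes every call that reuses it
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=400,
        system=(
            "You are a precise technical writer who values brevity. "
            "Never restate the task. Deliver the answer directly."
        ),
        messages=[{"role": "user", "content": "Explain prompt caching in two sentences."}],
    )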

    4. Chained micro-tasks over monolithic requests. Instead of asking Claude to research, analyze, synthesize, and format in one prompt, chain smaller requests. Each call is scoped to one task with tight output constraints. Total tokens across the chain are often lower than a single unconstrained request, and intermediate outputs are cacheable — pairing naturally with the prompt caching strategy.
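
    A sketch of the chain (the step scoping and caps are illustrative; client as above):

    def ask(prompt: str, cap: int) -> str:
        # One scoped task per call, each with its own hard output ceiling
        msg = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=cap,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    
    source = "paste or load your source material here"  # placeholder
    facts = ask(f"List the 5 key facts in:\n{source}", cap=300)
    angle = ask(f"State the strongest editorial angle in 2 sentences, given:\n{facts}", cap=100)
    draft = ask(f"Write a 300-word section. No preamble.\nAngle: {angle}\nFacts: {facts}", cap=500)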

    The Notion Second Brain Application

    The operational implementation at Tygart Media runs this pattern at pipeline level. The Notion second brain eliminates the need for Claude to generate background context — it already exists in structured form. Extractions from Notion arrive as pre-formatted knowledge blocks. Claude’s task is synthesis over existing structured data, not open-ended research and explanation. Output prompts are scoped: “Given this structured data, write a 400-word section for [topic]. No preamble, no conclusion, begin directly with the first point.” The output is a concentrated slice — dense, usable, billable at a fraction of what free-form generation costs for equivalent value.
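
    In code, that synthesis call looks roughly like this — notion_block stands in for a pre-formatted extraction from the second brain, and the prompt is the one quoted above:

    notion_block = "pre-formatted knowledge block extracted from Notion"  # placeholder
    topic = "prompt caching"
    
    msg = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=600,  # a 400-word section fits comfortably under this ceiling
        messages=[{
            "role": "user",
            "content": (
                f"Given this structured data, write a 400-word section for {topic}. "
                "No preamble, no conclusion, begin directly with the first point.\n\n"
                + notion_block
            ),
        }],
    )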

    Measuring Compression Effectiveness

    Track output_tokens in your API responses. Log them per prompt template. Identify your highest-output templates and run compression interventions — tighter word caps, structured formats, role priming. The target is information density: insight delivered per output token, not raw token count. A 500-token output with 3 actionable insights beats a 200-token output with 1. Compression discipline is about removing the scaffolding (preambles, hedges, recaps) while preserving the load-bearing structure (insight, data, instruction).
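
    A minimal tracking sketch (template names are whatever your pipeline already uses; client as in the sketches above):

    from collections import defaultdict
    
    output_usage = defaultdict(list)
    
    def tracked_call(template_name: str, **kwargs):
        # Wrap every call so output_tokens is logged against its prompt template
        msg = client.messages.create(**kwargs)
        output_usage[template_name].append(msg.usage.output_tokens)
        return msg
    
    # Rank templates by average output spend to pick compression targets
    for name, counts in sorted(output_usage.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
        print(f"{name}: {sum(counts) / len(counts):.0f} avg output tokens over {len(counts)} calls")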

    max_tokens as a Hard Ceiling

    Set max_tokens conservatively in your API calls. This is your financial guardrail, not just a model parameter. For classification tasks: 50 tokens. For short summaries: 200 tokens. For structured JSON extraction: 500 tokens. For article drafts: 1,500-2,000 tokens. Leaving max_tokens at a permissive default (4,096-8,192) on every call leaves your cost ceiling unjustifiably high. Claude will rarely hit a tight ceiling on constrained tasks, but the cap prevents runaway generation on edge-case inputs that would otherwise quietly inflate your bill.
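
    The tiers above, as a config sketch (the Haiku model slug follows the naming convention used in this series and is an assumption):

    # Ceilings from this section; tune per workload
    MAX_TOKENS_BY_TASK = {
        "classification": 50,
        "short_summary": 200,
        "json_extraction": 500,
        "article_draft": 2000,
    }
    
    text = "Loving the new batch discount!"  # example input
    response = client.messages.create(
        model="claude-haiku-4-5",  # assumed slug for Haiku 4.5
        max_tokens=MAX_TOKENS_BY_TASK["classification"],
        messages=[{"role": "user", "content": "Classify sentiment as positive, neutral, or negative: " + text}],
    )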

    Next: Per-Model Content Shaping: Write Less, Get Cited More →

  • The Batch API: 50% Off for Non-Urgent Claude Work

    Every dollar you spend at the full synchronous price on non-urgent work is a dollar overpaid. Anthropic’s Message Batches API delivers a flat 50% discount on both input and output tokens — the same models, the same quality, half the price — with one constraint: results arrive asynchronously, within a 24-hour window and often much sooner.

    This is part of the Claude on a Budget series. If you’re routing models for real-time work, see Model Routing: Haiku vs Sonnet vs Opus. For cutting repeated context costs, see Prompt Caching.

    The Math First

    Standard Sonnet 4.6 pricing: $3.00 input / $15.00 output per million tokens. Batch Sonnet 4.6: $1.50 input / $7.50 output. Run 1,000 article drafts synchronously and you’re spending full rate on every one. Run the same batch overnight and you cut the bill in half — no model quality change, no output degradation, just a different delivery mechanism.
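
    A worked example at the Sonnet 4.6 rates above (the per-draft token volumes are assumptions):

    drafts = 1_000
    in_tok, out_tok = 2_000, 1_500  # tokens per draft, illustrative
    
    sync_cost = drafts * (in_tok * 3.00 + out_tok * 15.00) / 1_000_000
    batch_cost = drafts * (in_tok * 1.50 + out_tok * 7.50) / 1_000_000
    print(f"sync: ${sync_cost:.2f}, batch: ${batch_cost:.2f}")  # sync: $28.50, batch: $14.25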

    Model | Sync Input | Sync Output | Batch Input | Batch Output
    Haiku 4.5 | $1.00/M | $5.00/M | $0.50/M | $2.50/M
    Sonnet 4.6 | $3.00/M | $15.00/M | $1.50/M | $7.50/M
    Opus 4.7 | $5.00/M | $25.00/M | $2.50/M | $12.50/M

    What Qualifies as Non-Urgent Work

    The honest question is not “does this need to be fast?” — it’s “does this need to be synchronous?” Most content pipelines, data enrichment tasks, classification jobs, and bulk translation runs have no real-time dependency. The user is not waiting at a keyboard. The output feeds a queue. The 24-hour window is irrelevant. Candidates include: nightly article drafts, SEO metadata generation for large post archives, batch product description rewrites, email personalization at scale, sentiment tagging across historical data, bulk summarization of documents or transcripts.

    What does not qualify: customer-facing chat, real-time code completion, any workflow where a human is actively waiting for a response.

    The API Pattern

    import anthropic
    import time
    
    client = anthropic.Anthropic()
    
    # Example topic list -- replace with your real queue of topics
    topics = ["prompt caching", "model routing", "batch pricing"]
    
    # Build your batch — each request is a full message payload
    requests_list = [
        {
            "custom_id": f"article-{i}",
            "params": {
                "model": "claude-sonnet-4-6",
                "max_tokens": 2000,
                "messages": [
                    {"role": "user", "content": f"Write a 500-word expert summary of: {topic}"}
                ]
            }
        }
        for i, topic in enumerate(topics)
    ]
    
    # Submit the batch
    batch = client.messages.batches.create(requests=requests_list)
    print(f"Batch ID: {batch.id} | Status: {batch.processing_status}")
    
    # Poll until complete
    while True:
        status = client.messages.batches.retrieve(batch.id)
        if status.processing_status == "ended":
            break
        time.sleep(60)
    
    # Retrieve results
    for result in client.messages.batches.results(batch.id):
        custom_id = result.custom_id
        if result.result.type == "succeeded":
            text = result.result.message.content[0].text
            print(f"{custom_id}: {text[:100]}...")

    Combining Batch API With Prompt Caching

    These two discounts stack. If your batch requests share a large system prompt — a style guide, a knowledge base, a persona definition — mark that block with cache_control: {"type": "ephemeral"}. Anthropic caches it across requests in the batch that hit the same prompt prefix. You pay the cache write rate on the first hit (a 25% premium over base input) and the cache read rate (roughly 10% of base input) on every subsequent hit. A 10,000-token system prompt shared across 500 batch requests: you pay the write rate once, the read rate up to 499 times, and you are already on batch pricing for all output tokens. Because batch requests can process concurrently, cache hits are best-effort rather than guaranteed, but the compounding effect is still significant.
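
    Structurally, that means adding a system block with cache_control to each request's params. In this sketch, style_guide stands in for your shared context block, and topics and client are reused from the pattern above:

    style_guide = "your shared 10,000-token style guide or knowledge base"  # placeholder
    
    requests_list = [
        {
            "custom_id": f"article-{i}",
            "params": {
                "model": "claude-sonnet-4-6",
                "max_tokens": 2000,
                "system": [
                    {
                        "type": "text",
                        "text": style_guide,
                        # identical prefix across requests makes it cacheable
                        "cache_control": {"type": "ephemeral"},
                    }
                ],
                "messages": [{"role": "user", "content": f"Write a 500-word expert summary of: {topic}"}],
            },
        }
        for i, topic in enumerate(topics)
    ]
    batch = client.messages.batches.create(requests=requests_list)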

    Structuring Your Pipeline Around Batch Windows

    The practical architecture: identify every Claude call in your current workflow that has no real-time dependency. Move those calls behind a queue. Set a nightly cron that drains the queue into a batch submission at 11 PM. Results are ready by morning. Your synchronous Claude budget drops to customer-facing interactions only — often 20-30% of total volume for content and data operations teams.
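
    A sketch of the drain step — the queue object and its methods (pop_all, to_batch_request, record_pending) are hypothetical stand-ins for whatever job store you already run:

    def drain_queue_to_batch(queue, client):
        # Runs nightly from cron; adapt the queue/job interface to your own store
        jobs = queue.pop_all()
        if not jobs:
            return None
        batch = client.messages.batches.create(
            requests=[job.to_batch_request() for job in jobs]
        )
        queue.record_pending(batch.id)  # morning job retrieves results by this id
        return batch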

    Rate limits are separate for batch vs. synchronous traffic, so batch jobs do not compete with your real-time usage. That is a free operational benefit on top of the price cut.

    Error Handling at Scale

    Batch results include a result.type field: succeeded, errored, canceled, or expired. Always iterate the full result set and collect the custom_ids of anything that did not succeed for resubmission. At scale — thousands of requests — you will see occasional errors. Build the retry loop into your pipeline from day one, as sketched below, rather than discovering the need when 3% of a 10,000-request batch silently fails.
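
    A sketch of the collect-and-resubmit loop. handle_success is a hypothetical callback; batch and requests_list are the objects built in the API pattern above:

    failed_ids = []
    for result in client.messages.batches.results(batch.id):
        if result.result.type == "succeeded":
            handle_success(result.custom_id, result.result.message.content[0].text)
        else:  # errored, canceled, or expired
            failed_ids.append(result.custom_id)
    
    # Resubmit only the failures as a follow-up batch
    retry_requests = [r for r in requests_list if r["custom_id"] in failed_ids]
    if retry_requests:
        retry_batch = client.messages.batches.create(requests=retry_requests)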

    The Honest Tradeoff

    Batch API is a discipline, not a feature. It requires you to think about your Claude usage in terms of urgency tiers, not just prompt quality. Teams that adopt it consistently cut their Claude bills by 30-50% on total spend — not because every call moves to batch, but because the non-urgent majority does. Combined with model routing (Haiku for triage, Sonnet for batch drafts, Opus only for synchronous high-stakes reasoning), it is the highest-leverage cost lever available in the Anthropic stack today.

    Next: Prompt Caching: How to Cut Repeated Context Costs by Up to 90% →