Category: Tygart Media Editorial

Tygart Media’s core editorial publication — AI implementation, content strategy, SEO, agency operations, and case studies.

  • Claude Sonnet 5: What We Know About the Next Claude Model (2026)

    Claude Sonnet 5: What We Know About the Next Claude Model (2026)

    Model Accuracy Note — Updated May 2026

    Current flagship: Claude Opus 4.7 (claude-opus-4-7). Current models: Opus 4.7 · Sonnet 4.6 · Haiku 4.5. Claude Opus 4.6 referenced in this article has been superseded. See current model tracker →


    Anthropic hasn’t announced Claude Sonnet 5 yet — but based on how they’ve released models so far, here’s what we know about the Claude model roadmap, what Sonnet 5 is likely to look like when it arrives, and how to stay current as the lineup evolves.

    Current status (April 2026): The current Sonnet release is Claude Sonnet 4.6 (claude-sonnet-4-6). Anthropic has not announced a release date or feature set for a Sonnet 5. This page tracks what we know and will be updated as Anthropic makes announcements.

    The Current Claude Model Lineup

    Model API String Status
Claude Opus 4.7 claude-opus-4-7 ✅ Current flagship
    Claude Sonnet 4.6 claude-sonnet-4-6 ✅ Current production default
    Claude Haiku 4.5 claude-haiku-4-5-20251001 ✅ Current fast/cheap tier
    Claude Sonnet 5 ⏳ Not yet announced

    How Anthropic Releases Models

    Anthropic follows a consistent pattern: new models launch across the Haiku, Sonnet, and Opus tiers, often in sequence rather than simultaneously. Sonnet tends to be the first tier developers get meaningful access to at each generation — it’s the workhorse tier, and Anthropic has historically prioritized making it available broadly.

Major model generations arrive every several months. Point releases (like 4.5 → 4.6) happen more frequently and often bring targeted capability improvements rather than fundamental architecture changes. A “Sonnet 5” designation would signal a new major generation rather than an incremental update.

    What to Expect From Claude Sonnet 5

    Based on the pattern across Claude generations, each new major Sonnet release has delivered: improved reasoning and instruction-following, better code generation, expanded context handling, and lower cost relative to the previous generation’s Opus tier. The trajectory has consistently moved toward making the mid-tier model do what only the top-tier could do previously.

    Specific feature claims about an unannounced model would be speculation. What’s documented is the direction: Anthropic is investing heavily in extended thinking, agentic capabilities, and multimodal performance. Those priorities will almost certainly shape what Sonnet 5 looks like when it ships.

    How to Stay Current on Claude Model Releases

    The most reliable sources for Claude model announcements:

    • Anthropic’s blog (anthropic.com/news) — official launch announcements
    • Anthropic’s model documentation (docs.anthropic.com/en/docs/about-claude/models) — current API strings and deprecation notices
    • Anthropic’s changelog — incremental updates and point releases
    • This page — updated as new Claude model information becomes available

    Should You Wait for Sonnet 5?

    For most use cases, no. Claude Sonnet 4.6 is a capable production model. If you’re building something today, build on the current model and upgrade when the new one releases — that’s the standard pattern for any production API dependency. Waiting for an unannounced model before starting development rarely makes sense.

    If you’re doing initial architecture decisions and want to understand where the platform is heading, Anthropic’s research publications and roadmap hints from their public communications are worth tracking. But for day-to-day work, the current Sonnet is the right tool.

    For the current model lineup with full specs, see Claude Models Explained: Haiku vs Sonnet vs Opus. For API model strings and how to use them, see Claude API Model Strings — Complete Reference.

    Frequently Asked Questions

    Has Anthropic announced Claude Sonnet 5?

    No. As of April 2026, Anthropic has not announced Claude Sonnet 5 or provided a release date. The current Sonnet model is Claude Sonnet 4.6. This page will be updated when an announcement is made.

    What is the current version of Claude Sonnet?

    The current Claude Sonnet version is Sonnet 4.6, with the API model string claude-sonnet-4-6. It’s the production default for most API workloads.

    How often does Anthropic release new Claude models?

    Anthropic releases major model generations every several months, with point releases more frequently. The pace has been accelerating — each year has brought multiple significant model updates across the Haiku, Sonnet, and Opus tiers.


  • Claude API Model Strings, IDs and Specs — Complete Reference (April 2026)

    Claude API Model Strings, IDs and Specs — Complete Reference (April 2026)

    Model Accuracy Note — Updated May 2026

    Current flagship: Claude Opus 4.7 (claude-opus-4-7). Current models: Opus 4.7 · Sonnet 4.6 · Haiku 4.5. Claude Opus 4.6 referenced in this article has been superseded. See current model tracker →


    When you’re building on Claude via the API, you need the exact model string — not just the name. Anthropic uses specific model identifiers that change with each version, and using a deprecated string will break your application. This is the complete reference for Claude API model names, IDs, and specs as of April 2026.

    Quick reference: The current flagship models are claude-opus-4-7, claude-sonnet-4-6, and claude-haiku-4-5-20251001. Always use versioned model strings in production — never rely on alias strings that may point to different models over time.

    Current Claude API Model Strings (April 2026)

    Model API Model String Context Window Best for
Claude Opus 4.7 claude-opus-4-7 1M tokens Complex reasoning, highest quality
    Claude Sonnet 4.6 claude-sonnet-4-6 1M tokens Production workloads, balanced cost/quality
    Claude Haiku 4.5 claude-haiku-4-5-20251001 200K tokens High-volume, latency-sensitive tasks

    Anthropic publishes the full, current list of model strings in their official models documentation. Always verify there before updating production systems — model strings are updated with each new release.
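One way to guard against deploying a stale string is to check it against the live list at startup. A minimal sketch, assuming the Python SDK’s Models API (client.models.list()) is available in your SDK version:

    import anthropic

    client = anthropic.Anthropic()

    # Collect the model IDs visible to this API key
    available = {m.id for m in client.models.list()}

    PINNED_MODEL = "claude-sonnet-4-6"
    if PINNED_MODEL not in available:
        raise RuntimeError(f"{PINNED_MODEL} is not available — check the models documentation")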

    How to Use Model Strings in an API Call

    import anthropic
    
    client = anthropic.Anthropic()
    
    message = client.messages.create(
        model="claude-sonnet-4-6",  # ← model string goes here
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Your prompt here"}
        ]
    )
    
print(message.content[0].text)  # content is a list of blocks; print the first text block

    Model Selection: Which String to Use When

    The right model depends on your task requirements. Here’s the practical routing logic:

    Use Haiku (claude-haiku-4-5-20251001) when: you need speed and low cost at scale — classification, extraction, routing, metadata, high-volume pipelines where every call matters to your budget.

    Use Sonnet (claude-sonnet-4-6) when: you need solid quality across a wide range of tasks — content generation, analysis, coding, summarization. This is the right default for most production applications.

    Use Opus (claude-opus-4-7) when: the task genuinely requires maximum reasoning capability — complex multi-step logic, nuanced judgment, or work where output quality is the only variable that matters and cost is secondary.
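As a sketch, that routing logic can live in a small helper. The task categories and their mapping below are illustrative assumptions, not anything Anthropic’s API defines:

    # Illustrative task-to-model routing — categories and mapping are assumptions
    MODEL_BY_TASK = {
        "classification": "claude-haiku-4-5-20251001",  # high-volume, low-cost
        "extraction":     "claude-haiku-4-5-20251001",
        "content":        "claude-sonnet-4-6",          # production default
        "coding":         "claude-sonnet-4-6",
        "deep_reasoning": "claude-opus-4-7",            # maximum capability
    }

    def pick_model(task_type: str) -> str:
        """Return the model string for a task category, defaulting to Sonnet."""
        return MODEL_BY_TASK.get(task_type, "claude-sonnet-4-6")

    print(pick_model("extraction"))  # claude-haiku-4-5-20251001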

    API Pricing by Model

    Model Input (per M tokens) Output (per M tokens)
    Claude Haiku ~$1.00 ~$5.00
Claude Sonnet ~$3.00 ~$15.00
    Claude Opus ~$5.00 ~$25.00

    The Batch API offers roughly 50% off all rates for asynchronous workloads. For a full pricing breakdown, see Anthropic API Pricing: Every Model and Mode Explained.
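To make those rates concrete, here’s a minimal cost-estimation sketch using the approximate figures from the table above (verify current pricing before budgeting real workloads):

    # Approximate per-million-token rates from the table above
    PRICES = {
        "haiku":  {"input": 1.00, "output": 5.00},
        "sonnet": {"input": 3.00, "output": 15.00},
        "opus":   {"input": 5.00, "output": 25.00},
    }

    def estimate_cost(model: str, input_tokens: int, output_tokens: int, batch: bool = False) -> float:
        """Estimate USD cost for one workload; the Batch API is roughly half price."""
        p = PRICES[model]
        cost = input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]
        return cost / 2 if batch else cost

    # 2M input + 400K output on Sonnet: $6.00 input + $6.00 output = $12.00
    print(f"${estimate_cost('sonnet', 2_000_000, 400_000):.2f}")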

    Important: Versioned Strings vs. Aliases

    Anthropic occasionally provides alias strings (like claude-sonnet-latest) that point to the current version of a model family. These are convenient for development but can create problems in production — when Anthropic updates the model the alias points to, your application silently starts using a different model without a code change. For production systems, always pin to a versioned model string and upgrade intentionally.
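A minimal illustration of the difference — the alias string here is the example mentioned above and may not match Anthropic’s actual alias naming:

    # Alias: convenient in development, but the underlying model can change silently
    DEV_MODEL = "claude-sonnet-latest"

    # Versioned: pinned in production, upgraded deliberately via a code change
    PROD_MODEL = "claude-sonnet-4-6"

    import anthropic

    client = anthropic.Anthropic()
    message = client.messages.create(
        model=PROD_MODEL,  # always the versioned string in production paths
        max_tokens=256,
        messages=[{"role": "user", "content": "ping"}],
    )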

    Frequently Asked Questions

    What is the Claude API model string for Sonnet?

    The current Claude Sonnet model string is claude-sonnet-4-6. Always verify the current string in Anthropic’s official models documentation before deploying, as strings are updated with each new model release.

    How do I specify which Claude model to use in the API?

    Pass the model string in the model parameter of your API call. For example: model="claude-sonnet-4-6". The model string must match exactly — Anthropic’s API will return an error if the string is invalid or deprecated.

    What Claude API model should I use for production?

    Claude Sonnet is the right default for most production workloads — it balances quality and cost well across a wide range of tasks. Use Haiku when speed and cost are the priority at scale. Use Opus when the task genuinely requires maximum reasoning capability and cost is secondary.



  • Claude Prompt Generator and Improver: Templates That Actually Work

    Claude Prompt Generator and Improver: Templates That Actually Work


    Getting consistently good output from Claude isn’t about luck — it’s about prompt structure. This page covers two distinct needs: generating effective Claude prompts from scratch when you’re not sure how to start, and improving prompts that are working but producing mediocre results. Both skills are worth building deliberately.

    The core principle: Claude responds to specificity, context, and clear success criteria. The most common prompt failure is being too vague about what a good output looks like. The fixes are consistent once you know the patterns.

    How to Generate a Strong Claude Prompt

    If you’re starting from scratch and don’t know how to phrase your prompt, use this structure:

    [Role] You are [describe the expertise or perspective Claude should bring].

    [Task] I need you to [specific action verb] [specific output].

    [Context] Here’s the relevant background: [what Claude needs to know].

    [Constraints] Requirements: [format, length, tone, things to avoid].

    [Success criteria] A good output will [what done looks like].

    Not every prompt needs all five elements — a simple factual question doesn’t need a role or constraints. But for any substantive task, filling in these slots dramatically improves output quality.
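For example, a filled-in version might read like this (the scenario is invented for illustration):

    You are a senior lifecycle marketer with SaaS onboarding experience. I need you to draft a three-email re-engagement sequence for lapsed trial users. Here’s the relevant background: trials expire after 14 days, and most lapsed users never connected a data source. Requirements: under 150 words per email, plain-spoken tone, no discount offers. A good output will give each email a distinct angle and one clear call to action.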

    Claude Prompt Generator: Task-by-Task Templates

    Writing and Content

    Write a [article/email/report] about [topic] for [audience]. Tone: [professional/conversational/technical]. Length: approximately [X] words. Include: [specific sections or elements]. Avoid: [generic AI patterns, filler phrases, passive voice]. A good output will read as if written by a subject matter expert who has strong opinions.

    Analysis and Research

    Analyze [topic/document/data] and tell me [specific question]. Structure your response as: [1. Key finding, 2. Supporting evidence, 3. Implications, 4. What I should do about it]. Flag any areas where you’re uncertain or where I should verify your analysis.

    Coding

    Write a [language] function/script that [does X]. It receives [inputs] and returns [outputs]. Requirements: [error handling, logging, specific libraries]. Don’t use [specific patterns or libraries to avoid]. Include comments explaining non-obvious logic. Show me the complete working code, not pseudocode.

    Strategy and Decision-Making

    I’m deciding between [Option A] and [Option B]. Context: [relevant background]. My priorities are: [ranked list]. Constraints: [time, budget, resources]. Give me your honest assessment — including the risks in each option and what you’d actually recommend, not a balanced “here are both sides” non-answer.

    How to Improve a Prompt That’s Not Working

    If you’re getting mediocre output, diagnose the problem first. Most weak prompts fail for one of these reasons:

• Too vague — you get generic output that could apply to anyone. Fix: add your specific context, audience, and use case.
• No format specified — you get the wrong structure for your needs. Fix: specify exactly how the output should be organized.
• No success criteria — the output is fine but not quite right. Fix: describe what “done” looks like explicitly.
• No constraints — the output violates preferences you didn’t state. Fix: add what to avoid, not just what to include.
• Wrong framing — Claude answered a different question than you meant. Fix: restate from the end goal, not the mechanism.

    The Prompt Improver: A Meta-Prompt

    If you have a prompt that’s underperforming, paste it to Claude with this wrapper:

    Here’s a prompt I’ve been using that isn’t producing the results I want:

    [PASTE YOUR PROMPT]

    The problem with what I’m getting: [describe what’s wrong].
    What I actually need: [describe the ideal output].

    Rewrite the prompt to fix these issues. Then show me what the improved version produces.

    Claude is good at prompt engineering — asking it to improve its own instructions is a legitimate technique and often produces better results faster than iterating yourself.

    Advanced Techniques

    Chain of thought: For complex reasoning tasks, add “Think through this step by step before giving me your answer.” This consistently improves accuracy on problems that require multi-step logic.

Negative constraints: Telling Claude what not to do is as important as telling it what to do. “Don’t use bullet points,” “don’t start with ‘certainly’,” “don’t hedge every claim” — these improve output quality significantly for writing tasks.

    Examples: If you have a sample of the output quality or format you want, include it. “Write in the style of this example: [example]” is more precise than any tonal description.

    Iteration permission: End complex prompts with “If you need clarification before proceeding, ask me — don’t guess.” Claude will often ask a clarifying question that improves the output dramatically.

    For a library of pre-built prompts across common professional use cases, see the Claude Prompt Library.

    Frequently Asked Questions

    How do I generate better prompts for Claude?

    Use the five-element structure: role, task, context, constraints, success criteria. The most important element most people skip is success criteria — describing what a good output looks like forces clarity that improves results immediately.

    Can Claude improve its own prompts?

    Yes. Paste your underperforming prompt to Claude, describe what’s wrong with the output, and ask it to rewrite the prompt. This meta-prompt technique is effective and often faster than manual iteration.

    What is the most common prompt mistake?

    Being vague about what a good output looks like. Most prompts tell Claude what to do but don’t describe what done looks like. Adding explicit success criteria — even a sentence — consistently improves output quality.

    Does Claude respond better to longer or shorter prompts?

    Longer prompts with more context consistently outperform shorter ones for complex tasks. Claude uses everything you give it. For simple factual questions, a short prompt is fine. For substantive work, more specific context produces better results — there’s no penalty for giving Claude more to work with.


  • Claude vs ChatGPT for Coding: Which Is Actually Better in 2026?

    Claude vs ChatGPT for Coding: Which Is Actually Better in 2026?


    Coding is one of the highest-stakes comparisons between Claude and ChatGPT — because the wrong choice costs you real time on real work. I’ve used both extensively across content pipelines, GCP infrastructure, WordPress automation, and agentic development workflows. Here’s the honest breakdown of where each model wins for coding tasks in 2026.

    Short answer: Claude wins for complex multi-file work, long-context debugging, following precise coding instructions, and agentic development. ChatGPT wins for interactive data analysis and its code interpreter sandbox. For most professional development work, Claude is the stronger tool — especially if you’re using Claude Code for autonomous operations.

    Head-to-Head: Claude vs ChatGPT for Coding

• Complex instruction following — Claude wins: it holds all constraints through long outputs
• Large codebase context — Claude wins: better coherence across long context windows
• Agentic coding — Claude wins: Claude Code operates autonomously in real codebases
• Interactive data analysis — ChatGPT wins: its code interpreter runs Python in-chat
• Routine code generation — effectively a tie: both are excellent for standard patterns
• Debugging unfamiliar code — Claude is stronger: it finds non-obvious errors more consistently
• API and infrastructure work — Claude is stronger: it handles GCP, the WP REST API, and complex auth well

    Where Claude Wins for Coding

    Multi-Step, Multi-File Work

    When a task involves understanding several files, maintaining state across a long conversation, and producing a coordinated set of changes — Claude holds together more reliably. ChatGPT tends to lose track of earlier constraints as context length grows. For any real development task that spans more than a few exchanges, this matters.

    Precise Instruction Following

    I regularly give Claude detailed coding specs — exact naming conventions, specific file structures, error handling requirements, style preferences — and it holds them consistently through long outputs. ChatGPT is more likely to quietly drift from a constraint partway through. For production code where specifics matter, Claude’s adherence is meaningfully better.

    Claude Code: The Agentic Advantage

    Claude Code is a terminal-native agent that operates autonomously inside your actual codebase — reading files, writing code, running tests, managing Git. ChatGPT doesn’t have a direct equivalent at this level of system integration. For developers who want AI working inside their development environment rather than in a chat window, Claude Code is a qualitatively different capability. See Claude Code pricing for tier details.

    Debugging Complex Systems

    On non-obvious bugs — the kind where the error message points you somewhere unhelpful — Claude is more likely to trace the actual root cause. It’s more willing to say “this looks like it’s actually caused by X upstream” rather than addressing the symptom. That’s the kind of reasoning that saves hours.

    Where ChatGPT Wins for Coding

    Interactive Data Analysis

    ChatGPT’s code interpreter runs Python directly in the chat interface — you can upload a CSV, ask it to analyze and plot the data, and get a chart back in the same conversation. Claude can reason deeply about data, but doesn’t run code interactively in the web interface by default. For exploratory data analysis and visualization, ChatGPT’s sandbox is more convenient.

    OpenAI Ecosystem Integration

If you’re building on OpenAI’s stack — using their APIs, their assistants, their function calling — ChatGPT has naturally more fluent knowledge of those specific systems. Claude reasons well about OpenAI’s APIs, but they aren’t Anthropic’s own infrastructure, so it may miss edge cases in OpenAI-specific implementation details.

    For Most Developers: Claude Is the Stronger Tool

    The cases where ChatGPT wins for coding are specific and bounded — primarily data analysis and OpenAI ecosystem work. For the broader range of professional development: backend logic, API integration, infrastructure, automation, debugging, architecture decisions — Claude’s instruction-following, long-context coherence, and agentic capabilities through Claude Code give it a consistent edge.

    For a broader comparison beyond coding, see Claude vs ChatGPT: The Full 2026 Comparison. For Claude’s agentic coding tool specifically, see Claude Code vs Windsurf.

    Frequently Asked Questions

    Is Claude better than ChatGPT for coding?

    For most professional coding tasks — complex instruction following, large codebase work, debugging, and agentic development — Claude is stronger. ChatGPT’s code interpreter wins for interactive data analysis. Overall, Claude is the better coding tool for most developers.

    What is Claude Code and how does it compare to ChatGPT?

    Claude Code is a terminal-native agentic coding tool that operates autonomously inside your actual codebase — reading files, writing code, running tests. ChatGPT doesn’t have a direct equivalent at this level of system integration. It’s a qualitatively different capability, not just a better chat interface.

    Can ChatGPT run code that Claude can’t?

    ChatGPT’s code interpreter runs Python interactively in the chat interface for data analysis and visualization. Claude doesn’t do this by default in the web interface. However, Claude Code can execute code autonomously inside a real development environment, which is a different and more powerful capability for actual software development.


  • Is Claude Better Than ChatGPT? An Honest Answer From Daily Use

    Is Claude Better Than ChatGPT? An Honest Answer From Daily Use


    I’ve used both Claude and ChatGPT daily for over a year — running content pipelines, building automations, writing strategy documents, debugging code, and doing client work across more than two dozen sites. The honest answer to “is Claude better than ChatGPT?” is: it depends on exactly what you’re doing. But for most professional knowledge work, yes — Claude is better. Here’s why, and where it isn’t.

    Bottom line: Claude wins on writing quality, instruction-following, long-context work, and nuanced reasoning. ChatGPT wins on third-party integrations, image generation, and ecosystem breadth. If you’re a knowledge worker who writes, analyzes, or builds with AI — Claude is the better daily driver. If you need DALL-E, GPT plugins, or deep OpenAI ecosystem integration, ChatGPT holds the advantage there.

    Where Claude Is Better Than ChatGPT

    Writing Quality

    Claude produces more natural, less formulaic prose. ChatGPT has a tell — a certain cadence and structure that shows up in its outputs even when you try to tune it away. Claude is more likely to match your actual voice if you give it examples, and less likely to default to a listicle structure when that’s not what the task calls for. For any serious writing work — articles, client deliverables, strategy documents — Claude is noticeably better out of the box.

    Following Complex Instructions

    This is where Claude separates itself most clearly. Give both models a prompt with eight specific constraints and Claude will hold all eight through a long response. ChatGPT tends to lose track of earlier constraints as the response develops — not always, but often enough to be a real workflow problem. For systems work, content pipelines, or anything with precise formatting requirements, Claude’s instruction adherence is meaningfully better.

    Long-Context Work

    Claude handles large documents better. Load a 50-page PDF, a full codebase, or a lengthy conversation history and Claude maintains coherence across the whole context. It’s less likely to “forget” what was established earlier in the session. For research synthesis, document analysis, or any task requiring sustained attention across long inputs, Claude has a consistent edge.

    Honesty and Calibration

    Claude is more likely to tell you when it’s uncertain, push back on a bad premise, or flag a potential problem with your approach. ChatGPT skews more agreeable — which feels pleasant in the moment but can leave you with confident-sounding wrong answers. For professional work where accurate information matters, Claude’s willingness to express uncertainty is a feature, not a limitation.

    Where ChatGPT Is Better Than Claude

    Image Generation

ChatGPT includes DALL-E image generation in the standard subscription. Claude doesn’t generate images — Anthropic’s models can analyze images but not create them. If visual content creation is part of your workflow, this is a real gap.

    Third-Party Integrations

    ChatGPT has a broader plugin and integration ecosystem, particularly for consumer apps and popular productivity tools. If you need Claude to connect to a specific third-party service, Claude’s MCP (Model Context Protocol) integration is expanding rapidly — but the ChatGPT ecosystem currently has more established connections across more platforms.

    Code Interpreter

    ChatGPT’s code execution environment is more developed for data analysis use cases — running Python, generating charts, analyzing spreadsheets interactively. Claude can reason about code and data at a high level, and Claude Code handles real agentic development work, but ChatGPT’s in-chat data analysis sandbox has been more polished for that specific use case.

    The Tasks Where It’s Essentially a Tie

    Both models are excellent at: answering factual questions, explaining concepts, brainstorming, summarizing content, generating structured data formats, and basic coding assistance. For simple, well-defined tasks, the difference between Claude and ChatGPT in 2026 is marginal. The gap shows up on harder, more nuanced work.

    Price Comparison

    Tier Claude ChatGPT
    Free ✓ (limited) ✓ (limited)
    Standard paid Pro $20/mo Plus $20/mo
    Power user Max $100/mo No direct equivalent
    Team $30/user/mo $30/user/mo
    Image generation Not included DALL-E included

    For a full breakdown of Claude’s plans, see the complete Claude pricing guide. For a detailed side-by-side, see Claude vs ChatGPT: The Full 2026 Comparison.

    My Actual Setup

    I use Claude as my primary AI — it’s where I do all serious writing, strategy work, and multi-step operations. I occasionally use ChatGPT when a specific integration requires it or when I need image generation for a quick prototype. That’s the honest answer from someone who has both subscriptions and uses them daily.

    Frequently Asked Questions

    Is Claude better than ChatGPT for writing?

    Yes, for most professional writing tasks. Claude produces more natural prose, follows formatting and style instructions more precisely, and is less likely to default to generic AI-sounding patterns. For knowledge workers whose output is primarily written, Claude is the stronger tool.

    Is Claude better than ChatGPT for coding?

    Claude is stronger on complex instruction-following and long-context code tasks. ChatGPT’s in-chat code interpreter is better for interactive data analysis. For agentic coding — running autonomously inside a codebase — Claude Code has a distinct advantage. For most code generation and debugging, they’re closely matched with Claude edging ahead on nuanced problems.

    Should I switch from ChatGPT to Claude?

    If your primary work is writing, analysis, research, or building with AI, yes — Claude is the better daily driver for those tasks. If you rely heavily on DALL-E image generation, ChatGPT’s plugin ecosystem, or specific OpenAI integrations, switching entirely would cost you those capabilities. Many professionals use both.

    Can I use Claude for free?

    Yes. Claude has a free tier with daily usage limits. For details on what the free tier includes and when it makes sense to upgrade, see Is Claude Free? What You Actually Get.


  • Claude Opus vs Sonnet: Which Model Should You Actually Use?

    Claude Opus vs Sonnet: Which Model Should You Actually Use?


    Claude Opus and Claude Sonnet are both powerful — but they’re built for different jobs. Picking the wrong one either wastes money or leaves capability on the table. Here’s the practical breakdown of when each model wins, what the actual performance differences look like, and which one belongs in your default workflow.

    Quick answer: Sonnet is the right default for most people. It handles the vast majority of real-world tasks — writing, analysis, coding, research — with excellent output at a fraction of Opus’s cost. Opus is for the tasks where you need the absolute ceiling of Claude’s reasoning capability: complex multi-step problems, nuanced judgment calls, or work where quality is genuinely the only variable that matters.

    Claude Opus vs Sonnet: Head-to-Head

• Speed — Sonnet is faster: noticeably quicker on long outputs
• API cost — Sonnet is much cheaper: Opus input tokens cost roughly 1.7× more ($5/M vs $3/M)
• Complex reasoning — Opus wins: multi-step logic, edge cases, ambiguous problems
• Long-form writing — both strong: Opus has more nuance; Sonnet covers most needs
• Coding — both strong: Opus catches edge cases Sonnet misses
• Instruction following — both excellent: each handles complex instructions well
• Daily use value — Sonnet has the better ratio: cost-per-task is dramatically lower

    Where Sonnet Wins

    Sonnet is not a compromise — it’s the right tool for the majority of professional tasks. Writing, research, summarization, drafting, analysis, code generation, SEO work, email, strategy — Sonnet handles all of it at a level that’s indistinguishable from Opus for most outputs. The difference shows up at the edges: highly ambiguous problems, tasks requiring multiple competing constraints to be held simultaneously, or situations where the consequences of a slightly wrong answer are significant.

    For production API workloads, Sonnet’s cost advantage is substantial. Running high-volume content or data pipelines on Opus instead of Sonnet multiplies costs without proportional quality gains on most tasks.

    Where Opus Wins

    Opus earns its premium on genuinely hard problems. Complex multi-step reasoning where the chain of logic matters. Legal or technical documents where precision at every sentence is required. Strategic analysis where you need the model to hold and weigh competing frameworks simultaneously. Code debugging on complex, unfamiliar systems where Sonnet gives you the obvious answer and Opus finds the non-obvious one.

    I use Opus specifically for: client strategy documents where I’m synthesizing months of context, complex GCP architecture decisions, and any task where I’ve tried Sonnet and felt the output was a notch below what the problem deserved. That’s a smaller subset of work than most people assume.

    What About Haiku?

    Haiku is the third model in the family — faster and cheaper than Sonnet, designed for high-volume tasks where speed and cost dominate. Classification, extraction, routing logic, metadata generation, short-form responses. If Sonnet is your default, Haiku is the model you reach for when you need to run the same operation across hundreds or thousands of inputs cost-effectively.

    For a full model comparison including Haiku, see Claude Models Explained: Haiku vs Sonnet vs Opus.

    The Practical Routing Rule

    Use Sonnet when: the task is well-defined, the output type is familiar, and quality at the 90th percentile is sufficient. That’s most professional work.

    Use Opus when: the task is genuinely novel, involves high-stakes judgment, requires deep multi-step reasoning, or you’ve already run it on Sonnet and the output wasn’t quite right.

    Use Haiku when: you need the same operation at scale, latency matters more than depth, or cost is the primary constraint.

    Frequently Asked Questions

    Is Claude Opus better than Sonnet?

    Opus is more capable on complex reasoning tasks, but Sonnet delivers excellent results on the vast majority of professional work. For most users, Sonnet is the right default — Opus is worth reaching for when a task is genuinely hard and quality is the only variable that matters.

    How much more expensive is Opus than Sonnet?

Opus input tokens cost approximately $5 per million versus Sonnet’s $3 per million — roughly 1.7× more. Output tokens follow a similar ratio ($25 vs $15 per million). For API workloads, this cost difference is significant at scale.

    Which Claude model should I use by default?

    Sonnet is the right default for most people. It handles writing, analysis, coding, research, and strategy work with excellent quality. Upgrade to Opus when you’ve tried Sonnet on a task and the output wasn’t quite at the level the problem required.

    Does Claude Pro give access to both Opus and Sonnet?

    Yes. Claude Pro ($20/month) includes access to Haiku, Sonnet, and Opus. You can switch between models within the web interface. The subscription doesn’t limit which model you use — it limits total usage volume across all models.


  • What UCP Teaches Us About RCP: How Open Protocols Create Industry Movements

    What UCP Teaches Us About RCP: How Open Protocols Create Industry Movements

    Tygart Media Strategy
Volume Ⅰ · Issue 04 · Quarterly Position
By Will Tygart
Long-form Position · Practitioner-grade

    When Google launched the Universal Commerce Protocol at NRF in January 2026, the announcement was framed as an e-commerce story. Shopify, Walmart, Target, Visa — merchants and payment processors getting their systems ready for AI agents that shop, compare, and execute purchases without human intervention. That framing is correct but incomplete. UCP is not just a commerce standard. It is a template for how open protocols create movements.

    The Restoration Carbon Protocol is a different kind of standard in a completely different industry. But when you understand what UCP actually does architecturally — and why it succeeded where dozens of previous e-commerce APIs failed — you start to see exactly how RCP gets from a 31-article framework on tygartmedia.com to an industry-wide adopted standard that BOMA, IFMA, and institutional ESG reporters actually depend on.

    The mechanism is the same. The domain is different. And there is a version two of RCP that plugs directly into the UCP trust architecture — if the restoration industry moves in the next 18 months.


    What UCP Actually Does That Previous Commerce APIs Didn’t

    The history of e-commerce is littered with failed attempts at standardization. Every major platform — Amazon, eBay, Shopify, Magento — built its own API. Merchants implemented each one separately. Integrators spent years building custom connectors. The problem was not technical. The problem was trust and authentication. Every API required a bilateral relationship: the merchant trusted this specific buyer’s agent, that agent trusted this specific merchant’s data. Scaling to the open web required n² trust relationships. It never worked.

    UCP solved this with a different architecture. Instead of bilateral trust, it established a protocol layer — a shared standard that any compliant agent and any compliant merchant can speak without a pre-existing relationship. An AI agent that implements UCP can query any UCP-compliant catalog, check any UCP-compliant inventory, and execute against any UCP-compliant checkout — not because it has a relationship with that merchant, but because both parties speak the same authenticated protocol.

    The authentication is the product. UCP’s standardized interface means that a merchant’s decision to implement the protocol is simultaneously a decision to trust any UCP-authenticated agent. The trust is embedded in the standard, not in the bilateral relationship.

    Google’s Agent Payments Protocol (AP2), which sits alongside UCP, formalized this with “mandates” — digitally signed statements that define exactly what an agent is authorized to do and spend. The mandate is the credential. Any merchant who accepts UCP mandates accepts a verifiable statement of agent authorization without knowing anything specific about the agent that issued it.

    That architecture — open protocol, embedded authentication, mandate-based trust — is exactly what the restoration industry needs for Scope 3 emissions data. And RCP v1.0 has already built the content layer. The question for v2 is whether to build the authentication layer.


    The RCP Authentication Problem (That UCP Already Solved)

    RCP v1.0 produces per-job emissions records — JSON-structured Job Carbon Reports that restoration contractors deliver to commercial property clients for their GRESB, SBTi, and SB 253 reporting. The framework is solid. The methodology is sourced and auditable. The schema is machine-readable.

    But right now, there is no authentication layer. A property manager who receives an RCP Job Carbon Report from a contractor has no way to verify that the contractor actually follows the methodology, uses the current emission factors, or has gone through any validation process. They have to trust the contractor’s word — which is exactly the problem that makes Scope 3 data from supply chains unreliable for ESG auditors.

    This is the bilateral trust problem all over again. The property manager trusts this specific contractor’s data. That contractor trusts this specific property manager’s reporting process. It does not scale to a portfolio of 200 contractors across 800 properties.

    UCP solved the equivalent problem in commerce. The RCP organization — whoever formally governs the standard — can solve the same problem in ESG supply chain reporting with an analogous architecture.


    What RCP Certification Could Look Like in a UCP-Style Architecture

    Imagine a restoration contractor completes an RCP certification process. They demonstrate that they collect the 12 required data points, apply the current emission factors, produce Job Carbon Reports in the RCP-JCR-1.0 schema, and maintain source documents for seven years. The RCP organization validates this and issues a cryptographically signed certification credential — an RCP Mandate.

    The RCP Mandate is the contractor’s credential. It is not issued to a specific property manager. It is not dependent on a bilateral relationship. It is a verifiable statement, signed by the RCP authority, that this contractor’s emissions data meets the methodology standard. Any property manager, ESG platform, or auditor who accepts RCP Mandates can trust the data from any RCP-certified contractor — not because they know that contractor, but because the standard’s authentication is embedded in the credential.

    This is precisely how UCP mandates work in commerce. The signed statement creates protocol-level trust that does not require a pre-existing relationship.
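As a sketch of what that could look like mechanically — the payload fields, signing scheme, and library choice are all assumptions, since RCP v2 hasn’t specified any of this:

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical mandate payload — field names are illustrative
    mandate = {
        "contractor": "Example Restoration LLC",
        "rcp_version": "1.0",
        "schema": "RCP-JCR-1.0",
        "expires": "2027-04-01",
    }
    payload = json.dumps(mandate, sort_keys=True).encode()

    # The RCP authority signs the payload; anyone holding the public key can verify it
    authority_key = Ed25519PrivateKey.generate()
    signature = authority_key.sign(payload)
    public_key = authority_key.public_key()

    try:
        public_key.verify(signature, payload)  # raises if payload or signature was altered
        print("mandate verified")
    except InvalidSignature:
        print("mandate rejected")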

    The downstream effects are the same as in commerce:

    • For contractors: RCP certification becomes a competitive signal that travels with the data. An RCP Mandate delivered with a Job Carbon Report tells the property manager’s ESG team: this data does not need to be validated separately. It has already been validated by a recognized standard.
    • For property managers: They can accept RCP-certified contractor data directly into their ESG reporting workflows without manual review. The certification is the audit trail. Measurabl, Yardi Elevate, and Deepki — the ESG data management platforms most of them use — can be built to accept RCP Mandate credentials alongside RCP JSON records and flag them automatically as verified-methodology data.
    • For ESG auditors: A property portfolio where all restoration contractor data comes from RCP-certified vendors is auditable without going back to each contractor. The mandate chain is the evidence. Limited assurance under CSRD or SB 253 becomes a single check — are these vendors RCP-certified? — rather than a vendor-by-vendor methodology review.
    • For the industry: Certification creates a selection mechanism. Property managers who require RCP-certified vendors in their preferred contractor agreements are no longer asking for a one-off document. They are asking for protocol compliance — the same way a merchant asking for UCP compliance is not asking for a custom integration, they are asking for standards adoption.

    The Protocol Stack for RCP v2

    Following the UCP architecture model, a complete RCP v2 would have three layers — matching the commerce, payments, and infrastructure layers of the agentic commerce stack:

    Layer 1: The Data Layer (Already Built — RCP v1.0)

    The methodology, emission factors, JSON schema, five job type guides, audit readiness documentation, and public API. This is the equivalent of UCP’s catalog query and inventory check layer — the standardized interface for what data is produced and how it is structured. RCP v1.0 is complete at this layer.

    Layer 2: The Authentication Layer (RCP v2 Target)

    The certification program, the mandate credential, the verification mechanism. This is the equivalent of UCP’s trust and authentication architecture — the layer that makes data from one party trusted by another without a bilateral relationship. Key components:

    • RCP Contractor Certification: documented audit of data capture practices, schema compliance, emission factor vintage, and source document retention
    • RCP Mandate: cryptographically signed certification credential, issued per contractor, versioned to the RCP release used, with an expiration and renewal cycle
• Mandate verification endpoint: a public API (building on the existing tygart/v1/rcp namespace) where any platform can POST a mandate token and receive a verified/not-verified response with credential metadata — see the sketch after this list
    • Certified contractor registry: a public directory of RCP-certified organizations, queryable by name, state, and certification status
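A minimal sketch of how a platform might call that verification endpoint — the route (assuming a WordPress-style REST path under the tygart/v1/rcp namespace mentioned above), request shape, and response fields are all illustrative:

    import requests

    # Hypothetical endpoint path within the existing tygart/v1/rcp namespace
    VERIFY_URL = "https://tygartmedia.com/wp-json/tygart/v1/rcp/mandate/verify"

    def verify_mandate(mandate_token: str) -> dict:
        """POST a mandate token; expect credential metadata in the response."""
        resp = requests.post(VERIFY_URL, json={"mandate": mandate_token}, timeout=10)
        resp.raise_for_status()
        return resp.json()  # e.g. {"verified": true, "contractor": "...", "expires": "..."}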

    Layer 3: The Infrastructure Layer (RCP v2 Target)

    The machine-to-machine data exchange infrastructure — the equivalent of MCP and A2A in the agentic commerce stack. A contractor’s job management system (Encircle, PSA, Dash, Xcelerate) that natively implements RCP can transmit certified Job Carbon Reports directly to a property manager’s ESG platform without human intermediation. The report travels with the mandate credential. The platform verifies the credential, ingests the data, and flags it as RCP-verified — automatically. No email, no manual upload, no data entry.

    This is what makes it a movement rather than a document standard. The data flows automatically between authenticated parties. The human steps are eliminated. The protocol becomes infrastructure.


    Why Open Protocol Architecture Enables Movements

    UCP didn’t succeed because Google built good documentation. It succeeded because Google made it open — any merchant can implement it, any agent can speak it, no license fee, no bilateral negotiation, no approval required. Shopify and a regional boutique retailer are equal participants in the UCP ecosystem because the protocol is the credential, not the relationship with Google.

    That openness is what creates network effects. Every new UCP-compliant merchant makes the protocol more valuable for every agent. Every new UCP-compliant agent makes the protocol more valuable for every merchant. The standard grows because participation is self-reinforcing.

    RCP v1.0 is already open. The framework is CC BY 4.0 — free to use, implement, and build upon. The API is public. The emission factors are published with sources. Any restoration company can implement it today without permission.

    What RCP v2 adds is the authentication layer that makes open participation verifiable. The difference between “any company claims to follow RCP” and “any company can prove they follow RCP” is the difference between a document standard and a protocol. And the difference between a protocol and a movement is whether the infrastructure layer — the machine-to-machine data exchange — gets built.

The agentic commerce stack is on track to take roughly 18 months from UCP’s launch to meaningful adoption in production commerce systems. The RCP timeline is not 18 months from today — it’s 18 months from the moment RIA, IICRC, or a major industry insurer formally endorses the standard. That endorsement is the equivalent of Shopify and Walmart signing on to UCP at NRF. It’s the signal that tells the rest of the ecosystem: this is the standard, build to it.


    The Restoration Industry’s Unique Position

    BOMA and IFMA are working the problem from the property owner side — how do we get our vendor supply chains to report Scope 3 data? They don’t have the answer because the answer requires contractor-side infrastructure that commercial real estate organizations cannot build. They can mandate data. They cannot build the methodology.

    The restoration industry can. The 12 data points are already defined. The five job type methodologies are already published. The JSON schema is live. The API is running. The audit readiness guide exists. The only missing component is the formal certification program and the mandate credential that makes all of it protocol-grade rather than document-grade.

    This is what positions restoration as the leading industry in commercial property Scope 3 compliance — not just a participant but the infrastructure provider. The industry that built the standard that the property management industry depends on. That is a fundamentally different value proposition than “we report our emissions.”

    The parallel to UCP is exact: Google didn’t just participate in e-commerce. They built the protocol layer that made agentic commerce possible at scale. The restoration industry, through RCP, can build the protocol layer that makes supply chain Scope 3 compliance possible at scale for commercial real estate. And unlike Google, the restoration industry doesn’t need to be invited to the table. The table was already set at tygartmedia.com/rcp.


    What RIA Savannah Should Start

    The conversation at RIA Savannah on April 27 isn’t about persuading the industry to care about carbon. It’s about presenting the infrastructure that already exists and asking whether the industry wants to formally govern it. The RCP v1.0 framework, the public API, the certification roadmap — these are things that exist today. The question for RIA leadership is whether they want the restoration industry to own the protocol layer for commercial property Scope 3 compliance, or whether they want to watch a property management trade association or a Canadian software company build something proprietary in their place.

    The window is real. ESG data platforms are making vendor integration decisions now. Property managers are establishing preferred contractor Scope 3 requirements now. California SB 253’s Scope 3 deadline is 2027. GRESB assessments with contractor data coverage scoring are active this year. The infrastructure moment is not coming. It is here.

    A movement needs three things: an open standard, an authentication layer, and a network effect. RCP v1.0 is the standard. The authentication layer is the RCP v2 roadmap. The network effect starts the moment an industry organization formally endorses the protocol and restoration contractors have a reason to get certified rather than merely compliant.

    That is what UCP teaches us about RCP. The protocol is not the product. The authenticated, machine-readable, verifiable data infrastructure that emerges from the protocol is the product. And the industry that builds that infrastructure owns the category.

  • Claude Code: The Complete Beginner’s Guide for 2026

    Claude Code: The Complete Beginner’s Guide for 2026


    Claude Code is the fastest-growing AI coding tool in the developer community. The r/ClaudeCode subreddit has 4,200+ weekly contributors — roughly 3x larger than r/Codex. Anthropic reports $2.5B+ in annualized revenue attributable to Claude Code adoption. This complete guide takes you from installation to your first productive agentic coding session.

    What Is Claude Code?

    Claude Code is a terminal-native AI coding tool from Anthropic. Unlike IDE plugins that assist line-by-line, Claude Code operates at the project level — it reads your entire codebase, understands the architecture, writes and edits multiple files in a single session, runs tests, and works through complex engineering tasks autonomously. It uses Claude models with a 1-million-token context window — large enough to hold an entire codebase in memory.

    Installation

    Requirements: Node.js 18+, a Claude Max subscription ($100+/month) or Anthropic API key.

    # Install globally
    npm install -g @anthropic-ai/claude-code
    
    # Navigate to your project
    cd your-project
    
    # Authenticate
    claude login
    
    # Start a session
    claude

    Setting Up CLAUDE.md (The Most Important Step)

    CLAUDE.md is a file you create in your project root that Claude Code reads at the start of every session. It’s the most important setup step — it gives Claude the context it needs to work effectively in your specific codebase without you re-explaining everything every time.

    A good CLAUDE.md includes:

    # Project: [Your Project Name]
    
    ## Architecture
    [Brief description of how the codebase is organized]
    
    ## Tech Stack
    - Language: [Python 3.11 / Node.js 20 / etc.]
    - Framework: [Django / Next.js / etc.]
    - Database: [PostgreSQL / MongoDB / etc.]
    - Testing: [pytest / Jest / etc.]
    
    ## Coding Standards
    - [Style guide, naming conventions, etc.]
    - [Preferred patterns for this codebase]
    
    ## Common Tasks
    - Run tests: `[command]`
    - Start dev server: `[command]`
    - Lint: `[command]`
    
    ## Known Issues / Context
    - [Anything Claude should know before working]

    Key Slash Commands

    Command What It Does
    /init Scans your codebase and generates an initial CLAUDE.md
    /memory View and edit Claude’s memory for this project
    /compact Compact the conversation to free up context space
    /batch Run multiple commands or edits in one operation
    /clear Clear conversation history (start fresh)

    Your First Agentic Session

    Start Claude Code in your project directory and try:

    • “Explain the overall architecture of this codebase” — Claude reads and summarizes
    • “Add input validation to the user registration endpoint” — Claude finds the right file, writes the validation, updates tests
    • “There’s a bug where [describe issue] — find it and fix it” — Claude searches the codebase, identifies the cause, fixes it
    • “Write tests for [module or function]” — Claude reads the code and writes comprehensive tests

    Rate Limits and Token Management

    Claude Code on Max 5x gets approximately 44,000-220,000 tokens per 5-hour window. Long sessions with large codebases consume tokens quickly. Best practices:

    • Use /compact when sessions get long to free up context
    • Be specific in your requests — “fix the authentication bug in auth.py” uses fewer tokens than “look through all my files for problems”
    • Auto-compaction (beta) handles this automatically when enabled

    Frequently Asked Questions

    What subscription do I need for Claude Code?

    Claude Max at $100/month minimum. Claude Code can also be accessed via API billing — often more cost-effective for lower-volume use.

    Can Claude Code edit multiple files at once?

    Yes. Claude Code can read, edit, and create multiple files in a single session — and runs the edits atomically, so you can review and accept or reject changes.

    How do I install Claude Code on Windows?

    Claude Code requires Node.js 18+ and runs via WSL (Windows Subsystem for Linux) on Windows. Install WSL, then follow the standard npm installation steps within your WSL terminal.



  • Claude vs Amazon Q: Which AI Coding Assistant for AWS Developers?

    Claude vs Amazon Q: Which AI Coding Assistant for AWS Developers?

    Model Accuracy Note — Updated May 2026

    Current flagship: Claude Opus 4.7 (claude-opus-4-7). Current models: Opus 4.7 · Sonnet 4.6 · Haiku 4.5. Claude Opus 4.6 referenced in this article has been superseded. See current model tracker →


    For AWS developers, Claude and Amazon Q represent two distinct approaches to AI-assisted development. Amazon Q is deeply integrated into the AWS ecosystem — built to understand your AWS environment, your IAM policies, your CloudFormation stacks, and your AWS-specific workflows. Claude is a more capable general-purpose AI that can handle complex reasoning and code but requires you to provide AWS context manually. This comparison helps you choose — and explains why many AWS developers use both.

    What Amazon Q Does Well

    • AWS-native context: Q can read your actual AWS account state — running resources, IAM permissions, CloudWatch logs — without you describing them
    • AWS documentation: Q is trained specifically on AWS documentation and gives more accurate, up-to-date answers for AWS-specific questions
    • Console integration: Q is embedded in the AWS Console, CloudShell, and VS Code via the AWS Toolkit — zero additional setup for AWS users
    • Troubleshooting: Q can analyze your actual CloudWatch errors and IAM policy conflicts directly
    • Cost optimization: Q analyzes your actual usage data for cost recommendations

    What Claude Does Better

    • Code quality: Claude Opus 4.6 scores 80.8% on SWE-bench vs Amazon Q’s lower published benchmarks — for complex, multi-file code generation, Claude produces better results
    • General reasoning: Architecture decisions, trade-off analysis, and complex problem-solving — Claude reasons more deeply
    • Non-AWS work: If you’re building multi-cloud or have significant non-AWS code, Claude handles everything equally; Q is heavily AWS-optimized
• Document analysis: Claude’s 1M-token context window for reading technical specs, RFCs, or lengthy docs far exceeds Q’s capabilities
    • Writing: Technical blog posts, documentation, runbooks — Claude writes better

    Pricing Comparison

    Claude Amazon Q
    Individual $20-200/month $19/month (Q Developer Pro)
    Free tier Yes (limited) Yes (Q Developer Free)
    Business Custom $19/user/month

    Amazon Q Developer Pro at $19/month is competitive with Claude Pro at $20/month. For AWS-heavy developers, Q Pro includes features with no Claude equivalent (direct AWS account analysis). For general development, Claude holds the performance edge per dollar.

    The Combined Workflow

    Many AWS developers use Amazon Q for AWS-specific questions (CloudFormation troubleshooting, IAM policy analysis, service limits) and Claude Code for complex coding tasks (architecture, large refactors, code review). The tools are complementary rather than competing.

    Frequently Asked Questions

    Is Amazon Q better than Claude for AWS development?

    For AWS-native questions with real account context: Amazon Q wins. For complex code generation, architecture decisions, and general programming: Claude is stronger. Many AWS developers use both.

    Can Claude access my AWS account?

    Not directly. You can paste CloudFormation templates, error logs, or resource configurations into Claude for analysis. Amazon Q connects directly to your AWS account with appropriate permissions.



  • Is Claude AI Safe? Security, Ethics, and Trustworthiness Assessed

    Is Claude AI Safe? Security, Ethics, and Trustworthiness Assessed


    Safety means different things depending on who’s asking. For a parent wondering if Claude is appropriate for their teenager: yes, with caveats. For an enterprise considering Claude for sensitive workflows: that requires a more detailed answer. For a researcher wondering about AI existential risk: that’s a different conversation entirely. This guide covers all three dimensions of Claude safety in 2026.

    Content Safety: What Claude Will and Won’t Do

    Claude’s content policies are enforced through Constitutional AI training, not just a filter layer bolted on afterward. This makes them more robust than keyword blocklists. Claude will decline to:

    • Generate content facilitating violence or illegal activities
    • Produce sexual content involving minors (zero tolerance, no exceptions)
    • Provide detailed instructions for creating weapons capable of mass casualties
    • Generate content designed to facilitate harassment or stalking of specific individuals

    Claude’s refusals are imperfect — it occasionally refuses legitimate requests and occasionally allows borderline ones. But the overall calibration has improved substantially with each model generation.

    Data Security

Anthropic is a US-incorporated company subject to US law. Conversation data is stored on Anthropic’s infrastructure. Consumer accounts may be used for model training (opt-out available). Enterprise and API accounts have zero-data-retention options. Anthropic publishes its privacy policy at privacy.anthropic.com and does not sell conversation data to third parties or advertisers.

    Anthropic’s Responsible Scaling Policy

    Anthropic has published a Responsible Scaling Policy (RSP) — a commitment to evaluate Claude models against specific safety thresholds before deployment. The RSP creates public accountability: if future Claude models show dangerous capability thresholds in evaluation, Anthropic has committed to not deploying them until additional safety measures are in place. This is a meaningful governance commitment uncommon among AI companies.

    Fake Claude Scams: What Every User Should Know

    Malwarebytes and other security researchers have documented phishing campaigns using fake “Claude AI” websites to steal credentials and install malware. Key indicators of legitimate Claude access:

    • The official Claude interface is at claude.ai — any other domain claiming to be Claude is not
    • Anthropic does not offer Claude through third-party websites requiring separate account creation
    • Claude’s API is accessed at api.anthropic.com
    • If you’re ever unsure, go directly to anthropic.com and navigate from there

    Frequently Asked Questions

    Is Claude safe for kids?

    Claude has content filters that prevent most inappropriate content, but it’s not specifically designed as a children’s product. Parental supervision is recommended for younger users. Claude doesn’t have age verification on the free tier.

    Can Claude be jailbroken?

    Attempts to manipulate Claude into ignoring its safety training exist. Anthropic actively works to patch these. Claude is more robust against jailbreaking than most models, but no AI system is perfectly immune to sophisticated manipulation attempts.

