Category: Claude Code MCP Integrations

  • Claude MCP in 2026: What Actually Changed and How to Configure It Without Wasting Tokens

    If you set up Claude MCP six months ago and have not touched the config since, three things have changed underneath you: the recommended transport, how tools are loaded into context, and how teams share server configs. None of these are cosmetic. If you ignore them, you are leaving tokens, money, and stability on the table.

    This is the working Claude MCP setup I use in May 2026 — what the claude mcp add command actually does, which scope to pick, what the deprecation of SSE means in practice, and where Claude Code still falls short.

    The three-scope mental model

    Every MCP server you wire into Claude Code lives at exactly one of three scopes. Get this wrong and you will either leak credentials into git or wonder why your teammate cannot use the same database the AI just queried.

    • Local (default): the server is available only to you, only inside the current project. Config is written into your project’s entry inside ~/.claude.json. Good for project-specific servers like a dev database or a Sentry project key you do not want other repos to inherit.
    • User: the server is available to you across every project on your machine. Also stored in ~/.claude.json. This is where GitHub, search providers, and personal productivity servers belong.
    • Project: the server is written to a .mcp.json file at the repo root and shared with the whole team via git. Claude Code prompts for approval the first time a teammate opens the project, by design: anyone who can push to the repo can wire a new server into your environment. A sketch of the file follows this list.
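
    For reference, here is a minimal .mcp.json. This is a hedged sketch: the exact schema can vary by Claude Code version, and the server names, script path, and URL are illustrative rather than from any real project.

    {
      "mcpServers": {
        "my-db": {
          "command": "node",
          "args": ["./scripts/db-mcp.js"]
        },
        "observability": {
          "type": "http",
          "url": "https://example.com/mcp"
        }
      }
    }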

    When the same server is defined in more than one scope, Claude Code resolves it in this order: local beats project beats user beats plugin-provided. This is the part that bites people the most. If you have a “github” entry at user scope and someone adds a different “github” entry at project scope in .mcp.json, the project definition wins for that repo. Run claude mcp list when something behaves strangely.
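
    When the list view is not enough, claude mcp get (part of the same CLI) shows the resolved configuration for a single entry by name:

    # Everything Claude Code can currently resolve, across scopes
    claude mcp list

    # Inspect one entry to see which definition won
    claude mcp get github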

    The commands you actually need

    The CLI is more useful than the docs make it look. Three commands cover ~90% of real setup work:

    # Add a remote HTTP MCP server at user scope (available everywhere)
    claude mcp add --transport http hubspot --scope user https://mcp.hubspot.com/anthropic
    
    # Add a local stdio server scoped only to this project
    claude mcp add my-db -s local -- node ./scripts/db-mcp.js
    
    # Share a server with your team via the repo's .mcp.json
    claude mcp add my-server -s project -- node server.js

    The short flag is -s, the long is --scope. The -- separator is required for stdio servers because everything after it is treated as the literal command to spawn. Forget it and Claude Code will try to interpret your Node arguments as its own flags.
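
    The same ordering rule applies to environment variables: the -e/--env flag belongs to Claude Code, so it goes before the separator. A sketch, with a placeholder connection string:

    # -e is consumed by Claude Code; everything after -- is the spawned command
    claude mcp add my-db -s local -e DB_URL=postgres://localhost:5432/dev -- node ./scripts/db-mcp.js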

    SSE is dead. Use Streamable HTTP.

    If your MCP server documentation still tells you to use the sse transport, the documentation is stale. The MCP spec dated 2025-03-26 introduced Streamable HTTP and simultaneously deprecated HTTP+SSE. Through 2026, vendor after vendor has set hard cutoff dates — Atlassian’s Rovo MCP server keeps SSE around until June 30, 2026 and then drops it; Keboola pulled SSE on April 1; Cumulocity’s AI Agent Manager flipped to Streamable HTTP on May 8.

    Why this matters beyond a name change: SSE required Claude Code to hold a persistent connection to a single server replica, which broke horizontal scaling and made every transient network blip a reconnection drama. Streamable HTTP can run stateless, so multiple replicas behind a load balancer just work. If you have flaky MCP connections in production, the first thing to check is whether the server is still on SSE.

    For new setups, use --transport http. The older --transport sse still functions but is on the deprecation path.
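
    Migrating an existing entry is a remove-and-re-add; the server name, scope, and URL below are placeholders:

    # Drop the legacy SSE definition, then re-add over Streamable HTTP
    claude mcp remove my-server -s user
    claude mcp add --transport http my-server -s user https://example.com/mcp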

    Tool Search is the feature you should actually care about

    The single biggest change in how Claude Code uses MCP in 2026 is lazy tool loading via Tool Search. Older MCP clients dumped every tool schema from every connected server into the model’s context window at the start of every conversation. With ten servers wired up, that could easily be 20,000+ tokens of overhead before you typed a single character.

    Tool Search inverts this. Claude Code keeps only the server names and short descriptions resident. When a tool is actually needed, it fetches that tool’s full schema on demand. Anthropic’s own documentation says this reduces tool-definition context usage by roughly 95% versus eager-loading clients. In practice that means you can run a serious MCP fleet (GitHub, Sentry, a database, a search provider, your internal API) without quietly burning through your context budget. Even the 1M-token context windows on Sonnet 4.6 and Opus 4.7 do not save you here, because anything you let crowd the prompt is re-read on every turn.

    Companion feature: list_changed notifications. An MCP server can now tell Claude Code “my tool list changed” and Claude Code refreshes capabilities without a disconnect-reconnect dance. If you build your own server, emit this when you swap tool definitions and you save users a restart.
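
    On the wire this is an ordinary JSON-RPC notification as defined by the MCP spec, and the server has to declare the matching capability during initialization:

    // Sent by the server whenever its tool set changes
    { "jsonrpc": "2.0", "method": "notifications/tools/list_changed" }

    // Declared under capabilities in the server's initialize response
    { "tools": { "listChanged": true } }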

    What it still gets wrong

    Honest take: claude mcp list still does not surface scope information for every entry in a useful way — there is an open issue on the anthropics/claude-code repo asking for it (#8288 if you want to track). Project-scoped servers from .mcp.json have a separate history of not appearing in the list output (#5963) depending on how you opened the project. If you cannot find a server, check both ~/.claude.json and ./.mcp.json directly.
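
    To check both files from the shell (assuming jq is installed; note that local-scoped servers live under per-project entries in ~/.claude.json, so the second command only surfaces user scope):

    # Project-scoped servers shared via the repo
    jq '.mcpServers | keys' .mcp.json

    # User-scoped servers on this machine
    jq '.mcpServers | keys' ~/.claude.json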

    The other rough edge is the project-approval prompt. The first time you open a repo with a new .mcp.json, Claude Code asks you to approve each project-scoped server. That is the right security default. It is also infuriating in CI or any non-interactive shell, where the prompt blocks the session. The current workaround is to bake the servers in at user scope on build agents so the project-scope approval never fires in CI. A cleaner non-interactive approval flow is the single most-requested fix I see in real teams.
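
    On a build agent that workaround is a one-time bootstrap step; the server name and URL here are placeholders:

    # Register at user scope during agent setup so the project-approval
    # prompt never fires in the non-interactive session
    claude mcp add --transport http my-server --scope user https://example.com/mcp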

    The setup I would run on a new machine today

    User-scope: GitHub, a code search server, and a single notes/Notion server. Project-scope in each repo’s .mcp.json: whatever database the project owns and whatever observability backend it reports to. Local-scope: anything experimental I am evaluating but do not want my team or my other repos to inherit.
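
    The user-scope half of that, as commands (names and URLs are illustrative; substitute each vendor’s published MCP endpoint):

    claude mcp add --transport http github --scope user https://example.com/github/mcp
    claude mcp add --transport http code-search --scope user https://example.com/search/mcp
    claude mcp add --transport http notion --scope user https://example.com/notion/mcp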

    Pin --transport http on everything remote. Skip Desktop Extensions (.dxt) for anything you want versioned with the codebase — they are a Claude Desktop convenience, not a Claude Code primitive, and they hide the config from your team. Run claude mcp list when something is off and read .mcp.json directly when list is unhelpful.

    That is the whole working model. The pieces that matter — three scopes, Streamable HTTP, Tool Search — fit on a single screen. The pieces that have not caught up yet — list output, non-interactive approvals — are visible in the issue tracker and will move.

  • We Published Hundreds of Articles About Claude — And Some of Them Were Wrong. Here’s Everything We’re Doing About It.

    I owe you an apology.

    Tygart Media has been publishing about Claude — Anthropic’s AI model — for months. We’ve written about its capabilities, its pricing, its API strings, how to use it, why it matters. We positioned ourselves as a resource for people who want to understand and use Claude intelligently.

    And some of what we published was wrong.

    Not intentionally. Not carelessly in the moment. But wrong in the way that happens when you’re moving fast, publishing at scale, and not building the right systems to catch your own errors. Model version numbers were stale. Pricing figures were outdated. API strings referenced models that had been retired. If you used our content to make a decision about Claude — about which model to use, what to pay, how to call the API — some of that information may have led you in the wrong direction.

    That’s unacceptable to me. And I want to tell you exactly what happened, exactly what I found, and exactly what I’ve built to make sure it never happens again.


    How We Found Out

    It didn’t start with our own discovery. It started with a message.

    Kristin Masteller, the General Manager of Mason County PUD No. 1, reached out on LinkedIn to flag inaccuracies in our local coverage — a different set of articles, but the same underlying problem: we had published with confidence about things we hadn’t verified carefully enough.

    That message hit differently than a normal correction request. Because it made me ask a harder question: if our local coverage had errors, what about our Claude coverage? We had 200+ posts. We were publishing multiple times per day. We had never built a systematic quality check.

    So we ran one.


    The Audit: What We Found

    We wrote a scanner that pulled every post from tygartmedia.com and ran each one through a quality gate checking for four categories of errors (a minimal sketch of the first two checks follows the list):

    • Category A: Stale model names (e.g., “Claude Haiku” with no version number, or references to Claude 3 models as current)
    • Category B: Wrong pricing (e.g., Haiku priced at $0.80/MTok when the actual price is $1.00/MTok)
    • Category C: Deprecated feature claims (features or behaviors that no longer apply)
    • Category D: Cross-site contamination (content from other publication contexts bleeding into Claude coverage)
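
    Mechanically, the simplest possible version of Categories A and B is a pair of greps. The patterns and the posts/ directory here are illustrative, and the real scanner does considerably more:

    # Category A sketch: Claude 3-era model strings presented as current
    grep -rl 'claude-3-' posts/

    # Category B sketch: a known-stale Haiku price
    grep -rl '\$0\.80/MTok' posts/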

    Out of 2,333 total posts on the site, 701 touched Claude or AI topics. Of those, 65 posts had violations — 121 individual errors in total.

    We auto-corrected 28 posts immediately — wrong model strings, wrong pricing, outdated API references. 18 posts with more complex issues are still flagged for human review. We are working through them.

    I’m not sharing this to perform humility. I’m sharing it because you deserve to know the scope of the problem, and because the methodology for finding it might be useful to you.


    What We Built to Fix It

    The audit was a one-time fix. What we actually needed was a system — something that would catch these errors before they went live, and keep our model information current automatically.

    Here’s what we built:

    1. The Claude Intelligence Desk

    A dedicated Notion page that serves as the single source of truth for all Claude model information across our entire content operation. It contains the current model truth table — every model name, API string, input/output price, context window, and status — verified against Anthropic’s live documentation.

    The rule is simple: before anyone writes, edits, or publishes any article that mentions Claude, they check this page. If the “Last Verified” timestamp is more than 12 hours old, they run a refresh before proceeding.

    2. The Claude Intelligence Scanner (Automated, Twice Daily)

    A scheduled task that runs at 6 AM and 6 PM Pacific every day. It fetches Anthropic’s models documentation page, compares the current model table to what’s in our Notion desk, and if anything has changed — a new model, a price change, a deprecation — it updates the desk automatically and flags it for human review.
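
    If you want to reproduce the schedule with plain cron, it is two lines; claude-intel-scan stands in for whatever script you run, and CRON_TZ support depends on your cron implementation:

    CRON_TZ=America/Los_Angeles
    0 6,18 * * * /usr/local/bin/claude-intel-scan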

    We will never again be caught publishing outdated Claude information because a model changed and we didn’t notice.

    3. Pre-Publish Quality Gates

    Every new Claude article now runs through the quality gate categories above before it goes live. Wrong model string → blocked. Outdated pricing → blocked. Deprecated claim → flagged.

    4. The Fix Log

    Every correction we make is logged with the post ID, the original wrong content, the correct replacement, and the date. Accountability in writing, not just in words.


    Why I’m Telling You All of This

    Because I think the way most AI content operations work is broken — and I think transparency about that is more useful than pretending we had it figured out.

    The standard playbook for AI content is: write fast, publish often, stay ahead of the news cycle. The problem is that AI — and especially Claude — moves so fast that “write fast” and “stay accurate” are genuinely in tension. Models change. Prices change. Features get added, deprecated, retired. If you’re not building systems to track that, you’re going to drift.

    We drifted. We caught it. We fixed it. And now I want to open up everything we built.

    The Claude Intelligence Desk methodology, the quality gate framework, the scanner architecture — I’m making all of it available. If you’re publishing about Claude, if you’re building automations around Claude, if you’re running a content operation that touches Anthropic’s ecosystem in any way, you can use what we built. Adapt it. Improve it. Tell me what I got wrong in the system design.

    This is not a product. This is not a lead magnet. It’s just the actual work, shared openly, because that’s how we get better together.


    I Want to Build This With You

    Here’s what I’ve learned from this process: the people who catch errors fastest are the people closest to the technology. The developers who are actually calling the API. The builders running Claude in production. The researchers who read every Anthropic paper when it drops. The people in Singapore, India, the UK, Europe, Brazil — every region where Claude is being adopted rapidly and where the local context matters.

    I don’t have all of that knowledge. No single publication does.

    So I’m opening this up.

    If you use Claude seriously — if you’re building with it, writing about it, researching it, deploying it — I want you to write with us.

    What that looks like:

    • Writers and researchers: You bring the knowledge and the perspective. We provide the platform, the distribution, the SEO infrastructure, and editorial support. Your byline, your voice, your expertise.
    • Builders and developers: You’re running Claude in production. You know what actually works, what breaks, what the documentation doesn’t tell you. Write that. The practitioner perspective is the most valuable thing we can publish.
    • International voices: What does Claude adoption look like in Singapore right now? What’s the conversation in India’s developer community? How are European companies thinking about AI compliance alongside Claude? These are stories we cannot tell without you — and they’re stories our audience desperately needs.
    • Correctors: If you read something on this site that’s wrong, tell us. We have a system now. We will fix it, log it, and credit you if you want the credit.

    This is not about content volume. We publish enough already. This is about getting it right — and getting perspectives we genuinely don’t have.


    How to Get Involved

    If any of this resonates — if you want to write, contribute, correct, or just have a conversation about where Claude is going — reach out directly: will@tygartmedia.com

    Tell me where you are, what you’re building or writing or researching, and what you’d want to say if you had a platform to say it. No formal application. No content calendar to fit into. Just a conversation.

    We’re also building out a formal contributor program at tygartmedia.com/contribute/ — trade affiliates, community writers, featured contributors. If that’s more your speed, start there.

    But honestly? Just email me. Let’s figure out what makes sense.


    The work continues. The scanner runs twice a day. The quality gates are live. And if you find something wrong on this site — about Claude, about anything — I genuinely want to know.

    That’s the standard I should have been holding from the beginning. We’re holding it now.

    — Will Tygart
    Tygart Media