Category: Claude Code for Teams

  • Claude Code for Teams: What to Commit, What to .gitignore, and What Actually Survives a Pull Request

    Most teams I see roll out Claude Code by handing every engineer the install command and walking away. Three weeks later, half the repo has personal preferences committed to .claude/settings.json, the other half has a CLAUDE.md that contradicts the actual review process, and someone’s customized subagent is silently making code changes nobody else on the team understands.

    There is a better way, and it lives in the split across three files: CLAUDE.md, .claude/settings.json, and .claude/settings.local.json. Get this split right, and Claude Code becomes a force multiplier for the team. Get it wrong, and you are shipping AI-generated code that nobody owns.

    The Three-File Split

    Here is the rule, no exceptions:

    CLAUDE.md — committed. Project root. Every engineer’s session reads this at startup. Put your architectural decisions, preferred libraries, naming conventions, and a review checklist here. If you would not write it on a whiteboard for a new hire, it does not belong here.

    .claude/settings.json — committed. Team-wide tool permissions, default models, and hooks. This is the file that keeps flagship-model enthusiasts from blowing through the team’s budget when claude-sonnet-4-6 would have done the job. If you let everyone default to claude-opus-4-7 for routine refactors, your monthly invoice will tell you about it.
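
    A minimal committed settings.json, as a sketch: the model and permissions keys follow Anthropic’s settings documentation as I read it, and the npm commands and file paths are placeholders for your own project. JSON takes no comments, so verify the key names against the current docs rather than trusting this snippet.

    {
      "model": "claude-sonnet-4-6",
      "permissions": {
        "allow": ["Bash(npm run test:*)", "Bash(npm run lint)"],
        "deny": ["Read(./.env)", "Read(./secrets/**)"]
      }
    }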

    .claude/settings.local.json — gitignored. Personal preferences, individual MCP server configs, anything that varies by engineer. Add this line to your .gitignore on day one:

    .claude/settings.local.json

    If you do not, someone will commit credentials by Friday. Audit your existing repo right now: git log --all --full-history -- .claude/settings.local.json will surface any history that needs scrubbing.
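
    In script form, the whole day-one sequence is three commands. A sketch, assuming the file may already be tracked somewhere in your history:

    # Keep personal settings out of version control
    echo '.claude/settings.local.json' >> .gitignore
    # Untrack it if someone already committed it (keeps the local copy)
    git rm --cached --ignore-unmatch .claude/settings.local.json
    # Surface any history that needs scrubbing
    git log --all --full-history -- .claude/settings.local.json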

    The mistake I see most often is teams committing settings.local.json outright, usually because someone copied a tutorial that did not make the distinction clear. That one copy-paste error has caused more Claude Code rollout failures this year than anything else I have seen.

    Shared Subagents Are the Real Win

    Project subagents live in .claude/agents/ and they ship with the repo. This is where teams compound value. A subagent for security review, one for accessibility audits, one for SQL migration safety — defined once, used by every engineer, every PR.

    A subagent definition is a markdown file with YAML frontmatter and a system prompt. When you commit it, every teammate’s claude invocation can call it. The subagent inherits your CLAUDE.md context automatically, so you do not have to redefine the project’s coding standards inside each agent.
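
    A sketch of one such file, .claude/agents/security-reviewer.md in this example: the name, description, and tools frontmatter fields match the subagent format as I understand it, and the prompt body is illustrative rather than a recommended security policy.

    ---
    name: security-reviewer
    description: Reviews diffs for injection risks, committed secrets, and unsafe input handling. Use on PRs that touch request handling or auth.
    tools: Read, Grep, Glob
    ---
    You are a security reviewer for this repository. Check changed files for
    unvalidated input, string-built SQL, secrets or tokens in source, and
    logging of sensitive data. Report findings as a prioritized list with
    file and line references. Do not modify code.

    Note the tools line: a review agent that can read and search but not edit or run commands is a deliberately narrow grant, and a sensible default for anything whose output feeds a human decision.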

    Here is the trap: do not put twelve subagents in there on day one. Start with one. The team’s most painful repeated review task is the right candidate: whatever burns the most reviewer hours and pulls in multiple engineers per PR is your first subagent. After two weeks of using it, you will know whether the second one is worth defining.

    CLAUDE.md Is a Living Document, Not a Manifesto

    The longest CLAUDE.md files I see are the worst-performing. Engineers do not read 4,000-word context files, and neither does Claude in any useful way — at some point you are paying for tokens that just dilute the signal.

    The CLAUDE.md files that actually shape behavior are usually compact, structured around three things:

    1. What this codebase is and what it is not.
    2. The handful of rules that get a PR rejected — test coverage, naming, error handling, dependency policy.
    3. A pointer to where deeper documentation lives.

    If your CLAUDE.md has a “philosophy” section, delete it. If it has a “history of the project” section, delete it. The file is read every session — make every line earn its tokens.
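
    For illustration, a compact skeleton that covers all three. The project details are invented, and the section names are mine, not a required format:

    # CLAUDE.md
    This is a TypeScript payments API. It is not a general-purpose SDK;
    do not add public exports without a design review.

    ## PR-blocking rules
    - New endpoints ship with integration tests.
    - No new runtime dependencies without sign-off in the PR description.
    - Errors flow through our Result type; never throw across module boundaries.

    ## Deeper docs
    Architecture decisions live in docs/adr/. Read the relevant ADR before proposing structural changes.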

    CI/CD: Run Claude Code on PRs, Not in Place of Reviewers

    The pattern that works in CI is automated triage, not automated approval. A GitHub Actions workflow that runs Claude Code on every PR to check for things humans miss — missing tests, secrets in logs, public APIs without docstrings — adds value. A workflow that approves and merges PRs adds liability.

    Anthropic’s official GitHub Actions integration handles the auth and runs Claude Code headlessly. The realistic use cases:

    • Comment on PRs with a structured review (not a merge gate).
    • Auto-label PRs based on the diff.
    • Flag suspected regressions before a human reviewer opens the PR.

    Avoid: anything that auto-merges, anything that posts directly to production-facing systems, anything that calls a paid API on every commit to a feature branch. The bill compounds quickly when CI fires Claude on every push to every developer branch. Gate the workflow on PR-target branches only, or on labels.
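
    A sketch of the comment-only shape, gated to PRs that target main. I am assuming the anthropics/claude-code-action name and its anthropic_api_key and prompt inputs from memory of the action’s README; confirm the exact input names for the version you pin.

    name: claude-pr-triage
    on:
      pull_request:
        branches: [main]   # gate: PR-target branches only, never every push
    permissions:
      contents: read
      pull-requests: write   # needed to post the review comment
    jobs:
      triage:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: anthropics/claude-code-action@v1   # input names below are assumptions; check the README
            with:
              anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
              prompt: >
                Review this PR for missing tests, secrets in logs, and public
                APIs without docstrings. Post findings as a single comment.
                Do not approve, request changes, or merge.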

    Where Claude Code for Teams Loses Today

    The honest list:

    • No native role-based permissions inside a single repo. If you want a junior engineer’s Claude Code to be more restricted than a senior’s, you have to enforce it through settings.json and trust everyone not to edit it. The Enterprise plan adds SSO, SCIM, and audit logs at the workspace level, but inside the repo, Claude Code itself does not differentiate by role.
    • No first-class secret scanning before commits. Hooks can plug this gap, but you have to wire pre-commit yourself.
    • Shared MCP servers still require per-developer auth. A team-shared Linear or Jira MCP, for example, needs each engineer to authenticate individually.

    The Team plan covers workspace-level governance through Premium seats, the tier that actually unlocks Claude Code for teammates. The Enterprise plan layers on SSO, SCIM, and audit logs. Neither makes the in-repo configuration questions go away — those are still your team’s problem to solve.

    Model Selection Is a Team Decision

    This one matters more than people realize. Default everyone in .claude/settings.json to claude-sonnet-4-6 for day-to-day work, with claude-opus-4-7 available for explicitly hard tasks. The current Anthropic lineup as of this writing — flagship claude-opus-4-7, workhorse claude-sonnet-4-6, fast claude-haiku-4-5-20251001 — is documented at docs.anthropic.com/en/docs/about-claude/models, and the model strings change frequently enough that hard-coding them in scripts has bitten me twice this year. Read that page, do not memorize it.

    A team that defaults to flagship for everything and a team that defaults to workhorse with selective escalation will see meaningfully different invoices for substantially the same productivity. Make the choice consciously.
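
    Concretely, selective escalation means the committed default stays on the workhorse and the flagship is a deliberate, per-task override. A sketch, using this article’s model strings and the CLI’s --model flag:

    # .claude/settings.json (committed) pins the team default:
    #   "model": "claude-sonnet-4-6"
    # Escalate per invocation, only when the task justifies it:
    claude --model claude-opus-4-7 "Refactor the billing reconciliation module; the invariants are subtle."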

    The 20-Minute Setup

    If you are rolling Claude Code out to a team next week:

    1. Add .claude/settings.local.json to .gitignore. First commit, today.
    2. Write a focused CLAUDE.md covering review-blocking rules. Ship it short.
    3. Create one subagent in .claude/agents/ for the team’s most painful review task.
    4. Add a single GitHub Actions workflow that runs Claude Code on PRs in comment-only mode.
    5. Schedule a 30-minute team review of the CLAUDE.md every two weeks. Delete more than you add.

    That is it. Everything else is iteration. The teams that succeed with Claude Code treat the configuration as code — versioned, reviewed, and pruned. The teams that fail treat it as a personal productivity tool that happens to be in a shared repo.

    Decide which kind of team you want to be before the third engineer commits.

  • We Published Hundreds of Articles About Claude — And Some of Them Were Wrong. Here’s Everything We’re Doing About It.

    I owe you an apology.

    Tygart Media has been publishing about Claude — Anthropic’s family of AI models — for months. We’ve written about its capabilities, its pricing, its API strings, how to use it, why it matters. We positioned ourselves as a resource for people who want to understand and use Claude intelligently.

    And some of what we published was wrong.

    Not intentionally. Not carelessly in the moment. But wrong in the way that happens when you’re moving fast, publishing at scale, and not building the right systems to catch your own errors. Model version numbers were stale. Pricing figures were outdated. API strings referenced models that had been retired. If you used our content to make a decision about Claude — about which model to use, what to pay, how to call the API — some of that information may have led you in the wrong direction.

    That’s unacceptable to me. And I want to tell you exactly what happened, exactly what I found, and exactly what I’ve built to make sure it never happens again.


    How We Found Out

    It didn’t start with our own discovery. It started with a message.

    Kristin Masteller, the General Manager of Mason County PUD No. 1, reached out on LinkedIn to flag inaccuracies in our local coverage — a different set of articles, but the same underlying problem: we had published with confidence about things we hadn’t verified carefully enough.

    That message hit differently than a normal correction request. Because it made me ask a harder question: if our local coverage had errors, what about our Claude coverage? We had 200+ posts. We were publishing multiple times per day. We had never built a systematic quality check.

    So we ran one.


    The Audit: What We Found

    We wrote a scanner that pulled every post from tygartmedia.com and ran each one through a quality gate checking for four categories of errors:

    • Category A: Stale model names (e.g., “Claude Haiku” with no version number, or references to Claude 3 models as current)
    • Category B: Wrong pricing (e.g., Haiku priced at $0.80/MTok when the actual price is $1.00/MTok)
    • Category C: Deprecated feature claims (features or behaviors that no longer apply)
    • Category D: Cross-site contamination (content from other publication contexts bleeding into Claude coverage)

    Out of 2,333 total posts on the site, 701 touched Claude or AI topics. Of those, 65 posts had violations — 121 individual errors in total.

    We auto-corrected 28 posts immediately — wrong model strings, wrong pricing, outdated API references. Another 18 posts with more complex issues remain flagged for human review. We are working through them.

    I’m not sharing this to perform humility. I’m sharing it because you deserve to know the scope of the problem, and because the methodology for finding it might be useful to you.


    What We Built to Fix It

    The audit was a one-time fix. What we actually needed was a system — something that would catch these errors before they went live, and keep our model information current automatically.

    Here’s what we built:

    1. The Claude Intelligence Desk

    A dedicated Notion page that serves as the single source of truth for all Claude model information across our entire content operation. It contains the current model truth table — every model name, API string, input/output price, context window, and status — verified against Anthropic’s live documentation.

    The rule is simple: before anyone writes, edits, or publishes any article that mentions Claude, they check this page. If the “Last Verified” timestamp is more than 12 hours old, they run a refresh before proceeding.

    2. The Claude Intelligence Scanner (Automated, Twice Daily)

    A scheduled task that runs at 6 AM and 6 PM Pacific every day. It fetches Anthropic’s models documentation page, compares the current model table to what’s in our Notion desk, and if anything has changed — a new model, a price change, a deprecation — it updates the desk automatically and flags it for human review.

    We will never again be caught publishing outdated Claude information because a model changed and we didn’t notice.

    3. Pre-Publish Quality Gates

    Every new Claude article now runs through the quality gate categories above before it goes live. Wrong model string → blocked. Outdated pricing → blocked. Deprecated claim → flagged.

    4. The Fix Log

    Every correction we make is logged with the post ID, the original wrong content, the correct replacement, and the date. Accountability in writing, not just in words.


    Why I’m Telling You All of This

    Because I think the way most AI content operations work is broken — and I think transparency about that is more useful than pretending we had it figured out.

    The standard playbook for AI content is: write fast, publish often, stay ahead of the news cycle. The problem is that AI — and especially Claude — moves so fast that “write fast” and “stay accurate” are genuinely in tension. Models change. Prices change. Features get added, deprecated, retired. If you’re not building systems to track that, you’re going to drift.

    We drifted. We caught it. We fixed it. And now I want to open up everything we built.

    The Claude Intelligence Desk methodology, the quality gate framework, the scanner architecture — I’m making all of it available. If you’re publishing about Claude, if you’re building automations around Claude, if you’re running a content operation that touches Anthropic’s ecosystem in any way, you can use what we built. Adapt it. Improve it. Tell me what I got wrong in the system design.

    This is not a product. This is not a lead magnet. It’s just the actual work, shared openly, because that’s how we get better together.


    I Want to Build This With You

    Here’s what I’ve learned from this process: the people who catch errors fastest are the people closest to the technology. The developers who are actually calling the API. The builders running Claude in production. The researchers who read every Anthropic paper when it drops. The people in Singapore, India, the UK, Europe, Brazil — every region where Claude is being adopted rapidly and where the local context matters.

    I don’t have all of that knowledge. No single publication does.

    So I’m opening this up.

    If you use Claude seriously — if you’re building with it, writing about it, researching it, deploying it — I want you to write with us.

    What that looks like:

    • Writers and researchers: You bring the knowledge and the perspective. We provide the platform, the distribution, the SEO infrastructure, and editorial support. Your byline, your voice, your expertise.
    • Builders and developers: You’re running Claude in production. You know what actually works, what breaks, what the documentation doesn’t tell you. Write that. The practitioner perspective is the most valuable thing we can publish.
    • International voices: What does Claude adoption look like in Singapore right now? What’s the conversation in India’s developer community? How are European companies thinking about AI compliance alongside Claude? These are stories we cannot tell without you — and they’re stories our audience desperately needs.
    • Correctors: If you read something on this site that’s wrong, tell us. We have a system now. We will fix it, log it, and credit you if you want the credit.

    This is not about content volume. We publish enough already. This is about getting it right — and getting perspectives we genuinely don’t have.


    How to Get Involved

    If any of this resonates — if you want to write, contribute, correct, or just have a conversation about where Claude is going — reach out directly: will@tygartmedia.com

    Tell me where you are, what you’re building or writing or researching, and what you’d want to say if you had a platform to say it. No formal application. No content calendar to fit into. Just a conversation.

    We’re also building out a formal contributor program at tygartmedia.com/contribute/ — trade affiliates, community writers, featured contributors. If that’s more your speed, start there.

    But honestly? Just email me. Let’s figure out what makes sense.


    The work continues. The scanner runs twice a day. The quality gates are live. And if you find something wrong on this site — about Claude, about anything — I genuinely want to know.

    That’s the standard I should have been holding from the beginning. We’re holding it now.

    — Will Tygart
    Tygart Media