Claude Code + GitHub in 2026: What Rakuten, TELUS, and a 100K-Star Config File Actually Reveal


Seven hours. That’s how long it took Claude Code to autonomously navigate a 12.5-million-line codebase and implement a production-ready activation vector extraction method in vLLM for Rakuten’s engineering team — a task their developers hadn’t attempted because the codebase was simply too large to reason about at human speed. The result: 99.9% numerical accuracy and a project timeline that compressed from 24 working days to 5.

That’s not a demo. That’s a production case study. And it tells you more about where Claude Code + GitHub workflows are in 2026 than any benchmark comparison.

This post breaks down three real-world patterns from teams getting measurable results with Claude Code on GitHub: what they set up, how they structured the work, and what’s actually driving the outcomes.

The Setup That Enables Everything: CLAUDE.md First

Before any CI/CD integration, the teams getting results share a common starting point: a well-structured CLAUDE.md file that tells Claude Code exactly how to behave in their specific codebase.

Andrej Karpathy’s lean 65-line CLAUDE.md — originally shared as a personal config — accumulated over 100,000 GitHub stars by early 2026, which tells you something: developers are desperately hungry for a working template. What made it valuable wasn’t length. It was specificity. Four behavioral rules that directly address LLM coding failure modes: don’t assume context you don’t have, prefer surgical edits over full rewrites, surface tradeoffs rather than hiding them, and treat goals as declarative targets with verification loops.

That last principle is the most important for GitHub integration. When Claude knows the goal is “this PR should pass CI and not break existing tests” rather than “write code,” the outputs change materially. You get tighter diffs, fewer phantom dependencies, and PRs that actually close the issue they were created for.
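Here's a minimal sketch of how those four rules might read in practice. The wording is illustrative, not Karpathy's actual file; the point is encoding failure modes as short, checkable instructions:

# CLAUDE.md
- Don't assume context you don't have: read the relevant files before
  editing, and ask when a requirement is ambiguous.
- Prefer surgical edits over full rewrites; keep diffs minimal.
- Surface tradeoffs explicitly instead of silently picking one option.
- Treat goals as declarative targets: a change is done when the tests
  pass and CI is green, not when the code is written.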

Your CLAUDE.md lives in the repo root and is committed alongside your code, so it travels with the codebase. Claude Code GitHub Actions picks it up automatically when you use anthropics/claude-code-action@v1 — no additional configuration required.

The GitHub Actions Setup

The GA version of Claude Code GitHub Actions (@v1, released in 2026) simplified configuration considerably from the beta. Here’s the minimum viable setup:

name: Claude Code
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
jobs:
  claude:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write   # needed for OIDC-based auth flows
    steps:
      # Claude works against the checked-out tree
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

Drop this in .github/workflows/claude.yml, install the GitHub app at https://github.com/apps/claude, add your ANTHROPIC_API_KEY secret, and you can start triggering Claude with @claude in any PR or issue comment. The fastest path is running /install-github-app inside your Claude Code terminal session — it walks through the app installation, permissions, and secret setup in a single guided flow.

For teams on Google Vertex AI or Amazon Bedrock — which matters if you're operating in a regulated environment — the action supports both through OIDC-based federation: Workload Identity Federation on the Google Cloud side, an assumed IAM role via GitHub's OIDC provider on the AWS side. Bedrock uses region-prefixed model strings (us.anthropic.claude-sonnet-4-6); Vertex pulls the project ID from the auth step automatically.
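A minimal Bedrock sketch looks like the following. The IAM role ARN is hypothetical, and the use_bedrock input is carried over from the action's beta inputs; verify the exact input names against the action's README for your version:

name: Claude Code (Bedrock)
on:
  issue_comment:
    types: [created]
jobs:
  claude:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write   # required for OIDC role assumption
    steps:
      - uses: actions/checkout@v4
      # Exchange the GitHub OIDC token for temporary AWS credentials
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/claude-code-ci  # hypothetical role
          aws-region: us-east-1
      - uses: anthropics/claude-code-action@v1
        with:
          use_bedrock: "true"
          claude_args: "--model us.anthropic.claude-sonnet-4-6"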

The action defaults to Sonnet. For heavy refactoring tasks on large codebases, bump it explicitly:

claude_args: "--model claude-opus-4-7 --max-turns 10"

claude-opus-4-7 is the current flagship model. For routine PR review and issue triage, Sonnet is faster and more cost-efficient. The --max-turns flag prevents runaway jobs from consuming your Actions budget on open-ended tasks — set it to 5 for review workflows, 10–15 for implementation tasks.
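In context, that flag sits in the action step's with: block alongside the API key:

      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: "--model claude-opus-4-7 --max-turns 10"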

Rakuten: Autonomous Work at Codebase Scale

Rakuten’s engineering team used Claude Code to tackle vLLM — a 12.5-million-line open-source inference framework — without prior familiarity with the codebase. Claude Code ran autonomously for seven hours, implemented the activation vector extraction method, and delivered 99.9% numerical accuracy.

The workflow wasn’t magic. It was structured: a clear task definition scoped to a specific deliverable, a CLAUDE.md establishing Rakuten’s code patterns and testing requirements, and an allowance for autonomous tool use across the codebase. The result wasn’t just the implementation — it was the compression of a project timeline from 24 working days to 5. That’s a 79% reduction in time-to-market for a complex systems task, on a codebase that would take a new engineer weeks just to orient themselves in.

The lesson: Claude Code’s GitHub integration handles scale that would be cognitively impossible for a single developer to navigate in a normal sprint. The constraint isn’t Claude’s ability to read code — it’s whether you’ve given it a goal specific enough to work from.

TELUS: 500,000 Hours at the Portfolio Level

TELUS is a different kind of case. Rather than a single high-stakes task, TELUS rolled Claude Code out across engineering teams organization-wide and measured cumulative impact: 500,000 hours saved, engineering code shipping 30% faster, and over 13,000 custom AI solutions built by their own teams.

The 13,000 solutions number is the most telling. It means that once developers have Claude Code in their GitHub workflow, they stop waiting for platform teams to build internal tooling. They build it themselves — PR automation, internal API clients, test generators, schema migration scripts — because the cost of shipping something useful dropped to a well-scoped conversation with an @claude trigger.

The 30% improvement in shipping speed translates directly into shorter cycle times: fewer context switches between writing code and writing tests, and less time waiting for review when PRs arrive with Claude-generated documentation already attached. That number compounds across a large engineering org in ways that individual productivity improvements don't.

The Pattern Across All Three

Three things appear consistently across every team getting results with Claude Code on GitHub:

A real CLAUDE.md — not a placeholder. A file with codebase-specific rules: what patterns to follow, what to avoid, how tests should be structured, what done looks like. Karpathy’s version works because it encodes failure modes. Yours should encode your team’s standards.

Goal-oriented triggers, not open-ended requests. "@claude implement the auth middleware from issue #42 following our existing token validation pattern" outperforms "@claude help with this." The action inherits your CLAUDE.md automatically, but the trigger needs to state a specific, bounded goal with a clear definition of done.

Autonomous mode for the right task class. Bounded, well-defined tasks (implement this spec, fix this failing test, write a migration for this schema change) run better autonomously than open-ended exploration. Use --max-turns 10 and let it run; a workflow sketch follows this list. Reserve manual review for the output, not the process.
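As a concrete sketch of that third item: a separate workflow that kicks off an autonomous run when an issue gets a claude-implement label (the label name is hypothetical, and the prompt input for non-interactive runs should be verified against the v1 action's README):

name: Claude Implement
on:
  issues:
    types: [labeled]
jobs:
  implement:
    if: github.event.label.name == 'claude-implement'
    runs-on: ubuntu-latest
    permissions:
      contents: write        # Claude pushes a branch with the implementation
      pull-requests: write   # and opens a PR for review
      issues: read
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Implement the task described in this issue. Done means tests pass and CI is green."
          claude_args: "--max-turns 10"   # budget cap for the autonomous run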

Where to Start

Run /install-github-app in your Claude Code terminal. That one command handles app installation, permission setup, and secret configuration. Add a CLAUDE.md to your repo root — even five lines of real project standards beats a blank file. Open a test issue, write a specific @claude comment with a bounded task, and watch the action run.

Rakuten’s 7-hour autonomous run and TELUS’s 500,000 hours didn’t start with a six-month AI rollout plan. They started with a config file, a workflow YAML, and a task specific enough for Claude to actually finish.
