Category: Anthropic

News, analysis, and profiles covering Anthropic the company and its team.

  • Claude Cowork Changelog: What Changed in Q1 2026


    Claude AI · Tygart Media · Updated April 2026
    Q1 2026 summary: Cowork went from research preview to generally available. Computer use launched for Pro/Max users. Scheduled and recurring tasks shipped. The sessiondata.img disk-full bug (GitHub #30751) remained open all quarter — the workaround is manual. Plugin marketplace launched in April.

    Claude Cowork shipped more meaningful features in Q1 2026 than in any prior quarter. This is the complete log of what changed, what shipped, and what stayed broken — documented for teams managing Cowork deployments who need to know what actually changed and when.

    January 2026: Foundation Stability

    January was primarily infrastructure hardening. The Cowork runner environment received reliability improvements addressing the most common mid-task failures — streams aborting on slow API responses, sub-agent MCP tool inheritance failures, and session cleanup bugs that left stale working directories. No major feature launches, but the stability improvements reduced the frequency of mid-run failures that had characterized late 2025 Cowork usage.

    Claude Code had already gained an iOS app (October 2025) and a web version, both of which fed into Cowork's remote dispatch capabilities in Q1. By January, assigning Cowork tasks from a phone was stable enough for regular use.

    February 2026: Model Upgrades Change Everything

    February 5: Claude Opus 4.6 launched. February 17: Claude Sonnet 4.6 launched. Both significantly improved Cowork task quality — particularly for long-horizon agentic sessions where the original 4.0 models would lose coherence mid-task. Sonnet 4.6’s dramatically improved computer use capability (scoring 72.7% on OSWorld) made computer-use Cowork tasks reliable for the first time. Tasks that previously required constant human intervention to stay on track became genuinely autonomous.

    The 1M token context window entered beta on both models in February, enabling Cowork tasks to hold significantly more context across long sessions — particularly valuable for content pipelines processing large document sets or cross-database synthesis tasks in Notion.

    March 2026: Computer Use Reaches Cowork

    March brought the integration of computer use into Cowork for Pro and Max plan users. Claude gained the ability to open files, navigate browsers, click through interfaces, and operate software within Cowork sessions — no additional setup required for Pro/Max subscribers. This was the most significant capability expansion of the quarter: Cowork tasks could now interact with software that doesn’t have an API, including legacy desktop applications and web interfaces without structured data access.

    Dispatch — Cowork’s task queue feature — was extended to support computer use actions, allowing scheduled tasks to include browser automation and desktop interaction steps alongside the existing MCP tool calls and bash operations.

    The Cowork VM disk-full bug (GitHub issue #30751) was acknowledged by Anthropic during March but not resolved. Power users with many skills installed continued to hit the useradd: cannot create directory error every 40-50 sessions. The documented workaround — moving sessiondata.img to reset the VM — remained the only fix. See the full fix guide.

    April 2026: General Availability

    Cowork reached general availability on macOS and Windows via Claude Desktop in April, removing the “research preview” label it had carried since launch. The GA release added enterprise features that had been absent from the preview: usage analytics, OpenTelemetry support for monitoring Cowork activity, and role-based access controls for Enterprise plans allowing admins to define which capabilities each team group can access.

    A plugin marketplace launched for Team and Enterprise plans with admin controls. Admins can now approve, restrict, or block specific plugins org-wide. The Customize section in Claude Desktop was reorganized to group skills, plugins, and connectors in one place.

    Scheduled and recurring task creation was formalized in the UI — previously requiring config file editing, now accessible from within the app. This was the feature most requested by Cowork power users throughout Q1.

    What Remained Broken Through Q1

    The sessiondata.img disk-full bug was the most significant ongoing issue. It affected every power user with a substantial skill library and required periodic manual intervention. No automatic session cleanup shipped in Q1. The manual workaround is documented at Claude Cowork useradd Failed Error Fix.

    Machine-sleep task skipping also remained unresolved — scheduled tasks that fire when a machine is asleep are silently skipped with no retry. Teams running reliable scheduled automation continued to need an always-on machine or a cloud-side solution.

    Q2 2026 Outlook

    The disk-full bug fix and automatic session cleanup are the most anticipated Q2 items. Agent teams (available on Max plans) are expected to expand with better orchestration tooling. Claude 5, expected Q2-Q3, will bring model quality improvements that should further improve long-horizon Cowork task reliability.

    When did Claude Cowork become generally available?

    Claude Cowork reached general availability on macOS and Windows in April 2026. It had been in research preview since its initial launch in late 2025.

    What was the biggest Cowork improvement in Q1 2026?

    The February launch of Claude Sonnet 4.6 and Opus 4.6 most improved Cowork task quality — especially computer use tasks, which became reliably autonomous with Sonnet 4.6’s improved OSWorld scores. March brought computer use to Cowork for Pro/Max users directly.

    Was the Cowork disk-full bug fixed in Q1 2026?

    No. GitHub issue #30751 (sessiondata.img filling up) remained open through Q1 2026. The manual workaround — moving sessiondata.img to reset the VM — is the only fix as of April 2026.

    Related: How Claude Cowork Can Actually Train Your Staff to Think Better — a 7-part series on using Cowork as a training tool across industries.


  • WordPress REST API for Publishers: How to Connect Claude to WordPress Without Plugins


    Claude AI · Tygart Media
    What this enables: Publishing articles to WordPress programmatically from Claude, Python scripts, GCP Cloud Run jobs, or any HTTP client — without plugins, without Elementor, without touching the WP admin. The same pipeline that powers 27+ managed sites publishing thousands of articles per month.

    WordPress has a fully functional REST API built in. Most people never use it because they don’t know it’s there. For publishers, content operations teams, and anyone running Claude-powered content workflows, the REST API is the infrastructure that eliminates manual publishing and enables automation at scale. Here’s how it works and how to wire Claude to it.

    What the WordPress REST API Can Do

    The REST API exposes every major WordPress function over HTTP: create posts, update posts, get posts, manage categories and tags, upload media, manage users. Every action you can take in the WordPress admin can be performed via API call. No plugin required: the REST API has shipped in WordPress core since version 4.7.

    Authentication: Application Passwords

    The simplest authentication method for Claude-to-WordPress connections is WordPress Application Passwords — a built-in feature (WordPress 5.6+) that generates a dedicated password for API access without exposing your main login credentials.

    To generate one: WP Admin → Users → Your Profile → Application Passwords → enter a name → click Add New. Copy the generated password immediately; it is only shown once. The password is displayed with spaces; remove them before using it in API calls.

    Authenticate using HTTP Basic Auth:

    Authorization: Basic base64(username:app_password)
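    In Python, that header can be assembled in a few lines. A minimal sketch; the function name and credential values are illustrative:

    ```python
    import base64

    def basic_auth_header(username: str, app_password: str) -> str:
        """Build the HTTP Basic Auth header value for the WP REST API.

        WordPress displays application passwords with spaces
        (e.g. "abcd efgh ijkl mnop"); strip them before encoding.
        """
        credentials = f"{username}:{app_password.replace(' ', '')}"
        encoded = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
        return f"Basic {encoded}"
    ```

    Pass the returned value as the Authorization header on every request.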

    Publishing a Post via API

    A complete post publish call:

    POST https://yoursite.com/wp-json/wp/v2/posts
    Authorization: Basic [base64 credentials]
    Content-Type: application/json

    {
      "title": "Your Post Title",
      "content": "<p>Full HTML content here</p>",
      "excerpt": "Your SEO meta description (140-160 chars)",
      "status": "publish",
      "categories": [5, 12],
      "tags": [34, 67, 89],
      "slug": "your-post-slug"
    }

    The response returns the new post ID and URL. Log these — you need the post ID for any subsequent updates.
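    The same call in Python, using only the standard library. A sketch: the site URL, credentials, and term IDs are placeholders, and the Authorization header is the Basic value described above:

    ```python
    import json
    import urllib.request

    def publish_post(site_url: str, auth_header: str, payload: dict) -> tuple:
        """POST a post payload to the WP REST API; return (post_id, post_url)."""
        req = urllib.request.Request(
            f"{site_url}/wp-json/wp/v2/posts",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": auth_header,
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.load(resp)
        # Log both: the ID is required for any subsequent update call
        return body["id"], body["link"]

    payload = {
        "title": "Your Post Title",
        "content": "<p>Full HTML content here</p>",
        "excerpt": "Your SEO meta description (140-160 chars)",
        "status": "publish",
        "categories": [5, 12],   # numeric term IDs, not names
        "tags": [34, 67, 89],
        "slug": "your-post-slug",
    }
    # publish_post("https://yoursite.com", auth_header, payload)
    ```

    Note that categories and tags take numeric term IDs; if you only have names, look the IDs up first via /wp-json/wp/v2/categories and /wp-json/wp/v2/tags.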

    Wiring Claude Into the Pipeline

    The standard Claude-to-WordPress pipeline: Claude generates the article content (with SEO optimization, schema markup, and FAQ sections baked in), a Python or Node.js script assembles the API payload, the payload POSTs to the WordPress REST endpoint, and the response confirms publication. For Cowork tasks, this runs on a schedule without human intervention.

    The critical rule: Notion first, WordPress second. Every article goes to a Notion page before publishing to WordPress. Notion is the storage and version control layer; WordPress is the distribution layer. If you ever need to republish, update, or audit, you have a source of truth that isn’t locked inside the WordPress database.

    Handling WAF Blocks

    Many managed WordPress hosts (WP Engine, SiteGround) run Web Application Firewalls that block API calls from cloud IP addresses. Symptoms: 403 Forbidden errors on POST requests, even with correct credentials. Two solutions: route API calls through a Cloud Run proxy service that presents a different IP profile, or whitelist your specific GCP IP range in the hosting provider’s WAF settings. For SiteGround specifically, direct whitelisting is the most reliable path — the proxy approach has mixed results due to SiteGround’s aggressive WAF configuration.

    Schema and SEO Metadata

    The WordPress REST API supports all Yoast SEO and Rank Math meta fields as post meta. To set SEO title, meta description, and schema markup programmatically, include the relevant meta fields in your POST payload. For Yoast: _yoast_wpseo_title and _yoast_wpseo_metadesc. For Rank Math: rank_math_title and rank_math_description. Inject JSON-LD schema directly into the post content as a <script type="application/ld+json"> block — it renders correctly on the front end and passes Google’s rich results validator.
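    Both pieces sketched in Python. One caveat worth hedging: WordPress treats underscore-prefixed meta keys as protected, so writing the Yoast fields via REST typically requires that they be registered for REST access server-side (by the SEO plugin or a small snippet); verify this on your install before relying on it. Helper names here are illustrative:

    ```python
    import json

    def article_schema_block(headline: str, date_published: str) -> str:
        """Render a JSON-LD Article block to append to the post HTML."""
        schema = {
            "@context": "https://schema.org",
            "@type": "Article",
            "headline": headline,
            "datePublished": date_published,
        }
        return ('<script type="application/ld+json">'
                + json.dumps(schema)
                + "</script>")

    def seo_meta(seo_title: str, meta_desc: str, plugin: str = "yoast") -> dict:
        """Meta fields for the POST payload's "meta" key (Yoast or Rank Math)."""
        if plugin == "yoast":
            return {"_yoast_wpseo_title": seo_title,
                    "_yoast_wpseo_metadesc": meta_desc}
        return {"rank_math_title": seo_title,
                "rank_math_description": meta_desc}
    ```

    Append the schema block to the post content string and merge the meta dict into the payload under "meta" before POSTing.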

    How do I publish to WordPress without logging in?

    Use the WordPress REST API with Application Password authentication. Generate an application password in WP Admin → Users → Your Profile, then POST to /wp-json/wp/v2/posts with Basic Auth credentials. No plugin required — the REST API is built into WordPress core.

    Can Claude publish directly to WordPress?

    Yes — through the WordPress REST API. Claude generates content, a script assembles the API payload, and the POST call publishes it. This is how automated content pipelines work at scale. Always write to Notion first; WordPress is the distribution layer.

    Why is my WordPress REST API returning 403?

    Most likely a WAF (Web Application Firewall) blocking the request — common on WP Engine and SiteGround. Either route API calls through a proxy service with a whitelisted IP or whitelist your specific IP range in the hosting provider’s firewall settings.

  • Claude on GCP: Billing, IAM, and Quota Setup for Teams


    Claude AI · Tygart Media
    The three things teams get wrong: Using a shared GCP project for Claude and other workloads (makes cost attribution impossible), not requesting quota increases before launch (causes 429 errors at the worst time), and using overly broad IAM roles (security risk and audit problem). All three are fixable in an afternoon.

    Running Claude through Vertex AI on GCP is straightforward to set up for a solo developer. For a team deploying Claude in production, three infrastructure decisions matter significantly: project structure for billing, IAM configuration for access control, and quota management to avoid rate-limit failures. Here’s the setup that scales cleanly.

    Project Structure: One Project for Claude

    Create a dedicated GCP project for Claude workloads — separate from your main application project, your data pipeline project, and your development sandbox. This separation is the single most important decision for operational clarity. With a dedicated project you get: Claude API costs isolated on their own billing line, IAM permissions that only affect Claude access (not your entire infrastructure), quota limits and alerts scoped to Claude usage, and audit logs that only contain Claude-related activity.

    Naming convention: company-claude-prod for production, company-claude-dev for development. Keep them separate — dev workloads shouldn’t share quotas with production.

    IAM Configuration: Minimum Necessary Permissions

    The role that grants Claude API access through Vertex AI is roles/aiplatform.user. That’s the only role needed for model invocation and token counting. Don’t assign broader roles like roles/aiplatform.admin or roles/editor to service accounts that only need to call Claude.

    For team deployments, create one service account per application or environment — not one shared service account for everything. Example structure:

    Service Account                                    Role             Used By
    claude-prod-api@project.iam.gserviceaccount.com    aiplatform.user  Production app
    claude-dev-api@project.iam.gserviceaccount.com     aiplatform.user  Development
    claude-cowork@project.iam.gserviceaccount.com      aiplatform.user  Claude Code / Cowork

    If a service account is compromised, you rotate one key without affecting other applications. If a developer leaves, you disable their specific account without touching production credentials.
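    The structure above maps to two gcloud commands per service account. A sketch using the company-claude-prod project name from the naming convention earlier; substitute your own project and account names:

    ```shell
    # Create a dedicated service account for the production app
    gcloud iam service-accounts create claude-prod-api \
        --project=company-claude-prod \
        --display-name="Claude production API access"

    # Grant only the role needed to invoke models on Vertex AI
    gcloud projects add-iam-policy-binding company-claude-prod \
        --member="serviceAccount:claude-prod-api@company-claude-prod.iam.gserviceaccount.com" \
        --role="roles/aiplatform.user"
    ```

    Repeat for the dev and Cowork accounts so each has its own rotatable key.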

    Quota Management: Request Increases Before You Need Them

    Vertex AI Claude quotas are set conservatively by default. The default quota for most regions is enough for development and testing, but production workloads — especially automated pipelines running multiple requests per minute — will hit limits. The 429 error (Resource exhausted) at peak load is one of the most common production failure modes.

    Request quota increases before launch, not during an incident. Go to Cloud Console → IAM & Admin → Quotas, filter by “anthropic,” and request increases for the Claude models you’re deploying. Approval is typically same-day for standard business accounts. For the global endpoint, a good starting quota for a production team is 60 requests per minute for Sonnet 4.6 and 20 requests per minute for Opus 4.6.

    Budget Alerts: Know Before It’s a Problem

    Set a budget alert on your Claude GCP project before anything runs in production. Go to Billing → Budgets & Alerts, create a budget for the project, and set email alerts at 50%, 80%, and 100% of your expected monthly spend. Add a Pub/Sub notification if you want to automatically throttle or pause workloads when budget thresholds are hit.
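    The same thresholds can be scripted with gcloud rather than clicked through the console. A sketch, not a drop-in command: the billing account ID, amount, and project name are placeholders, and flag names should be checked against `gcloud billing budgets create --help` on your gcloud version:

    ```shell
    # Budget with alerts at 50%, 80%, and 100% of expected monthly spend
    gcloud billing budgets create \
        --billing-account=XXXXXX-XXXXXX-XXXXXX \
        --display-name="claude-prod-monthly" \
        --budget-amount=500USD \
        --filter-projects="projects/company-claude-prod" \
        --threshold-rule=percent=0.5 \
        --threshold-rule=percent=0.8 \
        --threshold-rule=percent=1.0
    ```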

    A Claude content pipeline running at unexpected volume can burn through budget quickly — especially with Opus 4.6 at $25/million output tokens. Budget alerts are the safety net that turns a potential billing surprise into a manageable alert.

    Cloud Logging: Keep the Audit Trail

    Vertex AI API calls are logged to Cloud Logging by default. For regulated industries, explicitly configure log retention to match your compliance requirements — the default 30-day retention may not be sufficient. For SOC 2 or HIPAA environments, export logs to Cloud Storage for long-term archival. The log entries include model called, project, timestamp, and token counts — enough for a complete audit trail without exposing prompt content.

    How do I set up billing for Claude on GCP?

    Create a dedicated GCP project for Claude workloads, set a budget alert before anything runs in production, and monitor spend at Billing → Budgets. Keeping Claude in its own project makes cost attribution clean and prevents unexpected spend from affecting other project budgets.

    What IAM role does Claude need on Vertex AI?

    The roles/aiplatform.user role is sufficient for model invocation and token counting. Use one service account per application or environment. Never assign broader roles like editor or aiplatform.admin to service accounts that only need to call Claude.

    How do I fix Claude 429 quota errors on Vertex AI?

    Go to Cloud Console → IAM & Admin → Quotas, filter by “anthropic,” and request a quota increase for the specific Claude model hitting limits. Request increases before production launch, not during an incident. Approvals are typically same-day for standard business accounts.

  • Claude Cowork MCP Setup: Connecting Notion, Gmail, and Google Drive


    Claude AI · Tygart Media
    What this connects: Notion, Gmail, Google Calendar, Google Drive — the four MCP servers most useful for Cowork tasks. Each connects through claude_desktop_config.json and authenticates once. After setup, Cowork tasks can read and write to these services automatically.

    Claude Cowork’s value multiplies significantly when it’s connected to the services where your work actually lives. A Cowork task with no MCP connections can only work with files on your local machine. A task connected to Notion, Gmail, and Google Calendar can read your priorities, check your schedule, triage your inbox, and write outputs back to your workspace — automatically. Here’s how to wire the connections.

    Where MCP Configuration Lives

    All MCP servers are configured in a single file: claude_desktop_config.json. On Windows, this is at %APPDATA%\Claude\claude_desktop_config.json. On macOS, it’s at ~/Library/Application Support/Claude/claude_desktop_config.json. Open it in any text editor. If it doesn’t exist yet, create it. Claude Desktop reads this file at launch — any changes require a restart.

    Connecting Notion

    Notion MCP gives Cowork tasks read and write access to your Notion workspace — fetch pages, create pages, query databases, and update records.

    Add to your claude_desktop_config.json:

    "mcpServers": {
      "notion": {
        "command": "npx",
        "args": ["-y", "@notionhq/notion-mcp-server"],
        "env": {"OPENAPI_MCP_HEADERS": "{"Authorization": "Bearer YOUR_NOTION_TOKEN", "Notion-Version": "2022-06-28"}"}
      }
    }

    Get your Notion API token from notion.so/my-integrations. Create an internal integration, copy the token, and add it to the config. Then share each Notion database or page you want Claude to access with that integration. Notion doesn't grant blanket workspace access; you grant it page by page.

    Connecting Gmail

    Gmail MCP lets Cowork tasks search threads, read emails, and create drafts. Setup requires a Google Cloud project with the Gmail API enabled and OAuth credentials configured.

    "gmail": {
      "command": "npx",
      "args": ["-y", "@googleapis/gmail-mcp"],
      "env": {"GMAIL_CREDENTIALS_PATH": "/path/to/credentials.json"}
    }

    The first run requires completing OAuth in a browser window. After that, the token refreshes automatically. Gmail MCP is read-heavy in most Cowork workflows, used primarily for triage and summary rather than bulk sending.

    Connecting Google Calendar

    Calendar MCP provides today’s events, upcoming meetings, and schedule context for briefing and planning tasks.

    "google-calendar": {
      "command": "npx",
      "args": ["-y", "@googleapis/calendar-mcp"],
      "env": {"GOOGLE_CREDENTIALS_PATH": "/path/to/credentials.json"}
    }

    If you’ve already set up Gmail MCP with Google OAuth credentials, Calendar MCP can reuse the same credentials file.

    Verifying Your Connections

    After updating the config and restarting Claude Desktop, open a new chat and ask: “What MCP servers do you have access to?” Claude will list the active connections. If a connection doesn’t appear, check the config file for JSON syntax errors — a single missing comma or bracket breaks the entire config. Use a JSON validator before restarting.
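    A small Python check catches those syntax errors before a restart. A sketch: the paths mirror the locations listed above, and the helper names are illustrative:

    ```python
    import json
    import os
    import pathlib
    import platform

    def config_path() -> pathlib.Path:
        """Default claude_desktop_config.json location for the current OS."""
        if platform.system() == "Windows":
            return (pathlib.Path(os.environ["APPDATA"]) / "Claude"
                    / "claude_desktop_config.json")
        return (pathlib.Path.home() / "Library" / "Application Support"
                / "Claude" / "claude_desktop_config.json")

    def list_mcp_servers(path) -> list:
        """Parse the config; raises json.JSONDecodeError on a syntax error."""
        cfg = json.loads(pathlib.Path(path).read_text(encoding="utf-8"))
        return sorted(cfg.get("mcpServers", {}))
    ```

    If this prints your server names cleanly, the config will load; if it raises, the error message points at the offending line and column.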

    For Cowork specifically: start a task session and ask Claude to fetch a specific Notion page or list today’s calendar events. A successful response confirms the MCP connection is working for scheduled tasks, not just interactive chat.

    Common Issues

    MCP server not showing up: JSON syntax error in config, or the npx package failed to install. Run the npx command manually in a terminal to check for errors.

    Notion pages returning empty: The integration hasn’t been granted access to that specific page. Go to the page in Notion, click the three-dot menu, and share it with your integration.

    Gmail authentication loop: The OAuth token expired or the credentials file path is wrong. Delete the token file and re-authenticate.

    How do I connect Notion to Claude Cowork?

    Add the Notion MCP server to claude_desktop_config.json with your Notion API token, restart Claude Desktop, and share the specific pages or databases you want Claude to access with your Notion integration.

    Can Claude Cowork read my Gmail?

    Yes, with Gmail MCP configured. It requires a Google Cloud project with the Gmail API enabled and OAuth credentials. Once set up, Cowork tasks can search, read, and draft emails in Gmail.


  • How to Build a Daily Briefing With Claude Cowork


    Claude AI · Tygart Media
    What this builds: A Cowork task that runs each morning, pulls context from Notion, checks your calendar and email, and delivers a structured daily briefing — without you opening anything. Estimated setup time: 90 minutes. Daily time saved: 20-30 minutes of morning context-gathering.

    One of the most practical Cowork automation setups is a daily briefing task — a scheduled agent run that assembles your morning context before you start work. Here’s exactly how to build it.

    What the Briefing Covers

    A well-designed daily briefing task pulls from 3-5 sources and returns a single structured summary. Typical sections: today’s calendar events (from Google Calendar MCP), open priority tasks (from Notion MCP), any overnight emails that need a response (from Gmail MCP), one or two metrics worth knowing (from whatever dashboard you track), and a suggested priority order for the day. The whole thing arrives as a Notion page or appears in a Cowork run log by the time you open your laptop.

    Step 1: Set Up Your MCP Connections

    The briefing task needs read access to the services it pulls from. In Claude Desktop settings, confirm you have active MCP connections for the services you want to include. At minimum: Notion (for tasks and project status) and Google Calendar (for today’s schedule). Gmail is optional but adds significant value if you get time-sensitive emails. Configure these in claude_desktop_config.json before building the task.

    Step 2: Write the Task Prompt

    The prompt is the core of the task. It needs to be specific about what to pull, how to structure the output, and where to write it. A working prompt structure:

    Daily Briefing Prompt Template:

    You are producing my daily morning briefing. Run these steps in order:

    1. Check my Google Calendar for today’s events. List all events with time, title, and any location or meeting link.
    2. Open my Notion [Priority Tasks database] and list any tasks marked P0 or P1 that are not yet complete.
    3. Check Gmail for any unread emails received in the last 12 hours that appear to need a response. List sender, subject, and one-sentence summary.
    4. Write the compiled briefing to a new Notion page titled “Daily Briefing — [today’s date]” under [your briefing parent page].

    Format the briefing with clear sections: Calendar, Priority Tasks, Email Review, Suggested First Action. Keep it scannable — bullet points, not paragraphs.

    Step 3: Create and Schedule the Task

    In Claude Desktop, open Cowork and create a new task. Paste your prompt. Set the schedule to daily at a time before you start work — 6:00 AM or 7:00 AM typically. Make sure Claude Desktop is configured to launch at startup on your machine so it’s running when the task fires. If your machine is off or sleeping when the task fires, it will be skipped — there’s no catch-up mechanism.

    Step 4: Test It Manually First

    Before relying on the scheduled run, trigger the task manually once. Verify it’s pulling from the right Notion database, writing to the correct parent page, and that the calendar and email integrations are connecting. Most first-run failures are MCP authentication issues — the MCP server needs to be authenticated with each service before the task can use it.

    Iteration: Making It Better Over Time

    The first briefing will be useful but imperfect. After a week of runs, refine the prompt based on what’s missing or what’s noise. Common refinements: add a “what’s overdue” check from Notion, filter email to only flag certain senders or subjects, add a weather check for field-based work, or include a one-line summary of the prior day’s Cowork run logs. Each iteration takes 5 minutes to update the prompt; the task runs better every week.

    Can Claude Cowork send me a daily briefing automatically?

    Yes — you build a Cowork task with the briefing prompt, connect it to your MCP sources (Notion, Google Calendar, Gmail), and schedule it to run each morning. The briefing appears in Notion before you start work. Claude Desktop must be running and your machine must be awake at the scheduled time.

    What MCP connections does a daily briefing task need?

    Minimum: Notion (for tasks) and Google Calendar (for schedule). Optional but valuable: Gmail (for overnight emails). All must be configured in claude_desktop_config.json and authenticated before the task can use them.


  • Claude vs Notion AI: Inside the Database vs Outside — What the Tests Actually Show


    Claude AI · Tygart Media · Tested March 2026
    The key distinction: Notion AI (with Claude Sonnet or Opus inside) has native semantic access to your entire workspace — it traverses database relationships, reads inline comments, and synthesizes across pages it was never explicitly pointed at. Claude connected via API has to be told exactly where to look. Same model, fundamentally different information access.

    There are now two ways to run Claude inside Notion: through Notion AI (where Anthropic’s models power Notion’s built-in AI features with workspace search enabled), and through direct Claude integration (where your Claude instance connects to Notion via the API or MCP). Most people assume these are equivalent — same Claude model, same output. They are not. The difference isn’t the model. It’s the context layer underneath it.

    What “Inside the Database” Actually Means

    When you use Notion AI with workspace search enabled, Claude (or another model) is operating with native Notion context. It can traverse relational links between databases the way a human would navigate a workspace — following a CRM record to its linked action items, pulling content pipeline data alongside revenue records, reading the inline comment threads that live on specific blocks. It doesn’t just retrieve documents; it understands the relationships between documents.

    When you connect Claude to Notion via the API, Claude receives whatever data you explicitly fetch and pass to it. It reads exactly what you give it, nothing more. A cross-database synthesis requires you to make multiple API calls, stitch the data together, and pass the combined result. You are the relationship layer; Claude is the reasoning layer on top of your assembly work.

    Real Test Results: The Same Task, Both Ways

    We ran a structured test in March 2026, asking multiple AI models inside Notion AI (with workspace search) to produce a complete client health summary across four databases simultaneously: Master CRM, WordPress Site Operations, Content Pipeline, and Revenue Pipeline. We then compared the results with what Claude via API alone could produce on the same client.

    The result was not close on the first run. Notion AI with Claude Sonnet 4.6 took approximately 35 seconds and returned:

    • Revenue Pipeline data ($2,000/month Closed Won)
    • CRM contact details with email and phone
    • WordPress ops: Health Score, post count, connection method, specific IPs
    • A cumulative content table (Pre-2026: 30, Jan: 529, Feb: 375, Mar: 164 = 1,098 total)
    • SEO performance comparison: Clicks +2,217%, SEO Value +3,028%, Keywords +271% (Dec 2025 vs Feb 2026)
    • 7 prioritized attention items with a strategic bottom-line summary

    Claude Opus 4.6 inside Notion earned what we graded S — executive intelligence tier. It opened with a strategic framing (“Overall Health: Needs Attention”), named all Notion sources it queried, built a full P0-P3 priority matrix with rationale, and surfaced findings none of the other models caught: a hardcoded phone number as the root cause of attribution gap, a missing contact form on the /contact-us/ page, and the exact date of each optimization action in the content workflow.

    The single finding that made the difference: Opus 4.6 inside Notion connected a 403 error from an SEO drift detector to a specific operational blind spot — and traced it back to a configuration issue that had been invisible because it required reading both a monitoring log and an infrastructure record simultaneously. Claude via API would have needed those two documents explicitly fetched and merged before it could reason across them.

    What Claude Inside Notion Can Do That External Claude Cannot

    Capability                                          Notion AI (Claude inside)   Claude via API/MCP
    Semantic traversal across linked databases          ✅ Native                    ❌ Manual fetch required
    Read inline comments and discussion threads         ✅ Yes                       ❌ Not via standard API
    Cross-reference dashboard data with page content    ✅ Automatic                 ❌ Requires explicit assembly
    Follow relational links without being told to       ✅ Yes                       ❌ Must specify each fetch
    Identify discrepancies between related records      ✅ Can catch stale data      ⚠ Only if you provide both records
    Access workspace search across all pages            ✅ Full semantic search      ⚠ API search is keyword-based
    Run without human assembly of context               ✅ Yes                       ❌ Requires orchestration layer

    What External Claude Does Better

    The inside-the-database advantage is real, but it’s not the whole story. Claude connected externally through the API or MCP has capabilities Notion AI cannot replicate:

    Taking actions. Notion AI can read and summarize. External Claude can read, reason, and then act — publish a WordPress post, update a Metricool schedule, send an email, write a file to GCP. Notion AI is fundamentally a read and summarize layer. External Claude connected to tools is an execution layer.

    Custom system prompts and instructions. External Claude sessions can be loaded with specific operational context, role definitions, and multi-step task chains. Notion AI’s model selection is relatively fixed — you pick the model, but you can’t deeply configure its behavior the way you can with a direct API call.

    Model routing and cost control. External Claude lets you route specific tasks to specific model tiers — Haiku for bulk classification, Sonnet for standard work, Opus for strategic synthesis. Notion AI doesn’t expose that level of routing control to the user.

    Automation and scheduling. External Claude runs in Cowork tasks, Cloud Run cron jobs, and triggered pipelines. Notion AI runs when a human opens a page and asks a question.
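    The model-routing point above can be sketched as a simple tier map. This is an illustrative sketch, not a library API: the task categories are hypothetical, and the model strings are the ones used elsewhere on this page.

```python
# Illustrative tiered model routing; task categories are hypothetical,
# model strings are the current ones referenced in this article.
ROUTES = {
    "bulk_classification": "claude-haiku-4-5",    # cheap, high-volume work
    "standard_work": "claude-sonnet-4-6",         # default operational tier
    "strategic_synthesis": "claude-opus-4-6",     # maximum reasoning
}

def route(task_type: str) -> str:
    """Pick a model tier for a task, falling back to the Sonnet default."""
    return ROUTES.get(task_type, "claude-sonnet-4-6")
```

    The point of centralizing this in one function is cost control: every external Claude call goes through the router, so spend per tier is a deliberate choice rather than whatever each script happened to hardcode.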

    The Architecture That Gets the Most From Both

    The most powerful setup is not a choice between them — it’s using both for what each does best. Notion AI with workspace search is the intelligence layer: the “eyes” that can synthesize across your entire knowledge base and surface what matters. External Claude is the execution layer: the “hands” that take action based on what the intelligence layer surfaces.

    Practically: run a Notion AI query with Opus 4.6 to get the full client health picture and identify the top 3 priorities. Then hand those priorities to external Claude (via Cowork or a direct API call) to execute: draft the emails, update the records, publish the content. The separation of concerns — Notion AI for global workspace intelligence, external Claude for structured action — is more powerful than either alone.

    One concrete implementation: a daily Cowork task that first calls the Notion MCP to fetch key database records, then passes that assembled context to Claude for action planning, then executes a task list. The fetch step approximates what Notion AI does natively, but you control exactly what gets assembled. For well-defined, repeating workflows, this is often sufficient. For exploratory synthesis (“give me the full picture across this client’s history”) where you don’t know in advance what’s relevant, Notion AI’s native traversal is materially better.
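    That fetch-then-plan-then-execute loop can be skeletonized with the three steps injected as callables. Everything here is a hypothetical sketch: the function names are invented, and in a real deployment `fetch` would wrap Notion MCP calls, `plan` would wrap a Claude call, and `execute` would dispatch to your tools.

```python
# Hypothetical skeleton of the daily Cowork-style pipeline described above:
# fetch Notion records, have Claude plan actions from them, then execute.
from typing import Callable

def daily_pipeline(
    fetch: Callable[[], list[dict]],          # e.g. Notion MCP record fetches
    plan: Callable[[list[dict]], list[str]],  # Claude turns context into tasks
    execute: Callable[[str], None],           # one handler per planned task
) -> list[str]:
    """Run fetch -> plan -> execute and return the planned task list."""
    records = fetch()
    tasks = plan(records)
    for task in tasks:
        execute(task)
    return tasks
```

    Keeping the steps injectable means the fetch list is explicit and reviewable, which is exactly the trade-off described above: you lose native traversal, you gain control over what context gets assembled.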

    Model Performance Inside Notion AI (March 2026 Test)

    | Model | Grade | Speed | Best For |
    | --- | --- | --- | --- |
    | Claude Opus 4.6 | S | ~60s | Executive summaries, strategic framing, P0-P3 priority matrices. Found unique issues no other model caught. |
    | Claude Sonnet 4.6 | A+ | ~35s | Operational detail, SEO metrics, granular data presentation. Best for recurring ops reports. |
    | GPT-5.2 | A+ | ~90s | Deepest data mining. Named individuals, deadlines, specific IDs. Slowest but most thorough. |
    | Gemini 3.1 Pro | A | ~25s | Fastest response. Strong all-rounder. Best for quick status checks. |
    | GPT-5.4 | A | ~40s | Clean structured output. Good first-pass default for routine checks. |

    The multi-model finding: no single model caught everything. Running the same query through three models and distilling their unique findings produced materially better intelligence than any single model alone. Opus 4.6 found the hardcoded phone number and missing contact form. GPT-5.2 found the CRM coverage gap and named specific people with deadlines. Sonnet 4.6 built the clearest data tables. Together: a complete operational picture.

    Is Notion AI the same as using Claude directly?

    No. Both can use Claude models, but Notion AI with workspace search has native semantic access to your entire Notion workspace — it traverses linked databases and reads relationships automatically. External Claude via API only sees data you explicitly fetch and pass to it. Same model, different context layer.

    Which is better: Claude inside Notion or Claude connected via API?

    Depends on the task. Notion AI (Claude inside) is better for cross-database synthesis and global workspace intelligence — it can see everything without you assembling it. External Claude is better for taking action — publishing, updating, scheduling, automating. The most powerful setup uses both: Notion AI for intelligence, external Claude for execution.

    Can Claude via API replace Notion AI?

    Partially. The Notion MCP lets external Claude fetch database records, but it still requires you to specify what to fetch. Notion AI’s native traversal follows relationships automatically without explicit instruction. For exploratory synthesis across an unknown-in-advance data landscape, Notion AI’s native context is materially better than assembled API context.


  • Running Claude Inside a GCP VM: The Fortress Architecture Explained

    Running Claude Inside a GCP VM: The Fortress Architecture Explained

    Claude AI · Tygart Media
    What this architecture solves: Claude API calls made from inside a private GCP VPC never touch the public internet. Your data, prompts, and outputs stay within your cloud perimeter. This is the standard for regulated industries and the right model for any organization where data sovereignty matters.

    Most Claude API usage works the same way: your application makes a call to api.anthropic.com across the public internet. For consumer apps and developer projects, that’s fine. For enterprises handling sensitive data — healthcare, finance, legal, government — “fine” isn’t the bar. The Fortress Architecture runs Claude inference through Google Cloud’s Vertex AI from inside a private VPC, so sensitive data never crosses a public network boundary.

    The Core Architecture

    Instead of calling the Anthropic API directly, your application calls Claude through Vertex AI from within a GCP Compute Engine VM or Cloud Run service inside your VPC. VPC Service Controls create a security perimeter around your Vertex AI resource. Requests to Claude stay inside that perimeter — they originate from your private network, route through Google’s internal infrastructure to Vertex AI, and return inside the same boundary.

    From a data flow perspective: your application → private VPC → Vertex AI API (Google internal) → Claude model inference → back through VPC → your application. No public internet hop at any point.
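    In application code, the in-perimeter call looks like an ordinary Messages request made through the Anthropic SDK's Vertex client. The sketch below separates the pure payload assembly (testable anywhere) from the network call, which is shown in comments; project ID, region, and the model string are placeholders to verify against your own Vertex AI project.

```python
# Sketch of the in-perimeter Claude call, assuming the Anthropic SDK's
# AnthropicVertex client and application-default credentials on the VM's
# service account. Model string and IDs below are placeholders.
def build_request(prompt: str, model: str = "claude-sonnet-4-6") -> dict:
    """Assemble a Messages API payload for a Vertex-routed Claude call."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# Inside the VM, the call itself never leaves the VPC perimeter:
#   from anthropic import AnthropicVertex
#   client = AnthropicVertex(project_id="your-project-id", region="us-east5")
#   reply = client.messages.create(**build_request("Summarize the audit log"))
```

    Note that the code is identical to what you would write against the public API except for the client class; the security boundary lives entirely in the network and IAM configuration around it.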

    Why a VM Instead of a Direct API Call

    Running Claude through a VM — rather than a developer’s laptop or a serverless function with public internet access — gives you several properties that matter at enterprise scale:

    Consistent identity. All Claude calls originate from a known service account with specific IAM permissions. There’s no risk of a developer accidentally using personal credentials or exposing an API key.

    Network isolation. The VM sits inside a VPC with firewall rules. You control exactly what it can reach and what can reach it. No lateral movement from a compromised endpoint reaches your Claude integration.

    Audit trail. Every Claude API call through Vertex AI generates Cloud Logging entries. You get a complete, centrally retained record of what was asked and when — essential for compliance in healthcare and financial services. If you need the record to be tamper-proof, lock the log bucket's retention policy.

    Centralized cost control. All AI spend flows through one GCP project with budget alerts and quotas. No shadow AI spending from individual developers using personal API keys.

    Implementation Pattern

    The standard setup: a Cloud Run service or Compute Engine VM runs your Claude-connected application code inside a VPC. A service account with roles/aiplatform.user is the only identity that can call Vertex AI. VPC Service Controls restrict Vertex AI access to requests originating from your perimeter. Cloud Logging captures all API activity. Budget alerts on the GCP project catch unexpected usage spikes.

    The application code itself is straightforward — the Anthropic Python or Node.js SDK with the Vertex AI configuration flag set. The security comes from the infrastructure layer, not the application layer.
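    A condensed provisioning sketch of that pattern, under stated assumptions: the project ID and service account name are placeholders, and the VPC Service Controls perimeter (which requires an organization-level access policy) is omitted here.

```shell
# Hypothetical provisioning sketch; IDs and names are placeholders.
PROJECT_ID=your-project-id

# Dedicated identity for all Claude calls
gcloud iam service-accounts create claude-runner \
  --project="$PROJECT_ID" --display-name="Claude Vertex runner"

# Grant only this identity the right to call Vertex AI
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:claude-runner@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

# Enable the Vertex AI API (turn on Data Access audit logs separately
# if you need request-level records, not just admin activity)
gcloud services enable aiplatform.googleapis.com --project="$PROJECT_ID"
```
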

    When This Architecture Is Worth the Setup

    For a solo developer or small startup, this is overkill. The setup overhead — VPC configuration, service accounts, VPC Service Controls, Cloud Logging — is a full day of infrastructure work. For organizations where a data breach involving patient records, financial data, or privileged legal communications would be catastrophic, that day of setup is a trivial cost against the risk.

    The categories where this architecture is essentially required: HIPAA-covered healthcare applications, financial services with SOC 2 or PCI requirements, legal services handling privileged communications, government contractors, and any application processing PII at scale.

    The Real Operational Benefit Beyond Security

    The compliance story is obvious. The less-discussed benefit is operational consistency. When all Claude usage flows through a single controlled channel, you get uniform behavior (same model version, same parameters, same rate limits), centralized prompt management (update the system prompt in one place, not in every developer’s local config), and predictable costs. The Fortress Architecture is as much an operational discipline as it is a security model. See The Fortress Architecture: Full Guide for the complete technical breakdown and Claude on Vertex AI: Why Route Through GCP for the Vertex AI setup.

    Can you run Claude inside a private GCP VPC?

    Yes — through Vertex AI with VPC Service Controls. Claude requests originate inside your private network perimeter and never cross the public internet. This is the standard architecture for regulated industry deployments.

    Is Claude HIPAA compliant on GCP?

    Vertex AI is available under Google Cloud’s HIPAA BAA. Running Claude through Vertex AI inside a VPC with appropriate controls can support HIPAA-compliant architectures. Consult your compliance team on the full requirements for your specific application.

    Why run Claude on a GCP VM instead of calling the API directly?

    A VM inside a VPC gives you network isolation, a consistent service account identity, complete audit logging, centralized cost control, and the ability to apply VPC Service Controls. For enterprise deployments, this is the correct architecture — not a development shortcut.

  • Claude Release History: Every Model From Claude 1 to Claude 4.6

    Claude Release History: Every Model From Claude 1 to Claude 4.6

    Claude AI · Tygart Media · Last Updated April 2026
    Current models (April 2026): Claude Opus 4.6 and Claude Sonnet 4.6 — released February 2026. Claude Haiku 4.5 — October 2025. Original Claude 4.0 models deprecated, retiring June 15, 2026.

    Anthropic has released over a dozen Claude models since the first public launch in March 2023. This page is the complete record — every model, its release date, the key capability it introduced, and its current status. It’s updated when Anthropic ships new releases.

    The Complete Claude Model Timeline

    | Model | Released | Key Capability | Status |
    | --- | --- | --- | --- |
    | Claude 1 | March 2023 | First public release. Constitutional AI alignment; ~9K context at launch. | Retired |
    | Claude 1.3 | July 2023 | Improved reasoning and code generation. | Retired |
    | Claude 2 | July 2023 | Context expanded to 100K, stronger coding and analysis. | Retired |
    | Claude 2.1 | November 2023 | Reduced hallucination rate, tool use support added. | Retired |
    | Claude 3 Haiku | March 2024 | Fastest, cheapest Claude 3 tier. Near-instant responses. | Deprecated |
    | Claude 3 Sonnet | March 2024 | Balanced performance/cost. First strong coding model. | Deprecated |
    | Claude 3 Opus | March 2024 | Top benchmark scores at launch. Best reasoning of the generation. | Deprecated |
    | Claude 3.5 Sonnet | June 2024 | Outperformed prior Opus on most benchmarks at Sonnet price. Landmark release. | Deprecated |
    | Claude 3.5 Haiku | October 2024 | Speed/cost tier for Claude 3.5 generation. | Deprecated |
    | Claude 3.5 Sonnet v2 | October 2024 | Computer use capability introduced. Improved coding. | Deprecated |
    | Claude 3.7 Sonnet | February 2025 | Extended thinking. First Claude with explicit chain-of-thought reasoning. | Deprecated |
    | Claude Sonnet 4 | May 2025 | Claude 4 generation launch. Major coding gains, SWE-bench leadership. | ⚠ Retiring June 15, 2026 |
    | Claude Opus 4 | May 2025 | Maximum capability in Claude 4 generation at launch. | ⚠ Retiring June 15, 2026 |
    | Claude Haiku 4.5 | October 2025 | Speed/cost tier for 4.x generation. 200K context. | ✅ Current |
    | Claude Opus 4.6 | February 5, 2026 | 1M token context window (beta then GA). Improved long-horizon reasoning. | ✅ Current flagship |
    | Claude Sonnet 4.6 | February 17, 2026 | Near-Opus performance. 1M token context. Dramatically improved computer use. | ✅ Current default |

    The Generational Leaps That Mattered Most

    Claude 3.5 Sonnet (June 2024) — The Benchmark Flip

    This was the release that established Claude as a serious competitor to GPT-4. Claude 3.5 Sonnet outperformed Claude 3 Opus on most benchmarks at a fraction of the price — the first time a Sonnet-tier model beat the prior generation’s flagship. It also introduced Artifacts, the interactive output canvas that became a defining Claude feature. Every generation since has followed this pattern: new Sonnet outperforms prior Opus.

    Claude 3.7 Sonnet (February 2025) — Extended Thinking

    Extended thinking gave Claude an explicit reasoning layer before responding — the model could work through a problem step-by-step before committing to an answer. This was Anthropic’s answer to OpenAI’s o1 and marked the beginning of “reasoning models” as a mainstream concept in Claude’s lineup.

    Claude Sonnet 4 (May 2025) — Coding Leadership

    The Claude 4 launch pushed Claude to the top of SWE-bench Verified, the real-world software engineering benchmark that matters most to developers. Claude Code launched alongside it and reached $1B in annualized revenue by November 2025 — one of the fastest-growing developer tools in history.

    Claude Sonnet 4.6 (February 2026) — Computer Use at Scale

    The 4.6 generation’s most significant practical advance was dramatically improved computer use — Claude’s ability to navigate browsers, fill forms, click through interfaces, and operate software autonomously. Combined with the 1M token context window reaching general availability, this made Claude genuinely useful for long-horizon agentic tasks that previously required constant human intervention.

    What Comes Next

    Claude 5 is expected Q2-Q3 2026. No official announcement as of April 2026. The pattern suggests Claude 5 Sonnet will outperform current Opus 4.6 at lower cost — consistent with every prior generation transition. See Claude 5 Release Date: What We Know.

    For current API strings and deprecation deadlines, see the Current Claude Model Version Tracker.

    When was Claude first released?

    Claude 1 launched publicly in March 2023. Anthropic was founded in 2021 by former OpenAI researchers, and Claude was in limited testing before the public launch.

    How many Claude models are there?

    As of April 2026, Anthropic has released approximately 16 public model versions across 5 generations (Claude 1 through Claude 4.6). Three models are currently active: Opus 4.6, Sonnet 4.6, and Haiku 4.5.

    What was the best Claude model ever released?

    Depends on how you measure it. Claude Opus 4.6 is the current flagship for maximum reasoning capability, while Claude Sonnet 4.6 (February 2026) posts the strongest practical benchmark results in the lineup — on SWE-bench Verified it scores 79.6%, among the highest of any model at its release.

  • Claude Updates April 2026: Claude 4 Deprecated, Cowork Live, 1M Context & More

    Claude Updates April 2026: Claude 4 Deprecated, Cowork Live, 1M Context & More

    Claude AI · Tygart Media · Updated April 2026
    This month’s biggest changes: Claude Sonnet 4 and Opus 4 (original 4.0 models) deprecated — retiring June 15, 2026. Cowork generally available on macOS and Windows. New plugin marketplace. Advisor tool in public beta. Computer use added to Cowork for Pro/Max users.

    Anthropic shipped a significant number of product updates in April 2026. This digest covers everything that changed — model deprecations, Cowork updates, Claude Code releases, and API additions — in one place. Bookmark this and check the Current Claude Model Tracker for the latest model strings.

    Model Changes

    Claude 4.0 Deprecation — Action Required by June 15

    Anthropic announced the deprecation of claude-sonnet-4-20250514 and claude-opus-4-20250514 — the original Claude 4.0 model versions from May 2025. Both retire from the Anthropic API on June 15, 2026. If you have either string in production code, migrate to claude-sonnet-4-6 and claude-opus-4-6 respectively. Full migration guide: Claude 4 Deprecation: What to Migrate To.
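    For codebases with the deprecated strings scattered across configs, a one-line lookup keeps the migration mechanical. The mapping below is taken directly from the deprecation notice above; verify it against Anthropic's migration guide before shipping.

```python
# Retiring Claude 4.0 strings -> their 4.6 replacements, per the
# June 15, 2026 deprecation notice (verify against Anthropic's guide).
MIGRATIONS = {
    "claude-sonnet-4-20250514": "claude-sonnet-4-6",
    "claude-opus-4-20250514": "claude-opus-4-6",
}

def migrate_model(model: str) -> str:
    """Return the replacement for a deprecated model string, else unchanged."""
    return MIGRATIONS.get(model, model)
```
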

    1M Token Context Window — Now Generally Available

    The 1 million token context window for Claude Opus 4.6 and Claude Sonnet 4.6 is now generally available at standard pricing with no long-context surcharge. Previously in beta, this window supports approximately 750,000 words or about 2,500 pages of text in a single session. Also available on Vertex AI for both models.
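    The word and page figures above follow from common rules of thumb (~0.75 English words per token, ~300 words per page):

```python
# Back-of-envelope check of the 1M-context figures quoted above.
tokens = 1_000_000
words = int(tokens * 0.75)   # ~0.75 English words per token (rule of thumb)
pages = words // 300         # ~300 words per printed page (rule of thumb)
print(words, pages)          # 750000 2500
```
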

    Cowork Updates

    Cowork Generally Available

    Claude Cowork reached general availability on macOS and Windows via Claude Desktop this month, exiting research preview. The GA release added expanded usage analytics, OpenTelemetry support for monitoring Cowork activity, and role-based access controls for Enterprise plans so admins can customize which Claude capabilities each team group can access.

    Computer Use in Cowork

    Pro and Max plan users can now give Claude access to computer use within Cowork — meaning Claude can open files, run dev tools, navigate browsers, point, click, and interact with what’s on screen to complete tasks autonomously. No setup required for Pro/Max users. This makes Cowork’s Dispatch feature substantially more capable, letting Claude take multi-step actions on your computer while you’re away.

    Scheduled and Recurring Tasks

    Cowork now supports creating and scheduling both recurring and on-demand tasks from within the app. Previously this required configuration outside the main interface. A new Customize section in Claude Desktop groups skills, plugins, and connectors in one place.

    Plugin Marketplace

    Anthropic launched a new plugin marketplace for Team and Enterprise plans with admin controls for managing which plugins are available to which users. Enterprise admins can approve, restrict, or block specific plugins org-wide.

    Claude Code Updates

    Vertex AI Setup Wizard

    Claude Code v2.1.98 and later include a /setup-vertex wizard that automates Google Cloud Vertex AI configuration — project selection, region, model pinning — without manually setting environment variables. Run claude --version to check if you’re on a supported version. Full setup guide: How to Run Claude Code on Vertex AI.

    Advisor Tool — Public Beta

    The Anthropic API now supports a public beta advisor tool (beta header: advisor-tool-2026-03-01). The pattern: pair a faster executor model with a higher-intelligence advisor model that provides strategic guidance mid-generation. Long-horizon agentic workloads get close to the quality of running the advisor model alone, at executor-model cost. Useful for tasks where you want Opus-level reasoning with Sonnet-level speed on the bulk of token generation.
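    The Anthropic API conventionally stacks beta features in a single comma-separated anthropic-beta request header, so a tiny helper keeps opt-ins tidy. The advisor header value is copied from the announcement above; how the advisor model itself is specified in the request body is not covered here.

```python
def beta_headers(*features: str) -> dict[str, str]:
    """Build the anthropic-beta header; multiple betas are comma-separated."""
    return {"anthropic-beta": ",".join(features)}

# Sketch of passing the header on a Messages call via the SDK's
# extra_headers escape hatch (request body details omitted):
#   client.messages.create(..., extra_headers=beta_headers("advisor-tool-2026-03-01"))
```
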

    Worktree Switching and PreCompact Hooks

    Claude Code added a path parameter to the EnterWorktree tool for switching into existing worktrees, PreCompact hook support (hooks can now block compaction by returning a decision block), and background monitor support for plugins via a top-level monitors manifest key.

    Interactive Connectors in Claude Mobile

    The Claude mobile app can now connect to fully interactive apps — live charts, diagrams, and shareable assets rendered visually inside conversations. Pull up live data, sketch diagrams, and build assets directly in the mobile chat interface.

    What to Watch in May 2026

    The June 15 deprecation deadline for Claude 4.0 models is the immediate action item for any team running the original 4.0 model strings. Claude 5 remains unannounced but expected Q2-Q3 2026 based on release cadence — see Claude 5 Release Date: What We Know. The advisor tool beta is worth testing for any team running complex agentic pipelines.

    What changed in Claude in April 2026?

    Key April 2026 changes: Claude 4.0 models deprecated (retiring June 15), Cowork reached general availability with computer use for Pro/Max users, 1M token context window became generally available, plugin marketplace launched, and the Vertex AI setup wizard shipped in Claude Code.

    What is the Claude Cowork update in April 2026?

    Cowork reached general availability with computer use for Pro/Max users, scheduled recurring tasks, a new plugin marketplace for Team/Enterprise, and enterprise role-based access controls. Previously in research preview.

  • How to Run Claude Code on Vertex AI Using Your GCP Credits

    How to Run Claude Code on Vertex AI Using Your GCP Credits

    Claude AI · Tygart Media
    What this sets up: Claude Code running through your Google Cloud account instead of the Anthropic API. Same models, same capabilities — billed to GCP. New GCP accounts can run this for free using $300 in signup credits.

    Claude Code is Anthropic’s terminal-native coding agent. By default it bills through your Anthropic account. But you can route it entirely through Google Cloud’s Vertex AI — meaning it charges your GCP account instead, and you can use existing GCP credits, startup credits, or free trial credits to run it at no incremental cost. Here’s the exact setup.

    What You Need Before Starting

    A Google Cloud account with a project created. Vertex AI API enabled on that project. Claude models requested and approved in Vertex AI Model Garden. Claude Code installed (npm install -g @anthropic-ai/claude-code). The gcloud CLI installed and authenticated. That’s it — no Anthropic API key required once this is configured.

    Step 1: Enable Vertex AI and Request Claude Model Access

    In the Google Cloud Console, go to Vertex AI > Model Garden and search for “Claude.” Request access to at least Claude Sonnet 4.6 (the primary Claude Code model) and Claude Haiku 4.5 (used for lightweight operations). Without Haiku, Claude Code will use Sonnet for everything — slower and more expensive for simple tasks. Enable Opus 4.6 as well if you need maximum capability for complex tasks.

    Model access approval is typically instant for most GCP accounts.

    Step 2: Authenticate with Google Cloud

    Run both commands below — the first authenticates your user account, the second sets application default credentials that Claude Code will pick up automatically:

    gcloud auth login
    gcloud auth application-default login

    Set your project: gcloud config set project YOUR-PROJECT-ID

    Enable the Vertex AI API: gcloud services enable aiplatform.googleapis.com

    Step 3: Configure Claude Code to Use Vertex AI

    Set these environment variables. On macOS/Linux, add them to your ~/.zshrc or ~/.bashrc. On Windows, use PowerShell’s [System.Environment]::SetEnvironmentVariable at the User level so they persist across sessions.

    macOS / Linux:
    export CLAUDE_CODE_USE_VERTEX=1
    export CLOUD_ML_REGION=global
    export ANTHROPIC_VERTEX_PROJECT_ID=your-project-id
    export ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-6
    export ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-haiku-4-5@20251001
    Windows (PowerShell — run once, persists across sessions):
    [System.Environment]::SetEnvironmentVariable("CLAUDE_CODE_USE_VERTEX","1","User")
    [System.Environment]::SetEnvironmentVariable("CLOUD_ML_REGION","global","User")
    [System.Environment]::SetEnvironmentVariable("ANTHROPIC_VERTEX_PROJECT_ID","your-project-id","User")
    [System.Environment]::SetEnvironmentVariable("ANTHROPIC_DEFAULT_SONNET_MODEL","claude-sonnet-4-6","User")
    [System.Environment]::SetEnvironmentVariable("ANTHROPIC_DEFAULT_HAIKU_MODEL","claude-haiku-4-5@20251001","User")

    Step 4: Verify the Setup

    Launch Claude Code and run /status. You should see API provider: Google Vertex AI and your GCP project ID. If you see the Anthropic API provider instead, your environment variables haven’t loaded — restart your terminal and try again.

    Step 5: Use the New Wizard (Claude Code v2.1.98+)

    If you’re on Claude Code version 2.1.98 or later, you can skip manual environment variable setup. Run /setup-vertex inside Claude Code and the wizard walks you through project selection, region, and model pinning automatically. Run claude --version to check your version first.

    Region Selection: Global vs Regional Endpoints

    Use CLOUD_ML_REGION=global unless you have specific compliance reasons to pin to a region. Global endpoints get the latest models first, have better availability, and don’t incur the 10% regional pricing premium. If you need data residency in a specific geography, use us-east5, us-central1, or europe-west1 — but verify your target Claude models are available in that region first, as not all models are available in all regions.

    Model Pinning for Teams

    If you’re deploying Claude Code to multiple team members, pin specific model versions rather than using aliases. Model aliases like “sonnet” resolve to the latest version, which may not be enabled in your Vertex AI project when Anthropic ships an update. Pinning prevents silent failures on update day:

    export ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-6
    export ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-haiku-4-5@20251001

    Common Error: 429 Resource Exhausted

    If you see 429 errors after setup, your project’s Vertex AI quota for Claude models needs to be increased. Go to Cloud Console > IAM & Admin > Quotas, filter by “anthropic,” and request an increase for the models you’re using. Approvals are typically fast for standard business accounts.
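    While a quota increase is pending, client-side backoff keeps pipelines alive through intermittent 429s. This is a generic sketch: QuotaExceeded is a stand-in class, so substitute the actual exception your SDK raises for RESOURCE_EXHAUSTED responses.

```python
import random
import time

class QuotaExceeded(Exception):
    """Stand-in for a 429 RESOURCE_EXHAUSTED response from Vertex AI."""

def call_with_backoff(call, retries=5, base=1.0, sleep=time.sleep):
    """Retry a Claude call on quota errors with exponential backoff + jitter."""
    for attempt in range(retries):
        try:
            return call()
        except QuotaExceeded:
            if attempt == retries - 1:
                raise  # out of retries; surface the quota error
            sleep(base * (2 ** attempt) + random.uniform(0, 0.5))
```

    The jitter matters in team settings: without it, several developers' retries synchronize and hammer the quota in lockstep.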

    Can I run Claude Code on Vertex AI for free?

    Yes, if you have unused GCP credits. New Google Cloud accounts receive $300 in free credits. Free-trial and startup-program credits apply to Claude usage through Vertex AI, and committed use discounts can reduce the rate further.

    Do I need an Anthropic API key to use Claude Code on Vertex AI?

    No. When configured for Vertex AI, Claude Code authenticates through your Google Cloud credentials (gcloud). No Anthropic API key is needed or used.

    Is Claude Code on Vertex AI slower than the direct Anthropic API?

    In practice, latency is comparable. The global endpoint routes dynamically and generally performs well. Regional endpoints may add slight latency depending on your geographic distance from the selected region.