Category: Anthropic

News, analysis, and profiles covering Anthropic the company and its team.

  • Anthropic at Scale: 5 Gigawatts, $30B Revenue Run Rate, and What the Infrastructure Bet Means


    Three data points published in the last two weeks of April 2026 define the scale at which Anthropic is now operating: a 5-gigawatt compute capacity commitment from Amazon announced April 20, a disclosed $30 billion annual revenue run rate (up from $9 billion at the end of 2025), and a customer base of more than 1,000 enterprises spending over $1 million per year. Taken together, they describe a company that has crossed the threshold from frontier AI lab to large-scale enterprise infrastructure provider.

    The Amazon Compute Commitment

    Five gigawatts of committed compute capacity is a number that requires context to land properly. For reference, a large data center campus typically consumes 100–500 megawatts. Five gigawatts is the equivalent of 10–50 large data center campuses worth of compute, committed to a single AI company. This is infrastructure at a scale that was historically reserved for hyperscalers building general-purpose cloud platforms — not AI model providers.

    The Amazon partnership is part of a broader compute story that also includes Google and Broadcom’s multi-gigawatt TPU partnership (announced April 6, with capacity launching in 2027). Anthropic is not building this infrastructure itself — it’s securing committed capacity from the two largest cloud providers simultaneously, which is a different and arguably more capital-efficient strategy than building proprietary data centers.

    Revenue: $9B to $30B in One Quarter

The jump from $9 billion to $30 billion annualized run rate between end of 2025 and April 2026 is the most striking number in the disclosure. That’s not organic growth — that’s a step change that implies either a major enterprise contract cohort closing in Q1 2026, the Cowork and Claude Code adoption curves hitting inflection simultaneously, or both. The 1,000+ customers at $1 million+/year figure is consistent with enterprise adoption at scale: at a $1 million average, 1,000 customers represents at least $1 billion in ARR from that cohort alone.

    For context on what $30 billion run rate means competitively: OpenAI disclosed approximately $3.7 billion in annualized revenue in mid-2024. If Anthropic’s figure is accurate and current, it suggests the competitive landscape has shifted more dramatically than most public coverage has reflected.

    What This Means for Enterprise Buyers

Enterprise procurement teams evaluating AI vendors weigh financial stability heavily. A vendor that might not exist in 18 months is a vendor you don’t build critical workflows on. The combination of $30 billion run rate, 5 gigawatts of committed compute, and 1,000+ million-dollar customers removes the financial-stability objection from the Anthropic procurement conversation in a way it could not a year ago.

    The Raj Narasimhan board appointment (April 14) is a governance signal in the same direction. Board composition at this revenue scale shapes how enterprise legal and compliance teams assess vendor risk. A mature board with enterprise-credible governance is a procurement unlock, not just a PR announcement.

    The Capacity Question

    The Google/Broadcom TPU capacity doesn’t launch until 2027. The Amazon commitment is a forward contract, not immediately available infrastructure. This means Anthropic is building compute capacity commitments ahead of demand — the right bet if the revenue trajectory continues, a costly overcommit if it doesn’t. The 2027 capacity launch timing will be worth watching against the actual demand curve that develops over the next 12 months.

    Source: Anthropic News

  • Claude Code Is Shipping 2–3 Releases Per Day — What the v2.1 Cadence Means for Engineering Teams


Between April 15 and April 29, 2026, the Claude Code team shipped releases from v2.1.89 to v2.1.123 — 34 version increments in 14 days, or roughly 2–3 production releases per day. For an agentic coding tool that engineering teams run in their daily development workflow, this release cadence is worth understanding, both for what it signals about the product’s development velocity and for the practical implications of staying current.

    What’s Driving the Cadence

    The v2.1 series is where Claude Code’s parallel agents architecture is being built out. The desktop redesign for parallel agents shipped on April 14, and the v2.1 releases since then represent the iterative work of making parallel agent workflows — running multiple agents simultaneously from a single workspace — stable and usable at production quality. Rapid iteration on a new architectural feature explains the compressed release schedule better than any other factor.

    The new onboarding guide for Claude Code teams, published April 28 on code.claude.com, is a related signal. Documentation for team-scale adoption typically follows (not precedes) the stability work that makes team-scale adoption advisable. Publishing the onboarding guide now suggests the team considers the core parallel agents architecture stable enough for broader engineering team adoption.

    Parallel Agents: The Architecture Change That Matters

    The April 14 desktop redesign for parallel agents is the most significant Claude Code architectural change of the quarter. Previously, Claude Code operated as a single-agent tool — one active task at a time per workspace. The parallel agents redesign allows developers to run multiple agents simultaneously, each working on independent tasks within the same workspace, with Claude coordinating between them.

    The practical applications are significant: running tests while implementing a feature, refactoring one module while debugging another, generating documentation in parallel with code review. Tasks that previously required sequential attention can now run concurrently, compressing the time from specification to working code.

    Implications for Engineering Teams Evaluating Adoption

    The combination of the new onboarding guide and the parallel agents architecture makes this the right moment for engineering teams that have been evaluating Claude Code to make a decision. The tool has moved from “impressive demo” to “documented team workflow” with the April 28 guide, and the parallel agents capability meaningfully changes the productivity math for teams doing complex, multi-threaded development work.

For teams already using Claude Code, staying current with the v2.1 series matters more than it did in earlier versions. The 2–3 daily releases aren’t cosmetic — they’re iterating on the parallel agents infrastructure that the most powerful new workflows depend on. Check the changelog at code.claude.com/docs/en/changelog before major projects to ensure you’re running a recent build.

    Source: Claude Code Changelog | GitHub Releases

  • Claude Mythos Preview and Project Glasswing: Anthropic’s Bet on AI-Powered Cyber Defense


    On April 7, 2026, Anthropic published the Claude Mythos Preview to red.anthropic.com — its dedicated AI safety and security research channel. Mythos is described as a general-purpose model with breakthrough cybersecurity capability, anchoring a coordinated initiative called Project Glasswing aimed at reinforcing global cyber defenses using AI. It is the most significant security-focused model capability announcement Anthropic has made to date.

    What Mythos Is

    Mythos is not a separate product in the traditional sense — it’s a capability preview, published through Anthropic’s red team and security research channel rather than through the main product announcement pipeline. The “preview” framing is deliberate: Anthropic is signaling a new capability frontier to the security research community before making it broadly available, which is standard practice for capabilities with significant dual-use potential.

    The “breakthrough cybersecurity capability” claim is notable because Anthropic has historically been conservative about capability claims. Publishing on red.anthropic.com — rather than anthropic.com/news — also signals that this is targeted at a security-professional audience, not a general consumer or enterprise announcement.

    Project Glasswing

    Project Glasswing is the coordinated effort that Mythos anchors. The stated mission is reinforcing world cyber defenses — a framing that positions Mythos explicitly as a defensive capability rather than an offensive one, which matters enormously in how it will be received by governments, enterprise security teams, and the security research community.

    The name “Glasswing” references the glasswing butterfly — a species known for its transparent wings, which confer camouflage by blending into the environment. The metaphor maps cleanly onto defensive security work: visibility and transparency as the mechanism of protection, not opacity or force.

    Context: A Year of Security Work

Mythos and Glasswing don’t come from nowhere. Anthropic’s security research track in 2026 has been unusually active: collaboration on Firefox CVE-2026-2796 in March, LLM-discovered zero-days published in February, and research running AI agents on realistic cyber ranges in January — all documented on red.anthropic.com. Mythos is the capstone of a year-long research buildout in applied cybersecurity, not a pivot from Anthropic’s core safety work.

    For enterprise security teams evaluating AI vendors, this track record is a meaningful differentiator. Anthropic is now the only frontier AI lab with a documented, published history of responsible vulnerability disclosure collaboration and a dedicated security research publication channel. That institutional credibility matters when procurement decisions involve sensitive security workflows.

    What to Watch

    The Mythos Preview is the beginning of a story, not the end of one. Watch red.anthropic.com for the full Glasswing rollout cadence — what specific defensive capabilities are being published, what the access model looks like for security researchers, and whether government or critical infrastructure partnerships accompany the broader release. The preview framing implies a production release is coming. The timeline and access model will define how significant Glasswing becomes as a competitive differentiator.

    Source: red.anthropic.com — Claude Mythos Preview

  • Claude Opus 4.7: 3× Vision Resolution, Task Budgets, and the xhigh Effort Level Explained


    Anthropic released Claude Opus 4.7 on April 16, 2026, alongside an update to Claude Haiku 4.5. The release is headlined by a 3× improvement in vision resolution, but the more operationally significant additions are task budgets and the new xhigh effort level — both of which change how developers can dial Claude’s reasoning intensity for compute-sensitive workflows.

    Vision Resolution: What 3× Actually Means

    Claude Opus 4.7 processes images at three times the resolution of its predecessor. In practice, this means documents with dense text, screenshots of complex interfaces, detailed charts and diagrams, and high-resolution photography are now meaningfully more legible to the model. Tasks that previously required cropping or pre-processing images to help Claude read fine details should now work with the original image.

    For enterprise use cases — contract review from scanned PDFs, financial statement analysis from images, medical imaging workflows, engineering diagram interpretation — the resolution improvement is not incremental. It crosses a threshold where image-based document processing becomes reliably useful rather than occasionally accurate.

    Task Budgets

Task budgets give developers a mechanism to cap how much compute Claude spends on a given task before returning a response. This supplies the lever whose absence made Claude’s extended thinking mode difficult to use predictably in production. Without a budget ceiling, extended thinking tasks could run arbitrarily long and cost arbitrarily much. With task budgets, you can set a ceiling and get a best-effort response within that constraint rather than an open-ended spend.

    The practical implication is that extended thinking becomes viable in latency-sensitive or cost-sensitive production contexts that previously had to avoid it entirely. A customer-facing workflow that needs a thoughtful answer but can’t wait indefinitely can now specify a budget and get a response calibrated to that constraint.

    The xhigh Effort Level

Alongside the existing effort levels, Opus 4.7 introduces xhigh — a reasoning-intensity setting above high and below max, intended for tasks where accuracy justifies extended compute time. Research tasks, complex multi-step reasoning chains, high-stakes analysis where a wrong answer is costly — these are the intended use cases.

xhigh pairs naturally with task budgets: use xhigh to push reasoning depth well beyond high, and use a task budget to define the ceiling on how long it runs. Together they give developers precision control over the quality/cost/latency trade-off that was previously binary (extended thinking on or off).
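In API terms, the pairing is a single request. A minimal sketch, assuming the Python SDK: the effort value is the documented 4.7 control, while the task_budget shape shown here is a hypothetical illustration, since the beta parameter names haven’t been finalized.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=8192,
    effort="xhigh",  # deepest reasoning short of max
    # Hypothetical shape, for illustration only: the beta task-budget
    # parameter names may differ and may change before GA.
    task_budget={"output_tokens": 60_000},
    messages=[{
        "role": "user",
        "content": "Review this incident timeline and identify the root cause.",
    }],
)
print(response.content[0].text)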

    Pricing: Unchanged from 4.6

Opus 4.7 maintains the same pricing as Claude Opus 4.6: $5 per million input tokens and $25 per million output tokens. For teams currently on Opus 4.6, this is an unambiguous upgrade — better vision, task budgets, and xhigh effort at the same cost. The Haiku 4.5 update released alongside it follows the same pattern: new capability at unchanged pricing.

    Deprecation note: Claude Haiku 3 was retired on April 19. Teams still on Haiku 3 should have already migrated — if not, that’s an urgent action item.

    Source: Anthropic — Claude Opus 4.7 Release

  • Managed Agents Now Have Built-In Memory — What Builders Should Test Before OpenAI Ships Its Version


    Anthropic’s Managed Agents service entered public beta with built-in persistent memory on April 23, 2026. The feature allows agents to retain context, user preferences, and state information across sessions — a capability that has been among the most-requested additions to the platform since Managed Agents launched. The timing matters: this ships during a window where OpenAI’s flagship memory features remain incomplete in their own agent frameworks, giving Claude developers a meaningful head start on production deployments that depend on memory.

    What Built-In Memory Actually Does

    Without memory, every agent session starts from zero. The agent knows what you’ve told it in the current conversation and nothing else. This is workable for single-session tasks — “summarize this document,” “write this draft” — but it breaks down for anything that involves ongoing relationships, accumulated preferences, or multi-session workflows. A customer service agent that can’t remember a user’s previous issues, a research assistant that can’t build on yesterday’s work, a scheduling agent that doesn’t know your standing preferences — all of these require memory to deliver the experience their use cases promise.

    Anthropic’s implementation provides persistence at the agent level, meaning the memory travels with the agent across sessions rather than requiring the developer to implement their own memory layer through external databases or custom retrieval logic. For builders who have been working around this limitation manually, the built-in version should substantially reduce implementation complexity.
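For a sense of what that replaces, here is a minimal sketch of the manual pattern, with a JSON file standing in for the external database; the built-in Managed Agents memory API itself isn’t shown because its beta surface may still change.

import json
import pathlib

import anthropic

client = anthropic.Anthropic()
STORE = pathlib.Path("agent_memory.json")  # stand-in for an external database

def load_memory(agent_id: str) -> list[str]:
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    return data.get(agent_id, [])

def save_memory(agent_id: str, facts: list[str]) -> None:
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    data[agent_id] = facts
    STORE.write_text(json.dumps(data, indent=2))

def run_session(agent_id: str, user_message: str) -> str:
    # Inject remembered facts into the system prompt: the manual
    # pattern that built-in memory makes unnecessary.
    facts = load_memory(agent_id)
    system = "Known context from prior sessions:\n" + "\n".join(facts)
    response = client.messages.create(
        model="claude-opus-4-7",
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": user_message}],
    )
    # The other half of the manual pattern: deciding what to persist,
    # e.g. save_memory(agent_id, facts + [new_fact_extracted_from(response)])
    return response.content[0].text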

    Why the Timing Against OpenAI Matters

    OpenAI has memory features in ChatGPT — the consumer product — but the developer-facing memory story for agents is less complete. The gap between what’s available to end users and what’s available to developers building on the platform has been a consistent criticism of OpenAI’s agent framework. Anthropic shipping built-in agent memory in public beta now, before OpenAI has an equivalent production-ready solution for agent builders, is a genuine competitive window.

    Public beta is not GA — there will be limitations, rough edges, and potential breaking changes before the feature stabilizes. But for developers who want to test and start building production workflows around persistent memory, this is the moment to start. Early adoption of beta features in platform infrastructure tends to compound: the teams that build on memory-enabled agents now will have a significant head start on the ones that wait for GA.

    What to Test Today

    The highest-value test cases for built-in memory in the current beta are: (1) customer-facing agents that need to remember user identity and history across sessions, (2) research or content agents that build knowledge bases over time, and (3) workflow agents that manage recurring tasks and need to track state between runs. These are the use cases where the absence of memory was most painful before, and where the new capability will show the largest delta in usefulness.

    Pair the memory beta with the new “Building production agents with MCP” guide published on April 22 — Anthropic’s documentation for hardening MCP-based agents for production deployments. The combination of persistent memory and production-hardening guidance suggests the platform team is intentionally building toward a moment when Managed Agents are ready for high-stakes, customer-facing production deployments. Test now, build with confidence later.

    Note on the 1M Token Context Beta

    Separately, the 1 million token context beta ends today, April 30. Developers who have been building on extended context should check the release notes for migration guidance before the beta window closes. This is the kind of quiet sunset that catches teams off-guard — worth a direct check against your current deployments today.

    Source: Anthropic Platform Release Notes

  • Anthropic’s APAC Quarter: Sydney, Tokyo, and the India Anchor


    In the span of five days at the end of April 2026, Anthropic announced three significant moves in the Asia-Pacific region: a strategic multi-year collaboration with NEC for Japan’s AI workforce on April 24, a new Sydney office with Theo Hourmouzis named GM for Australia and New Zealand on April 27, and the Infosys partnership for regulated industry AI in India on April 29. Taken individually, each is a meaningful business development story. Taken together, they describe a deliberate APAC buildout strategy — and one that’s moving faster than most observers have credited.

    Japan: The NEC Partnership

    The NEC collaboration is structured around a multi-year deployment of Claude across Japanese enterprises, with a workforce upskilling component that distinguishes it from a pure technology licensing deal. NEC is a conglomerate with deep relationships across Japanese government, telecommunications, financial services, and defense — exactly the sectors where AI adoption is both highest-stakes and most cautious. The workforce upskilling angle suggests Anthropic and NEC are addressing the adoption bottleneck that has slowed enterprise AI deployment in Japan: the gap between what the technology can do and what the workforce knows how to ask it to do.

    Japan’s enterprise AI market is large, compliance-conscious, and historically resistant to foreign technology vendors without a local partnership anchor. NEC provides that anchor. This is structurally similar to the Infosys play in India — find the trusted domestic partner, build the Center of Excellence or equivalent, then scale through that partner’s existing enterprise relationships.

    Australia: The Sydney Office and Theo Hourmouzis

    Opening a Sydney office is the clearest signal of long-term commitment. Partnerships can be dissolved; physical offices and local headcount are harder to walk back. The appointment of Theo Hourmouzis as GM for Australia and New Zealand gives the APAC presence an executive face and a named accountability structure, which matters for enterprise procurement in both markets.

Australia has been a strong early-adoption market for Claude — Singapore leads on per-capita usage metrics, but Australia’s enterprise market is larger and English-language-first, which has historically meant faster Claude adoption than markets requiring significant localization work. A permanent office converts that early-adoption momentum into a defensible competitive position against OpenAI and Google, both of which have had APAC presence for longer.

    India: The Infosys Anchor

    The Infosys collaboration is covered in detail in a separate Tygart Media piece, but in the APAC context, its significance is as the India anchor to the same pattern playing out in Japan and Australia. Anthropic doesn’t yet have an India office announced — the Infosys partnership may be the substitute, at least initially, allowing Anthropic to access Indian enterprise relationships through Infosys’s existing client base without the overhead of a local office buildout.

    India’s developer market is the one piece of the APAC picture that the enterprise partnerships don’t fully address. The individual developer and startup pricing gap — INR 16,800/month for Claude Pro with no regional pricing adjustment — remains open and continues to generate friction in communities where Anthropic’s reputation is otherwise strong.

    What’s Missing: Singapore

    Singapore is notable by its absence in this APAC push. It consistently ranks as the highest per-capita Claude usage market globally, suggesting a user base that is already committed to the product. An office or partnership announcement in Singapore would be a natural complement to Sydney, but nothing has been announced. This is either a sequencing decision — Australia first, Singapore next — or a reflection of Singapore’s smaller enterprise market size relative to Japan, India, and Australia.

    Watch for a Singapore announcement in Q3 2026. The usage data makes it too obvious a gap to leave unfilled for long.

    Sources: Anthropic News | Infosys Press Release

  • Anthropic Plants Its Flag in Creative Tooling — What Claude for Creative Work Means for the Adobe Era


    Anthropic launched Claude for Creative Work on April 28, 2026, formalizing a product positioning that has been building since the Claude Design launch on April 17. The move puts Anthropic in direct competition with OpenAI’s image-generation-first creative pitch — but with a fundamentally different bet about what creative professionals actually need from AI.

    The Claude Design Foundation

    Claude Design, launched April 17 through Anthropic Labs, is the experimental product underneath the creative work positioning. It targets the quick-turnaround end of creative production: prototypes, slides, one-pagers, visual comps that need to exist fast without requiring a designer’s full attention. TechCrunch described it as “a new product for creating quick visuals” — which is accurate but undersells the strategic intent.

    Claude for Creative Work builds on top of Design by broadening the positioning to include writers, designers across disciplines, and creative professionals generally — not just the slide-deck-and-prototype use case that Design launched with.

    The Ecosystem Moat

    The creative tools landscape that Claude is entering isn’t neutral territory. Adobe, Blender, Autodesk, Ableton, and Splice represent decades of workflow lock-in across visual design, 3D, architecture and engineering, music production, and sample-based creation. Any AI tool that wants to be genuinely useful to creative professionals has to meet those workflows where they exist — as plugins, integrations, or API connections — rather than asking professionals to leave their primary tools.

    Anthropic’s approach appears to be positioning Claude as the intelligence layer that works alongside those tools rather than replacing them. This is a different bet than Midjourney or DALL-E, both of which are destination products — you go to them, generate something, and bring it back. Claude for Creative Work, by contrast, is pitched as the assistant that’s present throughout the creative process, across whatever tools the professional is already using.

    How This Differs from ChatGPT’s Creative Pitch

    OpenAI has led its creative positioning with image generation — GPT-4o’s image capabilities, the DALL-E integration, Sora for video. The implicit argument is that AI’s most valuable creative contribution is generating visual assets. Anthropic’s bet is different: that the more valuable creative contribution is the thinking, editing, structuring, and iteration that happens around asset generation, not the generation itself.

    For writers, this is an obvious win — Claude’s long-form reasoning and editing capabilities are measurably stronger than image-focused models on text tasks. For visual designers, the argument is less obvious but still coherent: a model that can critique a comp, suggest revisions, explain why a layout isn’t working, and draft the copy that sits alongside the visual is more useful across the whole project than a model that can only generate a new image.

    What to Watch

    Claude for Creative Work is a positioning launch more than a features launch — the underlying capabilities have been available for some time. The question is whether the positioning will be accompanied by the integration work that makes it real: native plugins for Adobe Creative Cloud, Ableton Live, Blender, and the other dominant creative tools. Without those integrations, “Claude for Creative Work” is a marketing frame. With them, it’s a genuine workflow play.

    Watch the Anthropic Labs pipeline for integration announcements over the next 60–90 days. That’s where the creative tools bet either gets substantiated or stalls.

    Sources: Anthropic News | TechCrunch — Claude Design

  • India’s Second-Largest IT Services Firm Picks Claude for Regulated AI — What the Infosys Partnership Means


    Infosys, India’s second-largest IT services company with over 300,000 employees and clients in virtually every regulated industry on the planet, announced a strategic collaboration with Anthropic on April 29, 2026. The partnership embeds Claude — including Claude Code — into Infosys Topaz AI, the company’s enterprise AI platform, targeting telecommunications, financial services, manufacturing, and software development verticals.

    What’s Actually Being Built

    The collaboration begins with a dedicated Anthropic Center of Excellence inside Infosys’s telecom practice. This isn’t a reseller agreement or a marketing partnership — it’s an engineering buildout. The Center of Excellence structure means Infosys is committing internal resources to develop Claude-powered workflows specific to telecom use cases, with the intent to replicate the model across the other three target verticals.

    Claude Code’s inclusion is significant. Enterprise AI deployments at IT services firms historically mean wrapping AI around existing workflows — summarization, document processing, customer-facing chatbots. Embedding Claude Code signals that Infosys is building AI into the software development lifecycle itself, which is where the highest-value, highest-margin work in IT services actually lives.

    Why Regulated Industries Are the Real Story

    Telecom, financial services, and manufacturing are three of the most compliance-heavy verticals in enterprise technology. Data residency requirements, audit trails, explainability mandates, and sector-specific regulations (TRAI in India, FCA in the UK, SEC in the US for financial services) make AI deployment substantially more complex than in unregulated industries. The fact that Infosys is leading with these verticals rather than easier targets suggests genuine confidence in Claude’s compliance posture.

    For the Indian developer and enterprise market specifically, this partnership carries weight that a US-only announcement would not. Infosys is a trusted name in Indian boardrooms in a way that American AI labs, even well-regarded ones, simply aren’t yet. Anthropic gaining Infosys as an integration partner is a significant step toward the kind of enterprise credibility that accelerates procurement decisions.

    The INR Pricing Gap Remains Open

It’s worth noting what the Infosys partnership doesn’t solve: direct-access pricing for Indian developers and individual subscribers. Claude’s consumer pricing in India remains ₹16,800/month for Pro — a figure that has generated sustained criticism in developer communities and on GitHub (issue #17432 on the Claude feedback tracker has been open for months with no response). Enterprise deals like the Infosys collaboration typically involve custom pricing negotiated well below list, which means the developers who most need relief from INR pricing aren’t the ones who benefit from this announcement.

That is a legitimate market gap. Anthropic’s APAC expansion is clearly accelerating — Sydney office, NEC Japan partnership, now Infosys India — but the individual developer pricing story in the region hasn’t kept pace with the enterprise narrative.

    Context: Anthropic’s APAC Quarter

    The Infosys announcement is the third significant APAC move in the last two weeks. Anthropic opened a Sydney office and named Theo Hourmouzis as GM for Australia and New Zealand on April 27. The NEC Japan multi-year workforce upskilling collaboration was announced on April 24. Three moves in five days — India, Japan, Australia — is not coincidence. This is a coordinated APAC buildout, and Infosys is the India anchor.

    Source: Infosys Press Release

  • Cowork Is No Longer a Research Preview — Here’s What Changes for Non-Developers Today


    Anthropic’s Cowork feature — the desktop automation tool aimed squarely at non-developers — moved out of research preview on April 29, 2026, and is now generally available on both macOS and Windows. It ships with a feature set that represents a meaningful step forward for anyone who has been running scheduled tasks, file workflows, and multi-step automations through Claude without writing a line of code.

    What’s New in the GA Release

    The GA release lands on Pro, Max, Team, and Enterprise plans. The headline additions are expanded analytics, OpenTelemetry support for enterprise observability, and role-based access controls — the last of these being the signal that Cowork is now ready for team deployments, not just individual power users.

    Persistent agent threads are now live across both mobile (iOS and Android) and desktop, which means you can start a Cowork task on your laptop and monitor or manage it from your phone. The new Customize section consolidates skills, plugins, and connectors into a single panel, replacing what was previously a scattered setup experience across multiple menus.

Recurring and on-demand task scheduling is also included, enabling the kind of “set it and check it” automation workflows that Cowork had promised but only partially delivered during the preview period.

    Why This Matters for Non-Developers

    Cowork’s core bet has always been that the most valuable use cases for AI automation don’t belong to engineers — they belong to operators, marketers, content teams, and business owners who know exactly what they want done but have no interest in writing Python scripts or JSON configs to get there. The GA release validates that bet with a production-grade infrastructure story: OpenTelemetry means IT and enterprise security teams can audit what the agents are doing; role-based access controls mean managers can delegate without handing over full system access.

    For the non-developer using Cowork day-to-day, the practical change is reliability. Research previews carry an implicit asterisk — “this works, mostly, until it doesn’t.” GA means the feature is supported, documented, and subject to real SLAs. Scheduled tasks that have been running through the preview period should now be more stable, and new automations can be built with the expectation that they’ll still work next month.

    The Enterprise Observability Story

The addition of Cowork data to the Analytics API and OpenTelemetry support is worth noting separately. This is the detail that unlocks enterprise adoption at scale. Procurement and security teams at larger organizations have consistently asked for auditability before green-lighting AI automation tools. Cowork now has an answer: every agent action can be traced, logged, and routed into whatever observability stack the enterprise already runs.

    For Team and Enterprise plan subscribers, this should accelerate internal approval processes for Cowork deployments that may have stalled during the preview period.

    What Stays the Same

    The fundamental Cowork model — Claude running autonomous tasks on behalf of the user, triggered by schedule or on-demand, guided by skills and connectors — is unchanged. If you’ve been running workflows in the preview, the transition to GA should be seamless. The Customize section reorganizes the setup experience but doesn’t require rebuilding existing configurations.

    Plans and pricing remain unchanged from the research preview tier placement — Cowork is included in Pro, Max, Team, and Enterprise, with no new add-on cost announced alongside the GA release.

    The Bottom Line

    Cowork GA is the milestone that turns a promising experiment into a product you can build operational workflows around. The combination of persistent threads, role-based access, and OpenTelemetry support brings Cowork into alignment with what enterprise buyers require from any automation tool they’re willing to run at scale. For individual users, the reliability improvement and the cleaner Customize panel are the day-one wins. For teams, the observability story is the green light many have been waiting for.

    Source: Anthropic Cowork Release Notes

  • Task Budgets, xhigh, and the 2,576px Vision Ceiling: Opus 4.7’s Most Interesting Features Explained


    What this article covers

    Three features in Opus 4.7 deserve their own explanation because they change what’s actually possible in daily work, not just what’s bigger on a benchmark chart:

    1. Task budgets (beta) — per-subtask ceilings that tame agent cost variance.
    2. The xhigh effort level — the new reasoning-control setting between high and max.
3. The 2,576-pixel vision ceiling — more than 3× the prior pixel budget for image processing.

    Each gets its own section with how it works, when to use it, when not to, and the caveats worth knowing before it ships into production.


    Feature 1: Task budgets (beta)

What it is. A new system for scoping the resources an agent uses across a multi-turn agentic loop. Instead of setting one thinking budget per turn, you declare budgets — tokens or tool calls — that span the entire loop, and the agent plans its work against them.

    The problem it solves. Agent runs have notoriously high cost variance. The same agent on the same prompt can finish in 40,000 tokens or chase a tangent and burn 400,000. Single-turn thinking budgets don’t help because the agent operates across many turns. Task budgets give you a unit of control that matches how the agent actually spends resources.

    How the agent uses them. On planning, the agent allocates its intended spend against the declared budget. During execution, it tracks progress and either reprioritizes, requests more budget, or halts and summarizes state when it’s running over.

    Behavior note: budgets are soft, not hard. The agent is nudged to respect them, not hard-cut. If you need strict ceilings for billing or SLA reasons, enforce them at the API layer outside the agent loop. Task budgets are for behavior shaping, not hard resource limiting.
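What API-layer enforcement looks like, as a minimal sketch assuming a bare-bones agent loop: the usage field is the API’s standard token accounting, and the cutoff is provable because it lives in your code rather than in the model’s planning.

import anthropic

client = anthropic.Anthropic()
HARD_CEILING = 200_000  # cumulative output tokens across the whole loop

def run_agent_loop(messages: list[dict]) -> None:
    spent = 0
    while True:
        response = client.messages.create(
            model="claude-opus-4-7",
            max_tokens=4096,
            messages=messages,
        )
        spent += response.usage.output_tokens  # standard usage accounting
        if spent >= HARD_CEILING:
            break  # provable hard cutoff, independent of any soft budget
        if response.stop_reason == "end_turn":
            break  # the agent finished on its own
        messages.append({"role": "assistant", "content": response.content})
        # ...append tool results here and continue the loop...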

    When to use them.
    – Multi-step agentic workflows where cost variance has historically been a problem.
    – Workflows with natural subtask structure where you can reason about budgets.
    – Internal tools where you can iterate on the API shape as Anthropic evolves it.

    When not to use them.
    – Simple single-turn requests. Task budgets are overhead that doesn’t pay off on short interactions.
    – Production contracts that are painful to version. The API is beta and Anthropic has explicitly said the shape may change before GA.
    – Workflows where you need provable hard cutoffs. Enforce those at the API layer, not via this feature.

The beta caveat, spelled out: task budgets are a beta feature at launch. Parameter names and shape may change. Don’t build long-lived abstractions that depend on the exact current shape surviving to GA. Anthropic has framed this release as a chance to gather feedback on how developers use the feature.


    Feature 2: The xhigh effort level

What it is. A new setting for reasoning effort, slotted between high and max. Opus 4.6 exposed four levels: low, medium, high, and max. Opus 4.7 adds xhigh between high and max, making five.

    Why it exists. Anthropic’s framing in the release materials: xhigh gives users “finer control over the tradeoff between reasoning and latency on hard problems.” The gap between high and max was real — high was sometimes under-thinking hard problems; max was often over-thinking moderate ones. xhigh smooths the curve by giving you a setting that’s more thoughtful than high without the runaway token budget of max.

    Anthropic’s own guidance. “When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort.” That’s a direct recommendation to make xhigh part of your default rotation for serious work, not a niche escalation.

    How to use it.
    – Keep high as the default for routine work.
    – Use xhigh as the new first-choice escalation when high isn’t quite getting there — or start there for coding and agentic tasks per Anthropic’s recommendation.
    – Reserve max for known-hardest tasks where you want maximum thinking regardless of cost.

    Important tradeoff. Higher effort levels in 4.7 produce more output tokens than the same levels did in 4.6. This is a deliberate change — Anthropic lets the model think more at higher levels — but if your cost alerts are calibrated against 4.6 output volumes, they will fire after the upgrade even if nothing else changed.

    An API note worth flagging. Opus 4.7 removed the extended thinking budget parameter that existed in 4.6. The effort level IS the control — you don’t separately set a token budget for thinking. If your 4.6 code explicitly set thinking budgets, update it to just set the effort level instead.
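In code, the migration is a one-line swap. A before/after sketch, assuming the Python SDK; the 4.6 shape shown follows the extended-thinking parameter style of earlier Claude releases.

import anthropic

client = anthropic.Anthropic()

# Opus 4.6 style: an explicit extended-thinking token budget.
response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=16_000,
    thinking={"type": "enabled", "budget_tokens": 8_000},
    messages=[{"role": "user", "content": "Find the race condition in this design."}],
)

# Opus 4.7 style: the effort level is the only thinking control.
response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=16_000,
    effort="xhigh",
    messages=[{"role": "user", "content": "Find the race condition in this design."}],
)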

    xhigh is available via API, Bedrock, Vertex AI, and Microsoft Foundry. On Claude.ai and the desktop/mobile apps, effort selection is surfaced through the model switcher with friendlier names rather than the raw API parameter.


    Feature 3: The 2,576-pixel vision ceiling

    What changed. Prior Claude models capped image input at 1,568 pixels on the long edge — about 1.15 megapixels. Opus 4.7 processes images up to 2,576 pixels on the long edge — about 3.75 megapixels, more than 3× the prior pixel budget.

    Why this matters more than it sounds. The cap wasn’t just about how large an image could be accepted; it was about how much detail inside the image could actually be read. Under the old 1.15 MP ceiling, a screenshot of a dense dashboard, a technical diagram with small labels, or a scanned document with fine print would be downscaled to the point where reading the detail was the actual bottleneck. 4.7 removes that bottleneck for images up to the new ceiling.

    Coordinate mapping is now 1:1. This is a separate but related change. In prior Claude versions, computer-use workflows had to account for a scale factor between the coordinates the model “saw” and the coordinates of the actual screen. On Opus 4.7, the model’s coordinate output maps 1:1 to actual image pixels. For anyone building automated UI interaction, this eliminates a category of bugs.
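The difference in automation code, as an illustrative sketch assuming a native-resolution screenshot within the ceiling; pyautogui stands in for whatever input layer the automation uses.

import pyautogui

# Coordinates the model reports for a click target in the screenshot.
model_x, model_y = 1412, 903

# Pre-4.7 pattern: undo the downscaling the API applied to the image.
# scale = captured_width / downscaled_width
# model_x, model_y = round(model_x * scale), round(model_y * scale)

# Opus 4.7: coordinates map 1:1 to image pixels, so no scale factor is
# needed when the screenshot was sent at native resolution.
pyautogui.click(model_x, model_y)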

    What this enables that 4.6 struggled with:

    • Dense UI screenshots. Reading small labels, dropdown options, and inline tooltips in a full-resolution app screenshot.
    • Technical diagrams. Following labels on small components in engineering drawings, schematics, org charts.
    • Scanned documents. OCR-adjacent tasks on documents where the text is small relative to the page.
    • Chart details. Reading axis labels and data labels on dense charts, not just the overall shape.
    • Multi-panel content. Comics, infographics, and documents with small type in multiple zones.
    • Pointing, measuring, counting. Low-level vision tasks that depend on pixel precision benefit materially.
    • Bounding-box detection. Image localization tasks show clear gains.

    What it doesn’t change.

    • Images beyond 2,576px still get downscaled to the ceiling. The ceiling is higher; it’s not gone.
    • Video frames are handled differently and aren’t covered by this change.
    • Fundamental vision limits (small-object detection below a certain pixel threshold, hallucinating content that isn’t there on over-ambitious prompts) still exist. More pixels ≠ omniscience.

    Pricing and token cost. Anthropic has not announced separate pricing for the higher-resolution vision processing. Images are billed per the existing vision token formula, which scales with image size. Larger images cost more tokens; that’s not new. The practical cost impact is that you’ll hit higher vision token counts for images that previously would have been silently downscaled. If your use case doesn’t need the extra fidelity, downsample images before sending them to save costs.
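A minimal downsampling sketch using Pillow, shrinking to the old 1,568px ceiling when the task doesn’t need the extra fidelity:

from PIL import Image

def downsample_for_cost(path: str, target_long_edge: int = 1568) -> str:
    # Shrink the long edge to the old ceiling (or smaller) before upload
    # to cut vision token costs when full fidelity isn't needed.
    img = Image.open(path)
    long_edge = max(img.size)
    if long_edge > target_long_edge:
        scale = target_long_edge / long_edge
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    out = path.rsplit(".", 1)[0] + "_small.png"
    img.save(out)
    return out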

    How to use it.

    Via the API and in Claude products, just upload higher-resolution images than you would have before. No special parameter. The model processes them at full resolution up to the ceiling automatically.

import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Base64-encode the image for the standard image source block.
with open("chart.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode()

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {  # up to 2576px long edge
                "type": "base64", "media_type": "image/png", "data": image_b64,
            }},
            {"type": "text", "text": "Extract the values from the chart."},
        ],
    }],
)
    

    A caveat worth noting. The 2,576px ceiling is the processing ceiling. Client-side size limits (file size, API request size) still apply. Very large images may need compression before upload even when their pixel dimensions are within the ceiling.


    How these three features compose

    The three features aren’t independent. For agentic coding work in particular, they compose in ways that matter.

    A practical workflow: an agent reviewing a UI bug gets a screenshot of the bug state (vision at 2,576px captures the detail), thinks about it at xhigh effort (enough reasoning without max’s overhead), and runs under a task budget that caps how much it can spend on this particular investigation before escalating or returning. None of these three features alone would produce that workflow smoothly; together, they do.
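That workflow, sketched as a single request, with the same caveat as before: effort is the documented 4.7 control, while the task_budget shape is a hypothetical illustration until the beta API stabilizes.

import base64
import anthropic

client = anthropic.Anthropic()

with open("bug_state.png", "rb") as f:
    screenshot_b64 = base64.standard_b64encode(f.read()).decode()

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=8192,
    effort="xhigh",  # deep reasoning without max's overhead
    task_budget={"output_tokens": 80_000},  # illustrative shape only
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {  # full-resolution screenshot
                "type": "base64", "media_type": "image/png",
                "data": screenshot_b64,
            }},
            {"type": "text", "text": "Diagnose the layout bug in this screenshot."},
        ],
    }],
)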

    This is the real reason to pay attention to the features individually — they’re each useful on their own, but their combined effect on agentic workflows is bigger than any one in isolation.


    Frequently asked questions

    Are task budgets available on Claude.ai, or API only?
    API only. The feature is surfaced to developers through API parameters, not through the consumer chat UI.

    Can I use xhigh on Claude.ai?
    Effort level is exposed to consumers through the model switcher. The underlying xhigh value is available via API; the consumer surface uses friendlier naming rather than the raw parameter.

    Does the 2,576px vision ceiling apply to all Claude products?
    Yes — Claude.ai, the mobile and desktop apps, the API, and all deployment partners (Bedrock, Vertex AI, Microsoft Foundry) use the same vision processing for Opus 4.7.

    Are task budgets a replacement for max_tokens?
    No. max_tokens is a hard cap on output length for a single message. Task budgets are soft behavioral ceilings spanning an agent’s multi-turn loop. Use both.

    Does xhigh use a different API parameter than high?
    No — it’s just another value for the same effort parameter. Note that Opus 4.7 removed the separate extended thinking budget parameter that existed on 4.6: the effort level IS the thinking control on 4.7.

    Will these features come to Opus 4.6?
    No. They’re Opus 4.7 features. 4.6 continues to run on its prior behavior.

    Does xhigh cost more than high?
    Yes, indirectly. Per-token pricing is the same. But xhigh produces more output tokens on hard problems (that’s the point — more thinking), so a given request costs more at xhigh than at high. xhigh is still meaningfully cheaper than max on the same task.


    Related reading

    • The full release: Claude Opus 4.7 — Everything New
    • For developers: Opus 4.7 for coding in practice
    • Comparison: Opus 4.7 vs GPT-5.4 vs Gemini 3.1 Pro
    • The Mythos angle: why Anthropic admitted Opus 4.7 is weaker than an unreleased model

    Published April 16, 2026. Article written by Claude Opus 4.7.