Author: Will Tygart

  • Anthropic at Scale: 5 Gigawatts, $30B Revenue Run Rate, and What the Infrastructure Bet Means

    Three data points published in the last two weeks of April 2026 define the scale at which Anthropic is now operating: a 5-gigawatt compute capacity commitment from Amazon announced April 20, a disclosed $30 billion annual revenue run rate (up from $9 billion at the end of 2025), and a customer base of more than 1,000 enterprises spending over $1 million per year. Taken together, they describe a company that has crossed the threshold from frontier AI lab to large-scale enterprise infrastructure provider.

    The Amazon Compute Commitment

    Five gigawatts of committed compute capacity is a number that requires context to land properly. For reference, a large data center campus typically consumes 100–500 megawatts. Five gigawatts is the equivalent of 10–50 large data center campuses worth of compute, committed to a single AI company. This is infrastructure at a scale that was historically reserved for hyperscalers building general-purpose cloud platforms — not AI model providers.

    The Amazon partnership is part of a broader compute story that also includes Google and Broadcom’s multi-gigawatt TPU partnership (announced April 6, with capacity launching in 2027). Anthropic is not building this infrastructure itself — it’s securing committed capacity from the two largest cloud providers simultaneously, which is a different and arguably more capital-efficient strategy than building proprietary data centers.

    Revenue: $9B to $30B in One Quarter

The jump from $9 billion to $30 billion in annualized run rate between the end of 2025 and April 2026 is the most striking number in the disclosure. That is not organic growth; it is a step change that implies either a major enterprise contract cohort closing in Q1 2026, the Cowork and Claude Code adoption curves hitting inflection simultaneously, or both. The 1,000+ customers spending $1 million or more per year are consistent with enterprise adoption at scale: at a $1 million average, those 1,000 customers alone represent $1 billion in ARR.

    For context on what $30 billion run rate means competitively: OpenAI disclosed approximately $3.7 billion in annualized revenue in mid-2024. If Anthropic’s figure is accurate and current, it suggests the competitive landscape has shifted more dramatically than most public coverage has reflected.

    What This Means for Enterprise Buyers

Enterprise procurement teams evaluating AI vendors weigh financial stability heavily. A vendor that might not exist in 18 months is a vendor you don’t build critical workflows on. The combination of $30 billion run rate, 5 gigawatts of committed compute, and 1,000+ million-dollar customers removes the financial stability objection from the Anthropic procurement conversation in a way that was not possible a year ago.

    The Raj Narasimhan board appointment (April 14) is a governance signal in the same direction. Board composition at this revenue scale shapes how enterprise legal and compliance teams assess vendor risk. A mature board with enterprise-credible governance is a procurement unlock, not just a PR announcement.

    The Capacity Question

    The Google/Broadcom TPU capacity doesn’t launch until 2027. The Amazon commitment is a forward contract, not immediately available infrastructure. This means Anthropic is building compute capacity commitments ahead of demand — the right bet if the revenue trajectory continues, a costly overcommit if it doesn’t. The 2027 capacity launch timing will be worth watching against the actual demand curve that develops over the next 12 months.

    Source: Anthropic News

  • Claude Code Is Shipping 2–3 Releases Per Day — What the v2.1 Cadence Means for Engineering Teams

    Between April 15 and April 29, 2026, the Claude Code team shipped releases from v2.1.89 to v2.1.123: 34 version increments in 14 days, or roughly 2–3 production releases per day. For an agentic coding tool that engineering teams run in their daily development workflow, this release cadence is worth understanding, both for what it signals about the product’s development velocity and for the practical implications of staying current.
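The cadence arithmetic can be checked directly from the version numbers and dates quoted above:

```python
# Patch-version span and window from the changelog period described above.
first, last = 89, 123        # v2.1.89 -> v2.1.123
days = 14                    # April 15 - April 29

increments = last - first    # number of releases shipped in the window
per_day = increments / days  # average releases per day

print(increments, round(per_day, 1))  # 34 releases, ~2.4 per day
```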

    What’s Driving the Cadence

    The v2.1 series is where Claude Code’s parallel agents architecture is being built out. The desktop redesign for parallel agents shipped on April 14, and the v2.1 releases since then represent the iterative work of making parallel agent workflows — running multiple agents simultaneously from a single workspace — stable and usable at production quality. Rapid iteration on a new architectural feature explains the compressed release schedule better than any other factor.

    The new onboarding guide for Claude Code teams, published April 28 on code.claude.com, is a related signal. Documentation for team-scale adoption typically follows (not precedes) the stability work that makes team-scale adoption advisable. Publishing the onboarding guide now suggests the team considers the core parallel agents architecture stable enough for broader engineering team adoption.

    Parallel Agents: The Architecture Change That Matters

    The April 14 desktop redesign for parallel agents is the most significant Claude Code architectural change of the quarter. Previously, Claude Code operated as a single-agent tool — one active task at a time per workspace. The parallel agents redesign allows developers to run multiple agents simultaneously, each working on independent tasks within the same workspace, with Claude coordinating between them.

    The practical applications are significant: running tests while implementing a feature, refactoring one module while debugging another, generating documentation in parallel with code review. Tasks that previously required sequential attention can now run concurrently, compressing the time from specification to working code.
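Claude Code’s internal scheduler isn’t public, but as a rough mental model, independent concurrent tasks behave like an `asyncio.gather` over coroutines: total wall time tracks the slowest task rather than the sum of all tasks. A conceptual sketch (task names and timings are illustrative only):

```python
import asyncio

# Conceptual sketch only: Claude Code's actual scheduling is not public.
# Each coroutine stands in for an independent agent task in one workspace.
async def agent_task(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)   # placeholder for real agent work
    return f"{name}: done"

async def run_parallel() -> list[str]:
    # Independent tasks run concurrently; wall time ~= the slowest task,
    # not the sum of all three.
    return await asyncio.gather(
        agent_task("run-tests", 0.03),
        agent_task("refactor-module", 0.02),
        agent_task("write-docs", 0.01),
    )

results = asyncio.run(run_parallel())
print(results)
```

`asyncio.gather` returns results in submission order regardless of completion order, which is the property a coordinating workspace needs to reassemble parallel work deterministically.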

    Implications for Engineering Teams Evaluating Adoption

    The combination of the new onboarding guide and the parallel agents architecture makes this the right moment for engineering teams that have been evaluating Claude Code to make a decision. The tool has moved from “impressive demo” to “documented team workflow” with the April 28 guide, and the parallel agents capability meaningfully changes the productivity math for teams doing complex, multi-threaded development work.

    For teams already using Claude Code, staying current with the v2.1 series matters more than it did in earlier versions. The 2–3 daily releases aren’t cosmetic; they’re iterating on the parallel agents infrastructure that the most powerful new workflows depend on. Check the changelog at code.claude.com/docs/en/changelog before major projects to ensure you’re running a recent build.

    Source: Claude Code Changelog | GitHub Releases

  • Claude Mythos Preview and Project Glasswing: Anthropic’s Bet on AI-Powered Cyber Defense

    On April 7, 2026, Anthropic published the Claude Mythos Preview to red.anthropic.com — its dedicated AI safety and security research channel. Mythos is described as a general-purpose model with breakthrough cybersecurity capability, anchoring a coordinated initiative called Project Glasswing aimed at reinforcing global cyber defenses using AI. It is the most significant security-focused model capability announcement Anthropic has made to date.

    What Mythos Is

    Mythos is not a separate product in the traditional sense — it’s a capability preview, published through Anthropic’s red team and security research channel rather than through the main product announcement pipeline. The “preview” framing is deliberate: Anthropic is signaling a new capability frontier to the security research community before making it broadly available, which is standard practice for capabilities with significant dual-use potential.

    The “breakthrough cybersecurity capability” claim is notable because Anthropic has historically been conservative about capability claims. Publishing on red.anthropic.com — rather than anthropic.com/news — also signals that this is targeted at a security-professional audience, not a general consumer or enterprise announcement.

    Project Glasswing

    Project Glasswing is the coordinated effort that Mythos anchors. The stated mission is reinforcing world cyber defenses — a framing that positions Mythos explicitly as a defensive capability rather than an offensive one, which matters enormously in how it will be received by governments, enterprise security teams, and the security research community.

    The name “Glasswing” references the glasswing butterfly — a species known for its transparent wings, which confer camouflage by blending into the environment. The metaphor maps cleanly onto defensive security work: visibility and transparency as the mechanism of protection, not opacity or force.

    Context: A Year of Security Work

    Mythos and Glasswing don’t come from nowhere. Anthropic’s security research track in 2026 has been unusually active: collaboration on Firefox CVE-2026-2796 in March, LLM-discovered zero-days published in February, and work evaluating AI on realistic cyber ranges in January, all documented on red.anthropic.com. Mythos is the capstone of a year-long research buildout in applied cybersecurity, not a pivot from Anthropic’s core safety work.

    For enterprise security teams evaluating AI vendors, this track record is a meaningful differentiator. Among frontier AI labs, Anthropic is unusual in having a documented, published history of responsible vulnerability disclosure collaboration and a dedicated security research publication channel. That institutional credibility matters when procurement decisions involve sensitive security workflows.

    What to Watch

    The Mythos Preview is the beginning of a story, not the end of one. Watch red.anthropic.com for the full Glasswing rollout cadence — what specific defensive capabilities are being published, what the access model looks like for security researchers, and whether government or critical infrastructure partnerships accompany the broader release. The preview framing implies a production release is coming. The timeline and access model will define how significant Glasswing becomes as a competitive differentiator.

    Source: red.anthropic.com — Claude Mythos Preview

  • Claude Opus 4.7: 3× Vision Resolution, Task Budgets, and the xhigh Effort Level Explained

    Anthropic released Claude Opus 4.7 on April 16, 2026, alongside an update to Claude Haiku 4.5. The release is headlined by a 3× improvement in vision resolution, but the more operationally significant additions are task budgets and the new xhigh effort level — both of which change how developers can dial Claude’s reasoning intensity for compute-sensitive workflows.

    Vision Resolution: What 3× Actually Means

    Claude Opus 4.7 processes images at three times the resolution of its predecessor. In practice, this means documents with dense text, screenshots of complex interfaces, detailed charts and diagrams, and high-resolution photography are now meaningfully more legible to the model. Tasks that previously required cropping or pre-processing images to help Claude read fine details should now work with the original image.

    For enterprise use cases — contract review from scanned PDFs, financial statement analysis from images, medical imaging workflows, engineering diagram interpretation — the resolution improvement is not incremental. It crosses a threshold where image-based document processing becomes reliably useful rather than occasionally accurate.

    Task Budgets

    Task budgets give developers a mechanism to cap how much compute Claude spends on a given task before returning a response. This is the lever whose absence made Claude’s extended thinking mode difficult to use predictably in production: without a budget ceiling, extended thinking tasks could run arbitrarily long and cost arbitrarily much. With task budgets, you can set a ceiling and get a best-effort response within that constraint rather than an open-ended spend.

    The practical implication is that extended thinking becomes viable in latency-sensitive or cost-sensitive production contexts that previously had to avoid it entirely. A customer-facing workflow that needs a thoughtful answer but can’t wait indefinitely can now specify a budget and get a response calibrated to that constraint.

    The xhigh Effort Level

    Alongside the existing effort levels, Opus 4.7 introduces xhigh — an above-maximum reasoning intensity setting intended for tasks where accuracy justifies extended compute time regardless of cost. Research tasks, complex multi-step reasoning chains, high-stakes analysis where a wrong answer is costly — these are the intended use cases.

    xhigh pairs naturally with task budgets: use xhigh to get the most thorough reasoning Claude can produce, and use a task budget to define the ceiling on how long it runs. Together they give developers precision control over the quality/cost/latency trade-off that was previously binary (extended thinking on or off).

    Pricing: Unchanged from 4.6

    Opus 4.7 maintains the same pricing as Claude Opus 4.6: $5 per million input tokens and $25 per million output tokens. For teams currently on Opus 4.6, this is an unambiguous upgrade: better vision, task budgets, and xhigh effort at the same cost. The Haiku 4.5 update released alongside it likewise ships with pricing unchanged.

    Deprecation note: Claude Haiku 3 was retired on April 19. Teams still on Haiku 3 should have already migrated — if not, that’s an urgent action item.

    Source: Anthropic — Claude Opus 4.7 Release

  • Managed Agents Now Have Built-In Memory — What Builders Should Test Before OpenAI Ships Its Version

    Anthropic’s Managed Agents service entered public beta with built-in persistent memory on April 23, 2026. The feature allows agents to retain context, user preferences, and state information across sessions — a capability that has been among the most-requested additions to the platform since Managed Agents launched. The timing matters: this ships during a window where OpenAI’s flagship memory features remain incomplete in their own agent frameworks, giving Claude developers a meaningful head start on production deployments that depend on memory.

    What Built-In Memory Actually Does

    Without memory, every agent session starts from zero. The agent knows what you’ve told it in the current conversation and nothing else. This is workable for single-session tasks — “summarize this document,” “write this draft” — but it breaks down for anything that involves ongoing relationships, accumulated preferences, or multi-session workflows. A customer service agent that can’t remember a user’s previous issues, a research assistant that can’t build on yesterday’s work, a scheduling agent that doesn’t know your standing preferences — all of these require memory to deliver the experience their use cases promise.

    Anthropic’s implementation provides persistence at the agent level, meaning the memory travels with the agent across sessions rather than requiring the developer to implement their own memory layer through external databases or custom retrieval logic. For builders who have been working around this limitation manually, the built-in version should substantially reduce implementation complexity.
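For a sense of what the built-in feature replaces, here is a minimal toy sketch of the kind of hand-rolled external memory layer builders have maintained themselves (a JSON file store; this is not the Managed Agents API, just an illustration of the pattern it makes unnecessary):

```python
import json
from pathlib import Path

# Toy stand-in for the external memory layer developers previously hand-rolled.
# Built-in memory moves this persistence into the agent itself.
class AgentMemory:
    def __init__(self, path: str):
        self.path = Path(path)
        self.state = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def remember(self, key: str, value) -> None:
        self.state[key] = value
        self.path.write_text(json.dumps(self.state))  # persist across sessions

    def recall(self, key: str, default=None):
        return self.state.get(key, default)

# Session 1: record a user preference.
mem = AgentMemory("agent_memory.json")
mem.remember("timezone", "Asia/Kolkata")

# Session 2: a fresh instance (new process, new conversation) still sees it.
assert AgentMemory("agent_memory.json").recall("timezone") == "Asia/Kolkata"
```

Everything this class does by hand — serialization, storage, rehydration at session start — is what “persistence at the agent level” absorbs into the platform.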

    Why the Timing Against OpenAI Matters

    OpenAI has memory features in ChatGPT — the consumer product — but the developer-facing memory story for agents is less complete. The gap between what’s available to end users and what’s available to developers building on the platform has been a consistent criticism of OpenAI’s agent framework. Anthropic shipping built-in agent memory in public beta now, before OpenAI has an equivalent production-ready solution for agent builders, is a genuine competitive window.

    Public beta is not GA — there will be limitations, rough edges, and potential breaking changes before the feature stabilizes. But for developers who want to test and start building production workflows around persistent memory, this is the moment to start. Early adoption of beta features in platform infrastructure tends to compound: the teams that build on memory-enabled agents now will have a significant head start on the ones that wait for GA.

    What to Test Today

    The highest-value test cases for built-in memory in the current beta are: (1) customer-facing agents that need to remember user identity and history across sessions, (2) research or content agents that build knowledge bases over time, and (3) workflow agents that manage recurring tasks and need to track state between runs. These are the use cases where the absence of memory was most painful before, and where the new capability will show the largest delta in usefulness.

    Pair the memory beta with the new “Building production agents with MCP” guide published on April 22 — Anthropic’s documentation for hardening MCP-based agents for production deployments. The combination of persistent memory and production-hardening guidance suggests the platform team is intentionally building toward a moment when Managed Agents are ready for high-stakes, customer-facing production deployments. Test now, build with confidence later.

    Note on the 1M Token Context Beta

    Separately, the 1 million token context beta ends today, April 30. Developers who have been building on extended context should check the release notes for migration guidance before the beta window closes. This is the kind of quiet sunset that catches teams off-guard — worth a direct check against your current deployments today.

    Source: Anthropic Platform Release Notes

  • Anthropic’s APAC Quarter: Sydney, Tokyo, and the India Anchor

    In the span of five days at the end of April 2026, Anthropic announced three significant moves in the Asia-Pacific region: a strategic multi-year collaboration with NEC for Japan’s AI workforce on April 24, a new Sydney office with Theo Hourmouzis named GM for Australia and New Zealand on April 27, and the Infosys partnership for regulated industry AI in India on April 29. Taken individually, each is a meaningful business development story. Taken together, they describe a deliberate APAC buildout strategy — and one that’s moving faster than most observers have credited.

    Japan: The NEC Partnership

    The NEC collaboration is structured around a multi-year deployment of Claude across Japanese enterprises, with a workforce upskilling component that distinguishes it from a pure technology licensing deal. NEC is a conglomerate with deep relationships across Japanese government, telecommunications, financial services, and defense — exactly the sectors where AI adoption is both highest-stakes and most cautious. The workforce upskilling angle suggests Anthropic and NEC are addressing the adoption bottleneck that has slowed enterprise AI deployment in Japan: the gap between what the technology can do and what the workforce knows how to ask it to do.

    Japan’s enterprise AI market is large, compliance-conscious, and historically resistant to foreign technology vendors without a local partnership anchor. NEC provides that anchor. This is structurally similar to the Infosys play in India — find the trusted domestic partner, build the Center of Excellence or equivalent, then scale through that partner’s existing enterprise relationships.

    Australia: The Sydney Office and Theo Hourmouzis

    Opening a Sydney office is the clearest signal of long-term commitment. Partnerships can be dissolved; physical offices and local headcount are harder to walk back. The appointment of Theo Hourmouzis as GM for Australia and New Zealand gives the APAC presence an executive face and a named accountability structure, which matters for enterprise procurement in both markets.

    Australia has been a strong early-adoption market for Claude. Singapore leads on per-capita usage metrics, but Australia’s enterprise market is larger and English-language-first, which has historically meant faster Claude adoption than in markets requiring significant localization work. A permanent office converts that early-adoption momentum into a defensible competitive position against OpenAI and Google, both of which have had an APAC presence for longer.

    India: The Infosys Anchor

    The Infosys collaboration is covered in detail in a separate Tygart Media piece, but in the APAC context, its significance is as the India anchor to the same pattern playing out in Japan and Australia. Anthropic doesn’t yet have an India office announced — the Infosys partnership may be the substitute, at least initially, allowing Anthropic to access Indian enterprise relationships through Infosys’s existing client base without the overhead of a local office buildout.

    India’s developer market is the one piece of the APAC picture that the enterprise partnerships don’t fully address. The individual developer and startup pricing gap — INR 16,800/month for Claude Pro with no regional pricing adjustment — remains open and continues to generate friction in communities where Anthropic’s reputation is otherwise strong.

    What’s Missing: Singapore

    Singapore is notable by its absence in this APAC push. It consistently ranks as the highest per-capita Claude usage market globally, suggesting a user base that is already committed to the product. An office or partnership announcement in Singapore would be a natural complement to Sydney, but nothing has been announced. This is either a sequencing decision — Australia first, Singapore next — or a reflection of Singapore’s smaller enterprise market size relative to Japan, India, and Australia.

    Watch for a Singapore announcement in Q3 2026. The usage data makes it too obvious a gap to leave unfilled for long.

    Sources: Anthropic News | Infosys Press Release

  • Anthropic Plants Its Flag in Creative Tooling — What Claude for Creative Work Means for the Adobe Era

    Anthropic launched Claude for Creative Work on April 28, 2026, formalizing a product positioning that has been building since the Claude Design launch on April 17. The move puts Anthropic in direct competition with OpenAI’s image-generation-first creative pitch — but with a fundamentally different bet about what creative professionals actually need from AI.

    The Claude Design Foundation

    Claude Design, launched April 17 through Anthropic Labs, is the experimental product underneath the creative work positioning. It targets the quick-turnaround end of creative production: prototypes, slides, one-pagers, visual comps that need to exist fast without requiring a designer’s full attention. TechCrunch described it as “a new product for creating quick visuals” — which is accurate but undersells the strategic intent.

    Claude for Creative Work builds on top of Design by broadening the positioning to include writers, designers across disciplines, and creative professionals generally — not just the slide-deck-and-prototype use case that Design launched with.

    The Ecosystem Moat

    The creative tools landscape that Claude is entering isn’t neutral territory. Adobe, Blender, Autodesk, Ableton, and Splice represent decades of workflow lock-in across visual design, 3D, architecture and engineering, music production, and sample-based creation. Any AI tool that wants to be genuinely useful to creative professionals has to meet those workflows where they exist — as plugins, integrations, or API connections — rather than asking professionals to leave their primary tools.

    Anthropic’s approach appears to be positioning Claude as the intelligence layer that works alongside those tools rather than replacing them. This is a different bet than Midjourney or DALL-E, both of which are destination products — you go to them, generate something, and bring it back. Claude for Creative Work, by contrast, is pitched as the assistant that’s present throughout the creative process, across whatever tools the professional is already using.

    How This Differs from ChatGPT’s Creative Pitch

    OpenAI has led its creative positioning with image generation — GPT-4o’s image capabilities, the DALL-E integration, Sora for video. The implicit argument is that AI’s most valuable creative contribution is generating visual assets. Anthropic’s bet is different: that the more valuable creative contribution is the thinking, editing, structuring, and iteration that happens around asset generation, not the generation itself.

    For writers, this is an obvious win: Claude’s long-form reasoning and editing capabilities are measurably stronger on text tasks than those of image-focused models. For visual designers, the argument is less obvious but still coherent: a model that can critique a comp, suggest revisions, explain why a layout isn’t working, and draft the copy that sits alongside the visual is more useful across the whole project than a model that can only generate a new image.

    What to Watch

    Claude for Creative Work is a positioning launch more than a features launch — the underlying capabilities have been available for some time. The question is whether the positioning will be accompanied by the integration work that makes it real: native plugins for Adobe Creative Cloud, Ableton Live, Blender, and the other dominant creative tools. Without those integrations, “Claude for Creative Work” is a marketing frame. With them, it’s a genuine workflow play.

    Watch the Anthropic Labs pipeline for integration announcements over the next 60–90 days. That’s where the creative tools bet either gets substantiated or stalls.

    Sources: Anthropic News | TechCrunch — Claude Design

  • India’s Second-Largest IT Services Firm Picks Claude for Regulated AI — What the Infosys Partnership Means

    Infosys, India’s second-largest IT services company with over 300,000 employees and clients in virtually every regulated industry on the planet, announced a strategic collaboration with Anthropic on April 29, 2026. The partnership embeds Claude — including Claude Code — into Infosys Topaz AI, the company’s enterprise AI platform, targeting telecommunications, financial services, manufacturing, and software development verticals.

    What’s Actually Being Built

    The collaboration begins with a dedicated Anthropic Center of Excellence inside Infosys’s telecom practice. This isn’t a reseller agreement or a marketing partnership — it’s an engineering buildout. The Center of Excellence structure means Infosys is committing internal resources to develop Claude-powered workflows specific to telecom use cases, with the intent to replicate the model across the other three target verticals.

    Claude Code’s inclusion is significant. Enterprise AI deployments at IT services firms historically mean wrapping AI around existing workflows — summarization, document processing, customer-facing chatbots. Embedding Claude Code signals that Infosys is building AI into the software development lifecycle itself, which is where the highest-value, highest-margin work in IT services actually lives.

    Why Regulated Industries Are the Real Story

    Telecom, financial services, and manufacturing are three of the most compliance-heavy verticals in enterprise technology. Data residency requirements, audit trails, explainability mandates, and sector-specific regulations (TRAI in India, FCA in the UK, SEC in the US for financial services) make AI deployment substantially more complex than in unregulated industries. The fact that Infosys is leading with these verticals rather than easier targets suggests genuine confidence in Claude’s compliance posture.

    For the Indian developer and enterprise market specifically, this partnership carries weight that a US-only announcement would not. Infosys is a trusted name in Indian boardrooms in a way that American AI labs, even well-regarded ones, simply aren’t yet. Anthropic gaining Infosys as an integration partner is a significant step toward the kind of enterprise credibility that accelerates procurement decisions.

    The INR Pricing Gap Remains Open

    It’s worth noting what the Infosys partnership doesn’t solve: direct access pricing for Indian developers and individual subscribers. Claude’s consumer and API pricing in India remains at ₹16,800/month for Pro — a figure that has generated sustained criticism in developer communities and on GitHub (issue #17432 on the Claude feedback tracker has been open for months with no response). Enterprise deals like the Infosys collaboration typically involve custom pricing negotiated well below list, which means the developers who most need relief from INR pricing aren’t the ones who benefit from this announcement.

    That gap is a content opportunity and a legitimate market gap. Anthropic’s APAC expansion is clearly accelerating — Sydney office, NEC Japan partnership, now Infosys India — but the individual developer pricing story in the region hasn’t kept pace with the enterprise narrative.

    Context: Anthropic’s APAC Quarter

    The Infosys announcement is the third significant APAC move in the last two weeks. Anthropic opened a Sydney office and named Theo Hourmouzis as GM for Australia and New Zealand on April 27. The NEC Japan multi-year workforce upskilling collaboration was announced on April 24. Three moves in five days — India, Japan, Australia — is not coincidence. This is a coordinated APAC buildout, and Infosys is the India anchor.

    Source: Infosys Press Release

  • Cowork Is No Longer a Research Preview — Here’s What Changes for Non-Developers Today

    Anthropic’s Cowork feature — the desktop automation tool aimed squarely at non-developers — moved out of research preview on April 29, 2026, and is now generally available on both macOS and Windows. It ships with a feature set that represents a meaningful step forward for anyone who has been running scheduled tasks, file workflows, and multi-step automations through Claude without writing a line of code.

    What’s New in the GA Release

    The GA release lands on Pro, Max, Team, and Enterprise plans. The headline additions are expanded analytics, OpenTelemetry support for enterprise observability, and role-based access controls — the last of these being the signal that Cowork is now ready for team deployments, not just individual power users.

    Persistent agent threads are now live across both mobile (iOS and Android) and desktop, which means you can start a Cowork task on your laptop and monitor or manage it from your phone. The new Customize section consolidates skills, plugins, and connectors into a single panel, replacing what was previously a scattered setup experience across multiple menus.

    Recurring and on-demand task scheduling is also included, enabling the kind of “set it and check it” automation workflows that Cowork was always promising but only partially delivering during the preview period.

    Why This Matters for Non-Developers

    Cowork’s core bet has always been that the most valuable use cases for AI automation don’t belong to engineers — they belong to operators, marketers, content teams, and business owners who know exactly what they want done but have no interest in writing Python scripts or JSON configs to get there. The GA release validates that bet with a production-grade infrastructure story: OpenTelemetry means IT and enterprise security teams can audit what the agents are doing; role-based access controls mean managers can delegate without handing over full system access.

    For the non-developer using Cowork day-to-day, the practical change is reliability. Research previews carry an implicit asterisk — “this works, mostly, until it doesn’t.” GA means the feature is supported, documented, and subject to real SLAs. Scheduled tasks that have been running through the preview period should now be more stable, and new automations can be built with the expectation that they’ll still work next month.

    The Enterprise Observability Story

    The inclusion of Cowork data in the Analytics API, alongside OpenTelemetry support, is worth noting separately. This is the detail that unlocks enterprise adoption at scale. Procurement and security teams at larger organizations have consistently asked for auditability before green-lighting AI automation tools. Cowork now has an answer: every agent action can be traced, logged, and routed into whatever observability stack the enterprise already runs.
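    For a sense of what "traced, logged, and routed" looks like in practice, here is a stdlib Python sketch of the span-style audit record an agent action might emit. The field names and the `emit_agent_span` helper are hypothetical illustrations of the general OpenTelemetry span shape; Cowork's actual span schema is not described in the release notes.

    ```python
    import json
    import time
    import uuid

    def emit_agent_span(action, attributes, sink):
        # Build a span-style audit record for one agent action and route it
        # to a log sink. Hypothetical schema, for illustration only.
        span = {
            "trace_id": uuid.uuid4().hex,        # correlates all actions in one task run
            "name": action,                      # e.g. "file.read" or "task.trigger"
            "start_time_unix_nano": time.time_ns(),
            "attributes": attributes,            # who/what/where, for the audit trail
        }
        sink.append(json.dumps(span))            # stand-in for a real trace exporter
        return span

    log_sink = []                                # stand-in for the enterprise log pipeline
    record = emit_agent_span(
        "task.trigger",
        {"cowork.task": "weekly-report", "cowork.user_role": "editor"},
        log_sink,
    )
    ```

    The point of the span shape is the `trace_id`: every action an agent takes during one task run shares it, so a security team can reconstruct the full sequence after the fact in whatever backend already ingests their traces.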

    For Team and Enterprise plan subscribers, this should accelerate internal approval processes for Cowork deployments that may have stalled during the preview period.

    What Stays the Same

    The fundamental Cowork model — Claude running autonomous tasks on behalf of the user, triggered by schedule or on-demand, guided by skills and connectors — is unchanged. If you’ve been running workflows in the preview, the transition to GA should be seamless. The Customize section reorganizes the setup experience but doesn’t require rebuilding existing configurations.

    Plans and pricing remain unchanged from the research preview tier placement — Cowork is included in Pro, Max, Team, and Enterprise, with no new add-on cost announced alongside the GA release.

    The Bottom Line

    Cowork GA is the milestone that turns a promising experiment into a product you can build operational workflows around. The combination of persistent threads, role-based access, and OpenTelemetry support brings Cowork into alignment with what enterprise buyers require from any automation tool they’re willing to run at scale. For individual users, the reliability improvement and the cleaner Customize panel are the day-one wins. For teams, the observability story is the green light many have been waiting for.

    Source: Anthropic Cowork Release Notes

  • Cascade View: South Everett’s Quietly Stable Neighborhood Most Outsiders Drive Through Without Noticing

    Last updated: April 30, 2026 | Cascade View is the south Everett neighborhood most outsiders drive through on Everett Mall Way without ever noticing it has a name. The 6,391 people who live there know better.

    Where it sits: Cascade View is a primarily residential south Everett neighborhood bounded on its southern and western edges by Everett Mall Way and Evergreen Way, with Twin Creeks immediately to the east and Mill Creek a short drive to the south. Population is about 6,391; median home sale prices run around $765,000 in the most recent twelve-month window — up roughly 30 percent year over year. The neighborhood association meets quarterly under chair Michael Trujillo, who also chairs the adjoining Twin Creeks association.

    The Neighborhood People Drive Through to Get Somewhere Else

    If you’ve ever pulled off I-5 at Everett Mall Way to grab a coffee or hit the mall, you’ve been in Cascade View. Most people don’t realize it. The neighborhood doesn’t announce itself with the kind of arterial signage Boulevard Bluffs or Northwest Everett gets, and the commercial frontage along Everett Mall Way reads more like “south Everett retail strip” than “residential neighborhood with a name and a chair.”

    But step a couple blocks back from the arterial and Cascade View turns into one of south Everett’s most stable single-family residential pockets. The streets curve. The lots are wider than the apartment-dense corridors closer to Casino Road. The trees are mature. The dogs get walked. It’s the kind of neighborhood that gets quietly recommended to families relocating to the Everett area who want decent schools, a manageable commute, and a price point south of the city’s historic core.

    Where Cascade View Begins and Ends

    Cascade View sits in the southeast corner of the City of Everett, northeast of Mill Creek and northwest of Twin Creeks. The neighborhood’s southern and western borders are formed by Everett Mall Way and Evergreen Way, the two arterials that funnel commuters between south Everett, Mill Creek, and I-5. To the east, the neighborhood butts up against the Twin Creeks corridor; to the north, it feeds into the broader south Everett residential grid.

    The whole footprint is about 1,522 occupied housing units, per the most recent demographic estimates available through Point2Homes and Niche. Of those, 60.8 percent are owner-occupied — a higher rate than south Everett’s apartment-dense corridors closer to Casino Road, but lower than the historic-core neighborhoods like Northwest Everett or Port Gardner. The remaining 39.2 percent are renter-occupied, which is consistent with what you’d expect from a neighborhood that’s mostly single-family but has a meaningful supply of duplexes and townhomes mixed in.

    The People Who Live Here

    Cascade View skews younger than Everett as a whole. The median age is 35, and adults between 25 and 44 make up about 32.2 percent of the neighborhood — the family-formation cohort. Another 23.6 percent are between 45 and 64, and roughly 13 percent are 65 and older. Average household income in 2023, the most recent year of full data, came in at $126,102.

    Demographically, Cascade View is among the more diverse residential pockets in south Everett. Roughly 56.1 percent of residents identify as White, 16.5 percent as Asian, and 6 percent as Black. About 70.4 percent of residents are U.S.-born citizens, 15.9 percent are naturalized citizens, and 13.7 percent are non-citizens — a profile that tracks closely with the broader south Everett pattern documented in the desk’s coverage of Stations Unidos and the Casino Road corridor.

    What a Cascade View Home Costs

    The neighborhood’s housing market has moved sharply over the past year. Per Homes.com’s most recent twelve-month rolling data, the median sale price for a Cascade View home was about $765,457 — up roughly 30 percent over the prior twelve-month period. NeighborhoodScout’s broader estimate puts the median real estate price closer to $643,898, reflecting different methodology and a larger sample window. Either figure tells the same basic story: Cascade View is no longer the entry-level south Everett bargain it was a decade ago.

    Rentals are a similar story. Average rent in Cascade View runs around $2,855 — meaningfully above Everett’s citywide average, but a notch below comparable Mill Creek and Lynnwood pricing. The math reflects the neighborhood’s position: residential enough to feel like a real neighborhood, accessible enough to I-5 and Everett Mall Way that it doesn’t carry the “you’ll need a car for everything” tax some of the more remote pockets do.

    The Neighborhood Association — Quarterly, Not Monthly

    The Cascade View Neighborhood Association is one of the more active in south Everett. Chair Michael Trujillo — a longtime fixture on Everett’s Council of Neighborhoods — currently chairs both Cascade View and the adjoining Twin Creeks association, with the explicit hope that a Twin Creeks resident will eventually step up so the two seats can be split again.

    Starting in 2023, the association shifted from monthly meetings to quarterly Community Meetings — a format the chair has said is meant to bring civic leaders directly into the neighborhood: Everett Police, Everett Fire, Everett Parks, and Everett Traffic departments cycle through the agenda alongside neighborhood updates. The quarterly cadence is also more sustainable for a volunteer-run association in a neighborhood where most adults are working full time and raising kids.

    Meeting dates and locations are published on the City of Everett’s neighborhood calendar at everettwa.gov/384/Cascade-View and on the association’s public Facebook page. Anyone who lives within the neighborhood boundaries can attend.

    Schools, Parks, and the Everyday

    Cascade View students are split between two school districts depending on the address — a quirk south Everett families know well. Some streets feed into Everett Public Schools and Cascade High; others fall inside Mukilteo School District boundaries and feed Mariner High School. The Mukilteo SD lookup at mukilteoschools.org/37434_3 is the cleanest way to confirm which district a given Cascade View address belongs to.

    For green space, the neighborhood is well-positioned. Forest Park is a short drive north on Evergreen Way, and the regional draw of Thornton A. Sullivan Park at Silver Lake is a quick hop to the northeast. Day-to-day errands run through Everett Mall and the surrounding retail along Everett Mall Way, which means most Cascade View households can hit groceries, hardware, and a coffee shop without getting on I-5.

    The Quiet Recommendation

    If you talk to long-term Cascade View residents, the recommendation comes out the same way every time: this is a neighborhood that delivers the practical version of what people say they’re looking for. Walkable streets without being downtown. Diverse without being transient. Stable without being stagnant. A volunteer chair who actually shows up. A market that’s appreciating, but not so fast that long-time owners feel taxed out.

    Cascade View is the next neighborhood on the city’s 19-neighborhood list to get a standalone spotlight on this desk — and after years of being the south Everett pocket people drive through to reach Mill Creek, that feels overdue.

    Frequently Asked Questions

    Where is the Cascade View neighborhood in Everett?

    Cascade View is a south Everett neighborhood located northeast of Mill Creek and northwest of Twin Creeks. Its southern and western borders are formed by Everett Mall Way and Evergreen Way. The neighborhood is part of the City of Everett’s 19 official neighborhoods and is administered through the Office of Neighborhoods.

    What is the population of Cascade View?

    Cascade View has a population of about 6,391, with roughly 1,522 occupied housing units. About 60.8 percent of those units are owner-occupied and 39.2 percent are renter-occupied. The median age is 35, and the average household income in 2023 was $126,102.

    How much do homes in Cascade View cost?

    The median sale price for a Cascade View home over the past twelve months was about $765,457, up roughly 30 percent year over year, per Homes.com data. NeighborhoodScout’s broader median real estate estimate is closer to $643,898, reflecting a longer sample window. Average rent in the neighborhood is around $2,855.

    Does the Cascade View Neighborhood Association still meet?

    Yes. The association shifted from monthly meetings to quarterly Community Meetings starting in 2023, with civic leaders from Everett Police, Fire, Parks, and Traffic departments cycling through agenda time. Chair Michael Trujillo also currently chairs the adjoining Twin Creeks association. Meeting dates are published on the City of Everett’s Cascade View page at everettwa.gov/384/Cascade-View.

    Which school district serves Cascade View?

    Cascade View is split between Everett Public Schools and Mukilteo School District depending on the address. Some streets feed into Cascade High School (EPS); others feed into Mariner High School (Mukilteo SD). The Mukilteo SD address lookup at mukilteoschools.org/37434_3 is the cleanest way to confirm which district a specific Cascade View address belongs to.