At the Code with Claude conference on May 6, Anthropic announced a Managed Agents feature called Dreaming. The press covered it briefly — VentureBeat, 9to5Mac — but mostly as a developer story. The result from Harvey, a legal AI company that reported roughly a 6× increase in task completion rate, was cited but not unpacked. This is the non-developer version of that story, written for people who run workflows, manage operations, or use Claude professionally without writing code.
What Dreaming Actually Does
Here’s the mechanism in plain terms. Normally, when an AI agent finishes a session, it’s done. Whatever it learned — the patterns it noticed, the decisions it made, the context that turned out to matter — stays in that session and disappears when the session closes. The next session starts fresh.
Dreaming changes that. After a session ends, the agent reviews what happened: it reads its own memory store alongside the session transcripts and produces a new, improved version of its memory. Duplicates are merged. Stale information is replaced. New patterns that emerged from the session get incorporated. The next session doesn’t start from scratch — it starts from a richer, more accurate knowledge base.
The Anthropic documentation describes it this way: a dream reads an existing memory store alongside past session transcripts, then produces a new reorganized memory store with insights no single session could see alone. Docs: platform.claude.com/docs/en/managed-agents/dreams.
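For readers who want a concrete mental model of that loop, here is a minimal sketch of it in code. To be clear, this is not the Managed Agents API: the function names, the prompt wording, and the idea of memory as a single text blob are all assumptions made purely for illustration (the real interface is in the docs linked above).

```python
# Illustrative only: not the Managed Agents API. This shows the shape of a
# "dream" pass in the plainest possible terms.

def run_dream(memory_store: str, transcripts: list[str], call_model) -> str:
    """Produce a reorganized memory store from the old store plus transcripts.

    `call_model` stands in for whatever model call an agent framework makes;
    the prompt below is a hypothetical summary of the job a dream does.
    """
    prompt = (
        "You are reorganizing an agent's long-term memory.\n\n"
        "Existing memory store:\n" + memory_store + "\n\n"
        "Transcripts of recent sessions:\n" + "\n---\n".join(transcripts) + "\n\n"
        "Produce a new memory store: merge duplicates, replace stale entries, "
        "and add patterns that only become visible across sessions."
    )
    return call_model(prompt)

# Conceptually, the agent lifecycle then becomes:
#   session runs -> transcript saved -> run_dream() -> next session loads the new memory
```

The point of the sketch is the loop, not the code: the memory store is an input to its own replacement, which is what lets the next session start ahead of where the last one ended.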
This is a developer-layer feature — it requires implementation, not just subscribing to a plan. But understanding what it does helps you ask the right questions about the tools you’re evaluating and the agents you’re eventually going to run.
Why Harvey’s 6× Result Is the Right Hook
Harvey is a legal AI company. Their workflows are exactly the kind of work where this matters: complex research tasks that span multiple sessions, with context that compounds over time. A lawyer doesn’t approach a new matter without the knowledge they’ve accumulated from previous matters. Historically, AI agents did. Each new session was a blank slate.
Harvey reported roughly a 6× task completion rate increase after implementing Dreaming. That’s not a benchmark number from a controlled test — it’s a production system showing measurable improvement from session-to-session memory refinement. The mechanism is the same as how human expertise compounds: not by accumulating raw experience, but by periodically synthesizing and reorganizing what’s been learned.
Whether 6× holds across every use case is unknown. The direction of the effect is the signal. Agents that improve between sessions outperform agents that don’t. That gap widens over time.
The Cowork Parallel
We run our own Cowork setup — Claude operating scheduled tasks, content pipelines, and site management workflows on our behalf. The Dreaming announcement is relevant to us not because we’re going to implement it today (it’s a developer preview with invitation-only access), but because it’s the roadmap signal for where agentic AI is heading.
The systems we’re building now — Cowork routines, scheduled tasks, skill libraries — are the foundation that Dreaming-style memory will eventually sit on top of. Agents that accumulate context across sessions. Workflows that get better at your job the more you run them. That’s the direction. The Harvey result is the first public production evidence that the direction is real.
What This Looks Like for Non-Developer Workflows
Dreaming isn’t in consumer Claude products yet — it’s a developer preview. But the pattern it represents is worth thinking about now for anyone who uses AI in recurring work:
- Legal and compliance work: Each matter builds on prior matter context. An agent that synthesizes what it learned from 50 prior research sessions before starting the 51st is doing something closer to what an experienced associate does.
- Operations and project management: Recurring status meetings, weekly reports, vendor communication — these have patterns. An agent that notices “the Friday report always needs these three data sources” and incorporates that into its working memory doesn’t need to be told again (a hypothetical sketch of what that refined memory could look like follows this list).
- Content and editorial work: Our own content pipeline is a clear example. Style preferences, site-specific constraints, recurring topic clusters — knowledge that currently lives in skill files and desk specs. Dreaming is the mechanism that would let an agent accumulate and refine that knowledge from session experience rather than requiring it to be manually specified.
- Customer-facing workflows: Agents that handle recurring customer interactions and improve their response quality based on what worked in prior sessions — without a human having to manually update a prompt each time something changes.
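To make the Friday-report example concrete, here is a hypothetical before-and-after of a single memory store across a dream pass. The entries, dates, and structure are invented for illustration; real Managed Agents memory stores may look quite different.

```python
# Hypothetical example of what a dream pass might do to a handful of raw
# session notes: three scattered observations become one reusable pattern.

memory_before = [
    "2026-04-10: pulled the sales dashboard for the Friday report",
    "2026-04-17: Friday report needed the sales dashboard and the support ticket export",
    "2026-04-24: added churn figures to the Friday report after a follow-up request",
]

memory_after_dream = [
    "The Friday report always requires three sources: the sales dashboard, "
    "the support ticket export, and the churn figures (pattern seen across "
    "sessions on 2026-04-10, 04-17, and 04-24).",
]
```

The before version is what an agent accumulates by default; the after version is the kind of synthesized entry that saves a human from restating the same instruction every week.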
Current Access Status
To be direct about where this stands today:
- Dreaming: Developer preview only. Invitation-based access. Not available in claude.ai or any subscription tier.
- Multiagent Orchestration: Public beta. Available via the Claude API.
- Outcomes: Public beta. Available via the Claude API.
If you’re not a developer implementing your own Claude agents, Dreaming isn’t something you can use yet. It will become relevant when it moves to GA and when products built on top of it surface in tools you already use. The Harvey result is the preview of what those products will eventually be able to do.
Our Take
The briefing note we wrote when this story broke said: “Dreaming is the story the press mostly missed.” The Harvey 6× result landed in VentureBeat but was treated as a developer-tier data point. We think it’s more broadly significant than that.
What makes expertise valuable isn’t the accumulation of raw information — it’s the synthesis. A junior lawyer with access to the same case law as a senior partner isn’t equally useful, because the senior partner has synthesized 20 years of patterns into a working model that guides their reasoning. Dreaming is Anthropic’s attempt to give agents a version of that synthesis capability. It’s early, it’s in developer preview, and the 6× figure is from one company’s specific workflow. But the direction is clear, and it’s the right direction.
For anyone building with Claude or evaluating where agentic AI is heading: this is the development worth tracking most closely from the May 6 announcement. Not the SpaceX rate limits (immediately useful), not the Managed Agents public beta (available now), but Dreaming — because it’s the piece that changes the fundamental model of how AI agents improve over time.
Frequently Asked Questions
What is Claude Dreaming?
Dreaming is a Claude Managed Agents feature (developer preview as of May 2026) that lets AI agents review and reorganize their own memory between sessions. After a session ends, the agent reads its memory store alongside session transcripts and produces an improved memory store — merging duplicates, replacing stale information, and surfacing patterns from the session. The next session starts with a richer knowledge base than the previous one ended with.
What did Harvey report about Dreaming?
Harvey, a legal AI company, reported roughly a 6× task completion rate increase after implementing Dreaming in their Managed Agents workflow. Harvey’s use case involves complex legal research spanning multiple sessions — exactly the kind of work where session-to-session memory improvement has the highest value.
Can I use Dreaming in claude.ai?
No. As of May 2026, Dreaming is a developer preview available only to selected developers implementing their own Claude agents via the Anthropic API. It is not available in the claude.ai interface or through any subscription tier.
How is Dreaming different from Claude’s memory feature in claude.ai?
Claude’s memory feature in claude.ai extracts key facts from conversations and injects them into future sessions as a summary. Dreaming is a more sophisticated agent-layer system where the agent itself reviews and reorganizes its full memory store and session history, producing a restructured knowledge base — not just a collection of extracted facts. They serve different purposes at different layers of the stack.
When will Dreaming be available to non-developers?
Anthropic hasn’t announced a GA timeline for Dreaming. It will likely surface in consumer and professional products after the developer preview phase completes and the implementation patterns are well understood. Harvey’s result suggests the mechanism works in production; the path to broader availability depends on how Anthropic packages it for non-developer deployment.