Three More Anthropic Releases. One Is Monumental.
Industry Insights | April 10, 2026 · 9 min read


Three Anthropic releases in three days: Mythos Preview, Managed Agents public beta, and Cowork GA. Here is what each one means for businesses building on Claude.


OneWave AI Team

AI Consulting


Between April 7 and April 9, Anthropic shipped three products in as many days. Two of them are significant. One of them -- Claude Managed Agents combined with Claude Code -- changes the fundamental economics of building production AI agents. Not incrementally. By an order of magnitude.

We have been building on the Anthropic stack for over two years. We have watched releases come in waves -- models, integrations, developer tools. But this week felt different. Not because of the volume, but because of what the releases tell you about the direction. Anthropic is not just building a better chatbot. It is building the infrastructure for autonomous work, and this week was the clearest proof of that yet.

Here is a breakdown of all three releases and an honest read on what matters most for small and mid-sized businesses that are either already building on Claude or evaluating whether to start.

Managed Agents is not a feature update. It is an entirely new layer of the platform -- one that removes the single biggest barrier to deploying AI agents in production: the infrastructure you have to build and maintain yourself before a single line of agent logic can run.
[Image: Cloud computing infrastructure representing the Claude Managed Agents hosted execution environment]

Release One: Claude Mythos Preview (April 7)

Mythos Preview is Anthropic's most capable model yet -- and it is not publicly available. According to Anthropic, it will not be anytime soon. The company announced Mythos Preview on April 7 alongside Project Glasswing, a gated early-access program for defensive cybersecurity research limited to approximately 40 partner organizations.

Why the limited rollout

Mythos is not being held back because it is not ready. It is being held back because it is too capable in the wrong hands. During internal testing, Mythos found thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser, exploiting them successfully on the first attempt in 83.1% of cases. It identified a 27-year-old flaw in OpenBSD that could allow a remote attacker to crash any machine running it. These are not theoretical demonstrations. They are documented exploits of live software billions of people use today.

The dual-use risk is real, so Anthropic's response was to restrict access to a carefully chosen list: Amazon, Apple, Microsoft, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Palo Alto Networks, and a handful of other organizations whose entire job is to defend against the kind of attacks Mythos can generate. Anthropic committed $100 million in usage credits and $4 million in direct donations to open-source security organizations as part of the initiative.

What it means for your business

For most SMBs, Mythos Preview is not relevant today. You cannot access it. But the existence of a model this capable -- and the reasoning behind keeping it gated -- tells you something important: the frontier has moved again, and the capabilities filtering down into commercially available models over the next twelve to eighteen months will be substantially more powerful than what you are using now. If you are still evaluating whether to build any AI workflows at all, this is a data point on what you are waiting for the competition to adopt while you deliberate. We have written before about why we chose Anthropic over OpenAI; Mythos is another reason that bet looks correct.


Release Two: Claude Managed Agents Public Beta (April 8)

This is the release that changes the most for businesses building production AI workflows. Claude Managed Agents launched in public beta on April 8, and it is the most significant infrastructure announcement Anthropic has made since the Messages API. According to Anthropic, the service shortens the development cycle for production agents from months to weeks. In our experience building agents for clients, that estimate is conservative for teams without dedicated DevOps support.

The problem it actually solves

If you have ever tried to build a production AI agent, you know the real work is not in the prompting. It is in everything around the prompting: constructing the agent loop, managing tool execution, handling sandboxing, persisting session state, standing up containers, writing error handling, and building observability into something that runs autonomously. Most teams we work with spend more engineering time on that scaffolding than on the actual agent logic they care about. Managed Agents takes that entire category of work off the table.

The mental model is four concepts. An Agent defines the model, system prompt, tools, MCP servers, and skills -- the logic layer. An Environment is a configured cloud container with pre-installed packages, language runtimes, and network access rules -- the runtime layer. A Session is a running instance of that agent working on a specific task -- the execution layer. And Events are the messages exchanged between your application and the agent: user turns, tool results, status updates -- the communication layer. You define the logic. Anthropic runs everything else. This connects directly to what we have described in What Is an AI Agent and Why Your Business Needs One -- the gap between understanding agents conceptually and actually deploying them in a way that holds up under real workloads. Managed Agents closes that gap substantially.
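To make the four layers concrete, here is a minimal sketch of them as plain data. To be clear: Anthropic's actual SDK was not shown in the announcement, so every class, field, and value below is illustrative only -- this models the concepts, not the real API.

```python
from dataclasses import dataclass, field

# Hypothetical model of the four Managed Agents concepts.
# Names and shapes are our own illustration, NOT Anthropic's SDK.

@dataclass
class Agent:
    """Logic layer: model, prompt, tools, MCP servers, skills."""
    model: str
    system_prompt: str
    tools: list = field(default_factory=list)
    mcp_servers: list = field(default_factory=list)
    skills: list = field(default_factory=list)

@dataclass
class Environment:
    """Runtime layer: the configured cloud container."""
    runtime: str
    packages: list = field(default_factory=list)
    network_rules: list = field(default_factory=list)

@dataclass
class Session:
    """Execution layer: one running instance on one task."""
    agent: Agent
    environment: Environment
    task: str
    events: list = field(default_factory=list)  # communication layer

    def emit(self, kind: str, payload: str) -> dict:
        # Events carry user turns, tool results, and status updates
        # between your application and the running agent.
        event = {"kind": kind, "payload": payload}
        self.events.append(event)
        return event

# Wire the layers together for a single task.
agent = Agent(model="claude-x", system_prompt="You qualify inbound leads.")
env = Environment(runtime="python3.12",
                  packages=["requests"],
                  network_rules=["allow:crm.internal"])
session = Session(agent=agent, environment=env, task="Qualify today's leads")
session.emit("user_turn", "Start with the newest lead.")
```

The point of the separation is that only the `Agent` definition is your problem; in the real service, the environment provisioning and session lifecycle behind the other two layers are Anthropic's.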

Agent Teams vs Subagents: the architecture decision that matters

Managed Agents supports two patterns for multi-agent work, and choosing the right one affects both performance and cost. Agent Teams coordinate multiple Claude instances with independent contexts, direct communication between agents, and a shared task list. They are suited for complex, parallelizable work -- researching ten vendors simultaneously, generating and reviewing a document at the same time, running parallel code analysis across a large repository. The tradeoff is token cost: multiple agents mean multiple context windows running concurrently.

Subagents operate within the same session as the main agent and only report results back to it. They are more economical for targeted, sequential tasks where parallelism does not add value. If your workflow is inherently linear -- extract, then format, then send -- Subagents are the right architecture. Understanding when each pattern applies is important for keeping Managed Agents costs predictable. We cover the underlying integration layer in our MCP Servers guide, which is worth reading alongside the Managed Agents documentation to understand how external tools connect into the environment.
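A back-of-envelope model makes the tradeoff visible. The formulas and numbers below are our own illustrative assumptions, not Anthropic pricing or documented behavior -- they just capture the structural difference described above: Agent Teams replicate context per agent, Subagents share one session context.

```python
# Illustrative token accounting for the two multi-agent patterns.
# Assumption: each Agent Team member carries its own full copy of the
# shared context, while Subagents reuse the main session's context.

def team_tokens(n_tasks: int, context_tokens: int, task_tokens: int) -> int:
    # Agent Teams: every agent loads its own context window, in parallel.
    return n_tasks * (context_tokens + task_tokens)

def subagent_tokens(n_tasks: int, context_tokens: int, task_tokens: int) -> int:
    # Subagents: one shared context, tasks handled sequentially.
    return context_tokens + n_tasks * task_tokens

# Ten vendor-research subtasks, 20k tokens of shared context,
# 5k tokens of work per subtask:
print(team_tokens(10, 20_000, 5_000))      # 250000
print(subagent_tokens(10, 20_000, 5_000))  # 70000
```

Under these assumptions the team pattern spends roughly 3.5x the tokens to buy wall-clock parallelism -- a good trade for the vendor-research case, a bad one for a linear extract-format-send pipeline.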

Claude Code turns the entire build cycle into minutes

This is what makes Managed Agents monumental rather than just useful. Claude Code -- Anthropic's agentic coding environment -- has Managed Agents onboarding built directly into its claude-api skill. You describe what you want the agent to do. Claude Code scaffolds the agent definition, the environment configuration, and the session structure. You review it, adjust it, and deploy. The time from describing an agent in natural language to having it running in production is measured in minutes -- not the hours it took to hand-write the scaffolding, and not the days it took to stand up the infrastructure around it.

Before this combination existed, building a production agent meant weeks of work that had nothing to do with the agent itself: writing the loop, standing up a sandbox, wiring in tool execution, setting up checkpointing, building observability, maintaining all of it when something broke. Claude Code generates the scaffold. Managed Agents runs the runtime. That entire category of pre-deployment work is gone. Anthropic confirmed that Rakuten teams were deploying specialist agents -- finance, HR, sales, marketing, engineering -- within a week of getting access to the beta. With Claude Code handling the configuration layer, that window compresses further still. This is not an incremental speed improvement on existing workflows. It is a different order of magnitude.

What it costs

Pricing is $0.08 per session hour on top of standard Claude token costs. For most SMB workflows -- automated reporting, document processing, research tasks, lead qualification -- a session will run for seconds to a few minutes. You are not paying a monthly infrastructure fee for capacity you may not use. You pay for what you run, at the granularity of individual sessions. That is a meaningful shift from the traditional enterprise software pricing model and one that makes experimentation genuinely low-risk.
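The arithmetic is worth doing once. Using the $0.08-per-session-hour rate quoted above, with a placeholder token spend (token pricing varies by model and is an assumption here):

```python
# Session-cost estimate for Managed Agents at the launch rate of
# $0.08 per session hour. Token cost is an illustrative input,
# not published pricing.

SESSION_RATE_PER_HOUR = 0.08

def session_cost(duration_minutes: float, token_cost_usd: float) -> float:
    runtime_cost = SESSION_RATE_PER_HOUR * (duration_minutes / 60)
    return runtime_cost + token_cost_usd

# A 3-minute automated-reporting run that spends $0.05 on tokens:
print(round(session_cost(3, 0.05), 4))  # 0.054
```

At that rate the session-hour charge is a rounding error next to token spend for short workflows, which is the point: there is no standing infrastructure bill between runs.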

Who is already in production

Three companies confirmed production deployment at launch. Notion is using Managed Agents for collaborative workspace delegation -- letting teams assign multi-step work to Claude agents that operate asynchronously inside their docs and databases. Rakuten is running enterprise agents in Slack via Claude Cowork. Sentry is using Managed Agents for automated debugging in production environments, routing error reports to agents that diagnose the failure and propose a fix without human initiation. These are not demos. They are production deployments from companies with serious engineering standards. That signal matters.

[Image: Developer working with AI agent infrastructure on multiple monitors showing agent session orchestration]

Release Three: Claude Cowork Goes GA (April 9)

Claude Cowork launched in research preview and has remained there until now. On April 9, Anthropic made it generally available across all paid subscription tiers and added the enterprise controls that make organization-wide deployment viable rather than just individual power-user adoption.

What changed with the GA release

The core product -- a desktop AI agent that can see your screen, use your applications, and complete multi-step tasks across Mac and Windows -- has not changed. What changed is the governance layer around it. Role-based access controls now let administrators organize users into groups via SCIM integration with identity providers, then define which Claude capabilities each group can access. Admins can set group spend limits, view usage analytics through both the admin dashboard and the Analytics API, and apply granular connector permissions -- enabling read access while blocking write operations for specific teams, for example.
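A minimal sketch of what granular connector permissions look like in practice -- enabling read while blocking write for a group. The policy shape below is hypothetical; the announcement describes the capability, not the admin API, so every key and group name here is our own illustration.

```python
# Hypothetical connector-permission policy: per-group, per-connector,
# per-action flags. Shape is illustrative, not Anthropic's admin API.

POLICY = {
    "sales": {"crm_connector": {"read": True, "write": False}},
    "admins": {"crm_connector": {"read": True, "write": True}},
}

def allowed(group: str, connector: str, action: str) -> bool:
    # Deny by default: unknown groups, connectors, or actions get False.
    return POLICY.get(group, {}).get(connector, {}).get(action, False)

print(allowed("sales", "crm_connector", "read"))   # True
print(allowed("sales", "crm_connector", "write"))  # False
```

The deny-by-default lookup is the property that matters for a rollout: a team you have not explicitly granted access gets nothing, which is the posture IT and legal teams expect before approving org-wide deployment.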

The expanded OpenTelemetry support is particularly significant for businesses with compliance or security requirements. Cowork now emits detailed event data compatible with the security and compliance tooling most enterprise organizations already run. If your IT or legal team has been holding up a broader Cowork rollout because the observability story was not clear enough, that objection is now answered.

The signal from Rakuten

Rakuten's confirmed production use of Cowork inside Slack -- announced alongside the Managed Agents launch -- is the most interesting real-world validation in this release cycle. Cowork running reliably inside enterprise communication workflows at scale, not just as a productivity tool for individual contributors, is a different product maturity claim entirely. We published a detailed breakdown of Claude Chat, Cowork, and Code when Cowork first launched. The GA release does not change that analysis, but it does mean the deployment conversation is now about rollout governance rather than platform readiness.


What These Three Releases Mean If You Are Building on Claude

We have helped businesses build their first AI agents across dozens of engagements. The pattern we see repeatedly is not a lack of use cases -- every business has workflows that are repetitive, rule-based, and time-consuming. The barrier is almost always infrastructure confidence. Teams understand what they want the agent to do. They do not have the engineering resources to build the environment it needs to run reliably in production without becoming someone's full-time job to maintain.

Managed Agents removes that constraint. The question is no longer "can we build this without a DevOps team?" The answer is yes. The question is now "which workflow do we start with?" That is a better problem to have, and it is the problem our AI strategy framework for SMBs is designed to answer. Start with the ugliest, most repetitive workflow you have. Define an agent. Define an environment. Run a session. The infrastructure is Anthropic's problem now.

The Cowork GA is equally significant for businesses that want to deploy AI across a whole team rather than just to individual power users. The governance and observability gaps that made organization-wide rollout risky are now closed. And Mythos Preview, while inaccessible to most businesses today, tells you that the capability ceiling on what will be commercially available in the near future is substantially higher than what you are currently budgeting for.

If you want a structured path into any of these releases for your specific business context, our 30-day client onboarding process is built around exactly this moment: identifying the right workflow, setting up the right infrastructure, and measuring results before the engagement ends. Reach out through the contact page if you want to talk through what that looks like for your business.

The barrier to deploying production AI agents just dropped significantly. Anthropic did not make Claude smarter this week -- they made it dramatically easier to ship agents that actually run. For every business that has been watching from the sidelines, that distinction is the one that removes your last excuse for waiting.
Tags: Claude Managed Agents · Anthropic April 2026 · Claude Cowork GA · Claude Mythos · AI agents for business · AI agent infrastructure · Anthropic · OneWave AI

Need help implementing AI?

OneWave AI helps small and mid-sized businesses adopt AI with practical, results-driven consulting. Talk to our team.

Get in Touch