The First Four Months of 2026 Were Not Quiet
Most years in tech, the first quarter is a warm-up. Companies regroup, plans get drafted, and the real work ships in the second half. This year, the first four months were the show. Two of the four largest venture rounds in history closed before April was over. Both leading AI labs rebuilt their product lineups around agents. A frontier model dropped roughly every six weeks. And the regulatory clock on the EU AI Act kept ticking toward an August deadline that quietly reshapes how anyone with European customers ships software.
We get the same question from clients every Monday: what actually changed last week, and does any of it matter to me? So here is the recap. Cited, dated, and filtered for what an SMB or mid-market team should actually do about it.
The frontier is moving so fast that individual model releases have stopped being useful headlines. The story of 2026 so far is not which model is smarter. It is which company can ship a product around the model before the model becomes a commodity again.

Money First: Q1 Was a Liquidity Event for AI
It is hard to read anything else about this year correctly without first understanding the capital that landed. According to Crunchbase's Q1 2026 sector snapshot, global venture investment hit $300 billion in the quarter, with $242 billion -- roughly 80 cents of every venture dollar -- going to AI companies. Four rounds alone absorbed $188 billion: OpenAI, Anthropic, xAI, and Waymo.
OpenAI closed a $122 billion round at an $852 billion post-money valuation on March 31, up from the $110 billion the company had announced a month earlier. Anthropic raised a $30 billion Series G in February at a $380 billion valuation, and TechCrunch reported on April 30 that a roughly $50 billion follow-on round could close within two weeks at a valuation near $900 billion. xAI took a $20 billion Series E.
Q1 2026: Four Rounds Took 63% of Global Venture
$188B across four deals. The largest concentration of capital in venture history.
For SMB leaders, the practical takeaway is not the headline numbers. It is that the labs you are buying from are now permanently in the "build the next product, not the next demo" phase. Pricing pressure from competition will continue to favor you. So will the speed of new capabilities. The trade-off is that the surface area of what is possible has grown faster than most teams can absorb it -- which is why a recap like this one is worth keeping handy.
January: The Cowork Era Begins
The year opened with Anthropic shifting Claude from chat tool to operating environment. On January 12, Anthropic launched Claude Cowork in research preview, a desktop product that lets non-developers grant Claude access to a folder, a browser, and a scoped set of tools, then assign multi-step tasks. It packaged the computer-use capability that had been buried in Claude Code into a surface a knowledge worker could actually use, and it made the agent loop visible. The Aragon Research and AI Business writeups both treat it as the moment when "agent" stopped being a demo word.
Two weeks later, Anthropic announced a partnership making Claude the default model behind ServiceNow's Build Agent, and was selected to develop a public-sector AI assistant for GOV.UK. CEO Dario Amodei published a long essay titled The Adolescence of Technology, including findings on "alignment faking" observed during Opus 4 testing -- a rare moment of an AI lab publishing uncomfortable internal research while shipping a product on top of the same model.
At the same time, xAI closed its $20 billion Series E, and the EU AI Office quietly entered the final twelve months of a transition period that ends with full enforcement powers on August 2, 2026 -- a date most US SMBs do not yet have on a compliance calendar.
February: Plugins, Frontier Models, and a $30B Cheque
February was when the labs went from one-product companies to platform companies. Anthropic launched a plugin marketplace inside Cowork that bundled skills, connectors, slash commands, and sub-agents into installable packages. A skill is a folder of instructions and resources Claude loads dynamically; a plugin is the full toolkit -- the two together are the building blocks of the Anthropic agent ecosystem. Anthropic also closed its $30 billion Series G the same month.
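To make the skill-versus-plugin distinction concrete: Anthropic's published skill format is a folder with a SKILL.md manifest whose frontmatter tells the model when to load it. The helper below is a minimal sketch under that assumption -- the folder name, instructions, and scaffolding function are ours, not Anthropic's.

```python
from pathlib import Path
import tempfile

def scaffold_skill(root: Path, name: str, description: str) -> Path:
    """Create a minimal skill folder: a SKILL.md manifest plus a resources dir.

    The frontmatter (name, description) is what the model reads to decide when
    the skill is relevant; the body holds the instructions loaded on demand.
    """
    skill_dir = root / name
    (skill_dir / "resources").mkdir(parents=True, exist_ok=True)
    manifest = (
        f"---\nname: {name}\ndescription: {description}\n---\n\n"
        "## Instructions\n"
        "1. Read every file in resources/.\n"
        "2. Produce the weekly metrics summary as a single table.\n"
    )
    (skill_dir / "SKILL.md").write_text(manifest)
    return skill_dir

skill = scaffold_skill(
    Path(tempfile.mkdtemp()),
    "weekly-report",
    "Compile the weekly metrics report from the files in resources/.",
)
print(skill / "SKILL.md")
```

A plugin, in the marketplace sense, is one or more of these folders bundled with connectors and slash commands into a single installable package.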
OpenAI shipped GPT-5.3-Codex, a Codex-native frontier coding model with general reasoning, and put GPT-5.3-Codex-Spark into research preview for ChatGPT Pro -- a real-time coding model that generates code roughly fifteen times faster within a 128K context window. Google countered with Gemini 3.1 Pro on February 19, then rolled Nano Banana 2 -- a faster image model built on Gemini 3.1 Flash Image -- into Search, Lens, and the Gemini app a week later.
Perplexity quietly raised a $500 million Series D from Sequoia at an $11 billion valuation, an announcement that mattered less for the dollars than for the signal: the AI search layer is now a venture-backed category, not a feature.
March: Microsoft Picks a Side, OpenAI Closes the Round
On March 9, Microsoft and Anthropic announced Copilot Cowork, bringing Claude's agentic capabilities directly into Microsoft 365 Copilot. The same runtime that powers Cowork now operates inside Outlook, Teams, Excel, and the rest of the M365 suite, breaking a complex request into steps and reasoning across emails, meetings, and files. For Microsoft customers -- and that is most SMBs -- this was the most important infrastructure announcement of the quarter. The line between "Copilot product" and "Claude product" got blurry, on purpose.
OpenAI followed on March 17 with GPT-5.4, plus mini and nano variants, introducing 1M-token context, native computer use, and tool search across the same model family. GPT-5.1 Instant, Thinking, and Pro were retired from ChatGPT on March 11 to clear the runway. On March 31, OpenAI formally closed its $122 billion round, its monthly revenue having crossed $2 billion (roughly $24 billion annualized).
April: Two Weeks That Closed the Gap
We wrote a separate piece on the two-week stretch in mid-April that brought OpenAI level with Anthropic, so here is the short version. On April 8, Claude Managed Agents launched in public beta: a fully managed runtime with sandboxed code execution, checkpointing, credential management, scoped permissions, and end-to-end tracing. It removed the need to build your own agent loop.
On April 16, Claude Opus 4.7 shipped with a 1M-token context window at standard API pricing: 87.6% on SWE-bench Verified (vs. 80.8% on 4.6), 70% on CursorBench, vision resolution tripled, and -- critically -- no long-context premium on price. The next day Anthropic introduced Claude Design, a Labs product for slides, prototypes, one-pagers, and visual work, exporting to PDF, PowerPoint, Canva, or directly to Claude Code.
OpenAI answered on April 21 with ChatGPT Images 2.0, powered by gpt-image-2: the first image model with native reasoning, 2K resolution, multilingual text rendering, and coherent multi-image generation across a set. DALL-E 2 and DALL-E 3 are being retired on May 12, 2026. The same day, Google launched Deep Research and Deep Research Max on Gemini 3.1 Pro, autonomous research agents with MCP support and native chart and infographic generation.
And on April 28, Anthropic shipped nine Claude Connectors for creative software -- Adobe Creative Cloud, Blender, Autodesk Fusion, Ableton, Splice, Affinity, SketchUp, and Resolume Arena and Wire -- all built on the Model Context Protocol, all available across every Claude plan, including Free. "Claude for CAD" arrived a year before most analysts thought it would.
The First Four Months of 2026
Seventeen launches that mattered, January through April.
The Regulatory Backdrop You Probably Have Not Calendared
The most overlooked story of the year so far has nothing to do with any product. According to the European Commission's AI Act timeline, the EU AI Office gains full enforcement powers on August 2, 2026: the ability to request information, order model recalls, mandate mitigations, and impose fines. General-purpose AI provider obligations have technically been in effect since August 2025; the next twelve weeks turn that text into actual regulatory teeth.
We covered the SMB implications in detail in our EU AI Act survival guide. The short version: if you have EU customers and you ship anything that touches an AI model, the August deadline applies to you. Fines run up to 7% of global revenue. Most US small businesses we speak with are not yet aware this clock is running.
What This All Adds Up To, In One Sentence
Between January 1 and May 1, the AI industry stopped selling chatbots and started selling runtimes. Cowork, Managed Agents, Workspace Agents, Copilot Cowork, Deep Research, the entire Connectors family -- these are not chat features. They are managed environments for software that runs over time, on your data, against your tools. The center of gravity moved from "answer my question" to "do my work."
For an SMB or mid-market team, that shift is the only thing in this recap that needs an action item attached to it. The model you use will keep getting better and cheaper. The funding rounds are not your problem. But the runtime layer is where competitive advantage will get built or lost over the next eighteen months -- and most companies do not yet have a single person in-house whose job it is to think about it.
Three Things to Do Before June 1
- Pick an agent runtime and run a real pilot. Not a chat experiment. Pick one recurring, painful internal process -- a weekly report, a vendor onboarding, a customer email triage flow -- and put a Cowork project, a Managed Agent, or a Workspace Agent on it. Measure hours saved over 30 days. Without a real workflow attached, none of the products in this recap matter.
- Audit your EU exposure. Make a list of every product, model, vendor, and internal tool that touches AI. If any part of your customer base is in the EU, you have twelve weeks to map your usage to the GPAI obligations. The answer is usually simpler than it sounds -- but only after the audit, not before.
- Move at least one image workflow off DALL-E and onto gpt-image-2 or Claude Design. DALL-E retires May 12. If your marketing, product, or sales team has a recurring image process running on it, it breaks in days. The replacement options are better, cheaper, and more controllable -- but they need to be wired in before the cutover, not after.
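The audit in the second item is mostly a spreadsheet exercise: four columns, one row per AI touchpoint, then filter. A minimal sketch with hypothetical entries (the systems and model names are examples, not a template for your stack):

```python
from dataclasses import dataclass

@dataclass
class AIUsage:
    system: str         # product, internal tool, or workflow
    vendor: str         # who provides the model
    model: str          # which model it actually calls
    eu_customers: bool  # does any EU customer's data or output touch it?

# Hypothetical entries; the point is the four columns, not these rows.
inventory = [
    AIUsage("support-email triage", "OpenAI", "gpt-5.4-mini", eu_customers=True),
    AIUsage("internal weekly report", "Anthropic", "claude-opus-4.7", eu_customers=False),
    AIUsage("marketing images", "OpenAI", "gpt-image-2", eu_customers=True),
]

# Anything in this list is what you map against the GPAI obligations first.
eu_exposed = [u.system for u in inventory if u.eu_customers]
print(eu_exposed)
```

Everything with EU exposure gets mapped against the GPAI obligations; everything without it goes in a "revisit in Q3" pile.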
What We Are Watching for the Rest of the Year
The next four months are likely to look very different from the first four. Frontier model releases are clearly going to keep accelerating, but the more interesting battles are now in the layer above the model: which agent runtime wins inside Microsoft, whether Google's Workspace play turns into a third real ecosystem, and whether Anthropic's Connectors strategy pulls Adobe, Autodesk, and the rest of the creative stack into Claude's orbit before OpenAI ships an answer.
The most underrated story going into summer: Claude Skills and Plugins, OpenAI Plugins, and MCP Connectors are converging on the same shape. A skill you build in one ecosystem will increasingly run in the other. The portable artifact -- the skill, the connector, the plugin -- is the durable thing your team should be building, not loyalty to a model.
OneWave AI is a certified partner of both OpenAI and Anthropic. We help SMBs and mid-market teams run pilots on the runtimes above, audit AI Act exposure, and move existing workflows onto the products that replaced their predecessors during the four months covered by this post. Get in touch or book a free call.



