Anthropic

Agentic Economy
Anthropic's position spans three layers of the market map:

- Layer 1: Agents, as Claude
- Layer 2: Creation & Orchestration, as Claude Code
- Layer 4: Foundation Models & Intelligence, as Claude
"The AI future could be astonishingly good."
— Dario Amodei, CEO of Anthropic, "Machines of Loving Grace" (October 2024)

Anthropic is an AI safety company and the creator of Claude, one of the most capable large language models. Founded in 2021 by Dario Amodei, Daniela Amodei, and other former OpenAI researchers, Anthropic builds powerful AI systems while investing deeply in alignment, interpretability, and responsible deployment. In the agentic economy, Anthropic's strategy is depth over breadth: three layers with remarkable focus, betting that model quality and developer ecosystem can win without vertical integration.

Claude and Constitutional AI

Claude is distinguished by its Constitutional AI (CAI) training approach, in which model behavior is guided by an explicit set of written principles rather than purely by human feedback. Claude's strengths in long-context understanding, nuanced analysis, and code generation have made it a leading choice for enterprise AI deployment and agentic AI applications. Claude is one of the two leading frontier agents at Layer 1 of the agentic economy.
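At the supervised-learning stage, Constitutional AI works by having the model draft a response, critique it against the constitution's principles, and then revise it. The sketch below shows only that control flow: `model` is a stand-in stub (the real method calls a language model at every step), and the two example principles are illustrative, not Anthropic's actual constitution.

```python
# Toy sketch of the Constitutional AI critique-and-revision loop.
# `model` is a hypothetical stub so the control flow runs end to end.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with dangerous activities.",
]

def model(prompt: str) -> str:
    """Stand-in for a language-model call (hypothetical stub)."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str, rounds: int = 2) -> str:
    """Draft a response, then repeatedly critique and revise it
    against each constitutional principle."""
    response = model(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            critique = model(
                f"Critique this response against the principle "
                f"'{principle}':\n{response}"
            )
            response = model(
                f"Revise the response to address this critique:\n"
                f"{critique}\nOriginal response:\n{response}"
            )
    return response

print(constitutional_revision("Explain how vaccines work."))
```

In the published method, the revised outputs then become training data, so the principles shape behavior without a human labeling each example.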

Explosive Enterprise Growth

Demand from Claude customers has accelerated dramatically in 2026. Anthropic's run-rate revenue surpassed $30 billion by April 2026, up from approximately $9 billion at the end of 2025: more than a 3x increase in under four months. Over 1,000 business customers now each spend more than $1 million annually on Claude, a figure that doubled from 500+ in less than two months. Growth is accelerating rather than merely compounding, and Anthropic's enterprise-first strategy, with its higher margins and more predictable revenue, underpins CEO Dario Amodei's confidence that the company can scale aggressively while managing financial risk more conservatively than competitors he characterizes as taking a "YOLO" approach to capital expenditure.

Model Context Protocol (MCP)

Anthropic developed and open-sourced the Model Context Protocol (MCP), which lets AI models connect to external tools, data sources, and services. MCP has rapidly become foundational infrastructure for the agentic web, with broad ecosystem adoption across competing AI providers. With over 17,000 MCP servers now available, MCP is winning the protocol adoption race, exhibiting Reed's Law dynamics in which a network's value grows exponentially with the number of subgroups it can form.
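On the wire, MCP is layered on JSON-RPC 2.0: a host sends requests such as `tools/list` and `tools/call` to an MCP server over stdio or HTTP. The sketch below only constructs and serializes such a request with the standard library; the tool name and arguments are illustrative, not a real server's.

```python
# Building an MCP-style JSON-RPC 2.0 request (transport not shown).
import json

def mcp_request(request_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request as used by MCP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Ask a (hypothetical) server to invoke one of its tools.
call = mcp_request(1, "tools/call", {
    "name": "search_docs",                   # illustrative tool name
    "arguments": {"query": "retry policy"},  # illustrative arguments
})
print(call)
```

Because every server speaks this same small request/response vocabulary, any MCP-capable model can use any MCP server, which is what drives the combinatorial, Reed's Law-style value of the ecosystem.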

Claude Code and Agentic Development

Claude Code and the Claude Agent SDK are sophisticated agentic systems capable of autonomous multi-step reasoning, tool use, and self-improving software workflows. With 4% of GitHub commits now authored by Claude Code, and a plausible path to 20% or more, Anthropic's tools sit at the center of the self-improving software loop, in which AI agents debug and enhance the very tools they depend on.
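At its core, an agentic coding system runs a loop: the model proposes an action, a harness executes it as a tool call, and the result is fed back until the model declares the task done. The sketch below shows that loop in stubbed form; the model, tool names, and outputs are illustrative stand-ins, not Claude Code's actual implementation.

```python
# Minimal stubbed sketch of an agentic tool-use loop.

def stub_model(transcript: list) -> dict:
    """Stand-in for a model call: request a tool once, then finish."""
    if not any(m["role"] == "tool" for m in transcript):
        return {"action": "run_tests", "args": {"path": "tests/"}}
    return {"action": "done", "result": "All tests pass."}

TOOLS = {
    "run_tests": lambda args: f"ran tests in {args['path']}: 12 passed",
}

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Alternate model proposals and tool executions until done."""
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = stub_model(transcript)
        if step["action"] == "done":
            return step["result"]
        output = TOOLS[step["action"]](step["args"])  # execute the tool
        transcript.append({"role": "tool", "content": output})
    return "step limit reached"

print(agent_loop("Fix the failing test"))  # -> All tests pass.
```

The self-improving aspect comes from where this loop is pointed: when the tools being edited and tested are the agent's own tooling, each pass can make the next pass more capable.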

The Google-Anthropic Compute Alliance

Anthropic does not own its own compute infrastructure or silicon — it relies on cloud partners, primarily Amazon and Google. But this relationship has deepened into a full strategic alliance. In April 2026, Broadcom announced it will supply Google with custom TPUs and networking through 2031, with Anthropic securing access to approximately 3.5 gigawatts of TPU-based computing capacity beginning in 2027. This is a deliberate bet that the protocol and developer layers matter more than infrastructure ownership — while ensuring that compute constraints don't cap growth. The alliance benefits both sides: Google captures AI growth on Google Cloud even when Anthropic wins enterprise deals over Google's own Gemini offerings, and Anthropic gains access to the world's largest pool of AI compute. As Ben Thompson observed, a world where Anthropic wins enterprise over OpenAI both weakens Google's primary competitor in the consumer AI space and gives Google the revenue certainty to continue outspending OpenAI on infrastructure.

Safety-First Scaling

Anthropic's Responsible Scaling Policy establishes capability thresholds that trigger increased safety measures as models grow more powerful. Combined with investment in mechanistic interpretability, this reflects a philosophy that consequential AI companies must internalize safety as a core engineering discipline. This conservative ethos extends to Anthropic's capital strategy: rather than making speculative trillion-dollar infrastructure bets, Amodei has described the company's approach as carefully balancing upside capture against bankruptcy risk — buying enough compute to support strong growth scenarios while accepting that it may not be able to serve the absolute maximum demand if growth exceeds even optimistic projections.