As run-rate revenue skyrockets past $30 billion, the AI heavyweight doubles down on American infrastructure and a diversified hardware strategy.
- A Gigawatt-Scale Future: Anthropic has secured a massive new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU compute capacity, slated to come online starting in 2027.
- Unprecedented Financial Explosion: The company’s financial growth is staggering, hitting a $30 billion revenue run-rate in 2026, with over 1,000 enterprise customers now spending more than $1 million annually.
- Diverse Hardware Strategy: Despite the new Google ties, Anthropic maintains a robust, multi-cloud approach, utilizing hardware from AWS, Google, and NVIDIA, while keeping Amazon as its primary cloud partner.
As a reporter covering the relentless pace of the artificial intelligence sector, I’ve seen my fair share of staggering numbers. Yet, the latest announcement from Anthropic feels like a tectonic shift in the AI landscape. The company behind the formidable Claude AI models has just unveiled a monumental agreement with Google and Broadcom, securing multiple gigawatts of next-generation TPU capacity. Expected to light up starting in 2027, this expansion isn’t just about plugging in more servers—it is a bold declaration of intent in the global AI arms race.
This infrastructure push is a direct response to an unprecedented financial explosion. According to Anthropic, demand from Claude customers has accelerated aggressively throughout 2026, and the numbers are frankly astonishing: the company’s run-rate revenue has eclipsed $30 billion, up from approximately $9 billion at the end of 2025. Enterprise adoption is equally dizzying. During its Series G fundraising in February, Anthropic noted that more than 500 business customers were spending upward of $1 million on an annualized basis. Today, less than two months later, that figure has doubled to exceed 1,000 customers.
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” stated Krishna Rao, CFO of Anthropic. Rao didn’t mince words regarding the scale of the deal, calling it the company’s “most significant compute commitment to date.”
From a broader industry perspective, this move signals a massive doubling down on domestic infrastructure. Anthropic confirmed that the vast majority of this new, multi-gigawatt compute capacity will be sited in the United States, reinforcing its November 2025 pledge to invest $50 billion in American computing infrastructure. It also builds on the expanded TPU capacity with Google Cloud announced last October, further cementing the company’s technical relationship with both Google and Broadcom.
What makes Anthropic’s strategy truly fascinating—and highly appealing to cautious enterprise clients—is its hardware and cloud agility. While this new Google megadeal is making headlines, Anthropic isn’t putting all its silicon in one basket. They continue to train and run Claude across a diverse array of hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs. This flexibility allows them to smartly match specific AI workloads to the chips best suited for the task, yielding better performance and crucial resilience for customers who rely on Claude for mission-critical operations.
Furthermore, longtime allies aren’t being sidelined. Anthropic was quick to clarify that Amazon remains its primary cloud provider and training partner, with the two companies continuing their close collaboration on “Project Rainier.”
Today, Claude holds a unique distinction in the market: it remains the only frontier AI model available to customers across all three of the world’s largest cloud platforms. Whether a business is operating on Amazon Web Services via Bedrock, Google Cloud via Vertex AI, or Microsoft Azure via Foundry, Claude is there. As we look toward 2027, the battle lines in the AI industry are being drawn not just in code, but in copper, silicon, and gigawatts. Anthropic’s latest maneuver proves that in the race to build the ultimate AI, raw compute power is the most valuable currency on earth—and they are spending it at a historic pace.