Anthropic Expands Partnership With Google and Broadcom for Multiple Gigawatts of Next-Generation Compute Capacity
Industry News · Anthropic · Google Cloud · Broadcom


Anthropic has announced a major expansion of its infrastructure through a new agreement with Google and Broadcom, securing multiple gigawatts of next-generation TPU capacity expected to go live starting in 2027. This move aims to support the development of frontier Claude models and meet surging global demand. Anthropic's financial growth has been remarkable, with run-rate revenue jumping from $9 billion at the end of 2025 to over $30 billion in early 2026. The company also reported a doubling of high-value business customers spending over $1 million annually. Most of this new compute will be based in the United States, reinforcing a $50 billion investment commitment to American infrastructure. While deepening ties with Google and Broadcom, Anthropic maintains a multi-platform strategy involving AWS Trainium and NVIDIA GPUs.

Source: Hacker News

Key Takeaways

  • Massive Infrastructure Expansion: Anthropic secured multiple gigawatts of next-generation TPU capacity through Google and Broadcom, slated for 2027.
  • Explosive Revenue Growth: The company's run-rate revenue surged to over $30 billion, more than tripling from $9 billion at the end of 2025.
  • Customer Base Doubling: Business customers spending over $1 million annually grew from 500 to over 1,000 in less than two months.
  • U.S.-Centric Investment: The majority of the new compute will be located in the United States, supporting a $50 billion commitment to domestic infrastructure.
  • Multi-Hardware Strategy: Anthropic continues to utilize a diverse hardware stack including Google TPUs, AWS Trainium, and NVIDIA GPUs for resilience and performance.

In-Depth Analysis

Scaling for Frontier AI Development

Anthropic's latest agreement with Google and Broadcom represents its most significant compute commitment to date. By securing multiple gigawatts of next-generation TPU capacity, the company is positioning itself to lead the next wave of AI development. CFO Krishna Rao emphasized that this disciplined approach to scaling is necessary to keep pace with the exponential growth of their customer base. The new capacity, expected to come online in 2027, will specifically power the frontier Claude models, ensuring that the company can meet the computational demands required for increasingly complex AI training and inference.

Unprecedented Financial and Market Momentum

The scale of this infrastructure investment is backed by extraordinary financial performance in early 2026. Anthropic revealed that its run-rate revenue has surpassed $30 billion, a massive leap from the $9 billion reported at the end of 2025. This growth is driven by a rapidly expanding enterprise sector; the number of business customers spending at least $1 million on an annualized basis has doubled from 500 in February 2026 to over 1,000 today. This rapid adoption underscores the critical role Claude is playing in high-value business environments.

Strategic Hardware Diversity and Domestic Focus

A core component of Anthropic's strategy is its hardware-agnostic approach. By training and running Claude on a mix of AWS Trainium, Google TPUs, and NVIDIA GPUs, the company can optimize specific workloads for the most suitable hardware. This diversity not only improves performance but also provides operational resilience. Furthermore, the decision to site the majority of this new compute in the United States aligns with Anthropic's November 2025 pledge to invest $50 billion in American computing infrastructure, strengthening the domestic AI ecosystem.

Industry Impact

This partnership signals a shift toward massive-scale infrastructure commitments in the AI industry. By securing gigawatts of power and compute, Anthropic is setting a high bar for what is required to compete at the frontier of AI. The collaboration with Broadcom and Google highlights the growing importance of custom silicon (TPUs) in reducing reliance on a single hardware provider. Additionally, the rapid revenue growth reported by Anthropic suggests that the enterprise market for AI is maturing faster than previously anticipated, with significant capital being deployed for high-end AI services.

Frequently Asked Questions

Question: When will the new compute capacity from Google and Broadcom become available?

Anthropic expects the next-generation TPU capacity to begin coming online starting in 2027.

Question: How much has Anthropic's revenue grown recently?

Anthropic's run-rate revenue has surpassed $30 billion as of April 2026, up from approximately $9 billion at the end of 2025.

Question: What hardware does Anthropic use to train Claude?

Anthropic utilizes a diverse range of AI hardware, including Google TPUs, AWS Trainium, and NVIDIA GPUs, to ensure performance and resilience.

Related News

Industry News

Solving the MCP Onboarding Friction: How a Simple 'Hello Page' Reduced Support Tickets for HybridLogic

Luke Lanchester of HybridLogic has identified a critical friction point in the adoption of the Model Context Protocol (MCP): the disconnect between developer-centric specifications and real-world user behavior. When HybridLogic launched an MCP server for their primary tool, they were met with a surge of support tickets from users who mistakenly believed the service was broken after encountering 401 errors or raw JSON in their browsers. To resolve this without the unsustainable task of building individual plugins for every emerging LLM client, Lanchester implemented a 'hacky' but effective solution. By serving a user-friendly HTML 'Hello Page' specifically to browser-based requests, the company successfully guided users on how to properly integrate the server into their AI clients, leading to a dramatic drop in support requests and a smoother onboarding experience.
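The article does not show HybridLogic's actual code, but the pattern it describes is straightforward content negotiation: inspect the request's Accept header and serve a human-readable landing page to browsers while letting MCP clients receive the normal protocol response. A minimal sketch of that idea, with all names and the HTML body being illustrative assumptions rather than HybridLogic's implementation:

```python
def respond(accept_header: str) -> tuple[int, str, str]:
    """Return (status, content_type, body) based on the request's Accept header.

    Browsers advertise a preference for text/html; MCP clients request JSON.
    Serving a friendly landing page to browser traffic avoids users mistaking
    a bare 401 or raw JSON response for a broken service.
    """
    if "text/html" in accept_header:
        # Browser request: show onboarding instructions instead of raw JSON.
        body = (
            "<html><body><h1>This is an MCP server</h1>"
            "<p>Add this URL to your AI client's MCP settings; "
            "it is not a regular web page.</p></body></html>"
        )
        return 200, "text/html", body
    # MCP client: fall through to the normal protocol handling (shown here
    # as an unauthenticated JSON error, standing in for the real handler).
    return 401, "application/json", '{"error": "authentication required"}'
```

The same check drops into any web framework's request handler; the key design choice is that browser detection is heuristic (via Accept) rather than relying on user agents, so protocol clients are never accidentally served HTML.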

Industry News

Experimenting with Claude AI for Open-Source Bounties: A Case Study on Automated Coding Agents

This article examines a real-world experiment where a developer attempted to use Claude, an AI coding agent, to earn money through open-source bounties on the Algora platform. Inspired by a viral success story of an AI agent earning $16.88, the author set out to replicate the results with a $20 token budget. The experiment involved analyzing 60 fresh GitHub issues and utilizing a suite of tools including the GitHub CLI and automated editing capabilities. Despite the structured approach and human-in-the-loop safety checks, the project resulted in $0 earnings after 48 hours. The findings highlight significant practical challenges in the bounty ecosystem, such as reserved issues for hiring and high competition, suggesting that the path to profitable autonomous AI coding is more complex than initial successes might indicate.

Industry News

The Haves and Have Nots of the AI Gold Rush: Examining the Tech Industry's Shifting Sentiment

This analysis explores the atmosphere surrounding the artificial intelligence boom, focusing on an emerging divide within the technology sector. Despite the momentum of the AI 'gold rush,' internal sentiment is reportedly souring. The report highlights a growing disparity between the 'haves' (those positioned to benefit from the current surge) and the 'have nots' who risk being left behind. This skepticism suggests that even within the heart of the tech industry, AI's rapid expansion is being met with unease rather than universal optimism. The analysis that follows breaks down the implications of this shift in sentiment and the structural inequality inherent in the current technological landscape, as described in recent industry observations.