Broadcom inks long-term deal to build Google's next-gen TPUs through 2031

Why it matters

Broadcom announced a long-term agreement with Google on April 6, 2026, to develop and supply custom Tensor Processing Units (TPUs) and networking components for Google's next-generation AI racks through 2031[1][2][3][5]. The deal positions Broadcom as Google's primary design partner for TPUs, which power advanced AI models like Gemini, and includes supply assurance for hardware connecting large-scale chip clusters[1][3]. Separately, Broadcom signed a tripartite arrangement providing Anthropic access to ~3.5 gigawatts of TPU-based computing capacity starting in 2027[1][2][3][4].

The key players are Broadcom (chip designer and supplier), Google (TPU developer and AI infrastructure builder), and Anthropic (an AI startup gaining compute capacity); no specific individuals or agencies are named[1][2][4]. The agreement builds on prior collaborations between the companies and comes amid surging demand for custom AI chips as alternatives to Nvidia's GPUs, with Google's TPUs driving cloud revenue growth[2]. Anthropic, with $30B+ run-rate revenue in 2026, uses a diverse hardware mix that includes TPUs alongside Amazon Trainium chips and Nvidia GPUs[2][4].

The agreement secures Broadcom's role in Google's expanding AI ecosystem, and investors reacted positively: shares rose 2.6-3% in after-hours trading[1][2][3]. The news lands amid intensifying AI infrastructure competition among players like Google, Microsoft, and OpenAI, and as Anthropic scales to meet booming demand for its Claude models[2][4]. The multi-year commitments signal stable revenue for custom ASICs and underscore the massive power and silicon investments behind AI[1][3].

Sources

