OpenAI Taps Google's TPUs in Strategic Shift from Nvidia Chips

OpenAI has begun renting Google's Tensor Processing Units (TPUs) to power ChatGPT and other products, marking the first significant use of non-Nvidia chips by the AI leader. This surprising collaboration between competitors comes as OpenAI seeks to diversify its computing infrastructure beyond Microsoft's data centers. The company hopes Google's cloud-based TPUs will help lower inference costs while maintaining performance for its rapidly growing AI services.
In a significant shift for the AI industry, OpenAI has started using Google's Tensor Processing Units (TPUs) to power ChatGPT and its other AI products, according to sources familiar with the arrangement.

The move represents the first time OpenAI has meaningfully incorporated non-Nvidia chips into its infrastructure. Until now, the company has been one of the world's largest purchasers of Nvidia's graphics processing units (GPUs), which it uses both for training AI models and for inference, the process by which a trained model generates predictions from new inputs.
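
To give a concrete picture of what inference means here, below is a minimal, hypothetical JAX sketch (not OpenAI's or Google's actual code): a toy model's forward pass that XLA compiles the same way for Nvidia GPUs or Google TPUs, which is what makes this kind of hardware swap practical. The model, shapes, and parameters are invented for illustration.

```python
# Illustrative sketch only: a toy forward pass standing in for a real
# model's inference step. All names and shapes here are hypothetical.
import jax
import jax.numpy as jnp

@jax.jit  # compiled once by XLA, then reused for every incoming request
def predict(params, x):
    # Two-layer network: the same code targets GPU or TPU backends.
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "w1": jax.random.normal(k1, (512, 1024)) * 0.02,
    "b1": jnp.zeros(1024),
    "w2": jax.random.normal(k2, (1024, 256)) * 0.02,
    "b2": jnp.zeros(256),
}

x = jnp.ones((8, 512))   # a batch of new inputs (e.g. user requests)
y = predict(params, x)   # inference: new inputs in, predictions out
print(jax.devices())     # lists TpuDevice entries on a Cloud TPU VM
print(y.shape)           # (8, 256)
```

The same script runs unchanged on either accelerator; only the devices reported by jax.devices() differ, which is why the switch described here is largely an infrastructure decision rather than a software rewrite.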

This surprising partnership between two major AI competitors signals OpenAI's strategic effort to diversify its computing resources beyond Microsoft's Azure cloud platform. While Microsoft remains OpenAI's largest investor and primary infrastructure provider, their relationship has shown signs of strain in recent months as OpenAI seeks greater independence.

For Google, the deal marks a significant win as it expands the external availability of its in-house TPUs, historically reserved for the company's own workloads. Google's TPU chips are custom-designed for machine learning tasks and can offer performance advantages over general-purpose GPUs for certain AI workloads. Google has already secured other high-profile customers, including Apple, as well as Anthropic and Safe Superintelligence, two AI companies founded by former OpenAI leaders.

OpenAI hopes that renting TPUs through Google Cloud will help lower the cost of inference computing, which has become increasingly important as ChatGPT's user base has grown to hundreds of millions. However, sources indicate that Google is not providing OpenAI access to its most powerful TPU chips, maintaining some competitive advantage.

This development comes amid OpenAI's broader infrastructure diversification strategy, which includes the $500 billion Stargate project with SoftBank and Oracle, and multi-billion dollar deals with CoreWeave for additional computing capacity. The company is also reportedly developing its first in-house chip to reduce dependency on external hardware providers.

As AI computing demands continue to escalate, with OpenAI's annual computing costs projected to run into the billions of dollars, this partnership demonstrates how even fierce competitors in the AI sector are willing to collaborate to meet the massive computing requirements driving the industry forward.
