OpenAI Diversifies Chip Strategy, Clarifies Google TPU Testing

OpenAI has confirmed it's testing Google's tensor processing units (TPUs) but has no immediate plans for large-scale deployment. The AI leader is actively using Nvidia GPUs and AMD chips to meet growing computational demands, while simultaneously developing its own custom silicon. This strategic diversification highlights the critical importance of chip infrastructure in the competitive AI landscape.
OpenAI has clarified its position regarding Google's AI chips, stating that while early testing of Google's tensor processing units (TPUs) is underway, there are no immediate plans to deploy them at scale.

On June 30, an OpenAI spokesperson addressed reports suggesting the company would use Google's in-house chips to power its products. The clarification came after earlier news indicated OpenAI had begun renting Google's TPUs through Google Cloud to potentially lower the cost of inference computing, which reportedly accounts for over 50% of OpenAI's compute budget.

The AI leader is pursuing a multi-faceted chip strategy to manage its exponentially growing computational needs. Currently, OpenAI relies heavily on Nvidia's GPUs, which hold approximately 80% of the AI chip market. However, the company has been actively incorporating AMD's AI chips into its infrastructure. In a significant development, OpenAI CEO Sam Altman appeared alongside AMD CEO Lisa Su at AMD's Advancing AI event in June 2025, confirming OpenAI's adoption of AMD's MI300X chips and engagement with the upcoming MI400 series platforms.

Simultaneously, OpenAI is making substantial progress on developing its own custom AI chip. The company's in-house team, led by former Google TPU head Richard Ho, has grown to approximately 40 engineers working in collaboration with Broadcom. This custom silicon initiative is on track to reach the critical "tape-out" milestone this year, with mass production targeted for 2026 at Taiwan Semiconductor Manufacturing Company (TSMC).

OpenAI's diversified approach to chip infrastructure reflects the strategic importance of hardware in the AI race. As training and running increasingly sophisticated AI models require enormous computational resources, companies are seeking to optimize performance while managing costs and supply chain dependencies. With infrastructure partners including Microsoft, Oracle, and CoreWeave, OpenAI is positioning itself to maintain technological leadership while addressing the financial challenges of scaling AI systems.

Source: Reuters