AMD Challenges Nvidia with Helios AI Server and OpenAI Alliance

AMD has unveiled its next-generation Helios AI server system alongside a strategic partnership with OpenAI. The new rack-scale infrastructure, built around AMD's upcoming MI400 Series GPUs, aims to deliver 10x performance improvements for AI workloads when it launches in 2026. OpenAI CEO Sam Altman joined AMD CEO Lisa Su on stage to announce the collaboration, confirming OpenAI provided design input and will adopt AMD's latest chips to diversify its compute infrastructure.
AMD is making a bold move to challenge Nvidia's dominance in the AI hardware market with the announcement of its new Helios AI server system and a significant partnership with OpenAI.

At its Advancing AI 2025 event in San Jose on June 12, AMD CEO Lisa Su unveiled the company's comprehensive vision for an open AI ecosystem, showcasing both current and future hardware innovations. The centerpiece was the preview of Helios, AMD's next-generation rack-scale AI infrastructure slated for release in 2026.

"For the first time, we architected every part of the rack as a unified system," Su explained during the presentation. "Think of Helios as really a rack that functions like a single, massive compute engine."

The Helios system will be powered by AMD's forthcoming Instinct MI400 Series GPUs, which are expected to deliver up to a 10x performance improvement for inference on Mixture of Experts models compared with current-generation accelerators. These chips will feature 432GB of HBM4 memory and 19.6TB/s of memory bandwidth, positioning them as direct competitors to Nvidia's upcoming Vera Rubin platform.

In a significant industry development, OpenAI CEO Sam Altman joined Su on stage to announce that the ChatGPT creator would adopt AMD's latest chips as part of its compute portfolio. "When you first started telling me about the specs, I was like, there's no way, that just sounds totally crazy," Altman remarked. "It's gonna be an amazing thing."

OpenAI has been providing feedback during AMD's GPU development process, with Altman confirming they're already running some workloads on the MI300X and are "extremely excited" about the MI400 Series. This partnership marks a notable shift for OpenAI, which has primarily relied on Nvidia GPUs purchased through providers like Microsoft, Oracle, and CoreWeave.

AMD also launched its Instinct MI350 Series GPUs at the event, with availability set for Q3 2025. These accelerators deliver a 4x generation-on-generation increase in AI compute and up to a 35x improvement in inference performance. The MI350 Series features 288GB of HBM3E memory with 8TB/s of bandwidth and supports new FP4 and FP6 data types optimized for AI workloads.

Unlike Nvidia's proprietary approach, AMD is emphasizing open standards and interoperability. The company is using UALink, an open interconnect standard, for its rack systems, in contrast with Nvidia's proprietary NVLink. "The future of AI is not going to be built by any one company or in a closed ecosystem. It's going to be shaped by open collaboration across the industry," Su stated.

Despite the ambitious announcements, AMD shares ended 2.2% lower following the event, indicating that Wall Street remains skeptical about the company's ability to significantly disrupt Nvidia's estimated 90% market share in the near term. However, with the AI chip market projected to exceed $500 billion by 2028, AMD is positioning itself for long-term growth in this rapidly expanding sector.
