OpenAI's AI Achieves Gold Medal in Math Olympiad Challenge

OpenAI has announced that its experimental reasoning language model achieved gold medal-level performance at the 2025 International Mathematical Olympiad (IMO), solving 5 of 6 problems under the same conditions as human contestants. The result marks a significant advance in AI reasoning, demonstrating the kind of sustained creative thinking previously considered uniquely human. It comes as OpenAI prepares to launch GPT-5, which is expected to unify the company's specialized models, including advanced reasoning capabilities, into a single system.

In a significant milestone for artificial intelligence, OpenAI has announced that its latest experimental reasoning model has achieved gold medal-level performance at the 2025 International Mathematical Olympiad (IMO), widely regarded as the world's most prestigious mathematics competition.

The model solved five of the six problems on the 2025 IMO, earning 35 of 42 possible points (each problem is worth seven points), a score equivalent to a gold medal. What makes the achievement particularly remarkable is that the AI operated under the same strict conditions as human contestants: two 4.5-hour exam sessions with no access to tools, the internet, or external assistance.

"This represents a new level of sustained creative thinking compared to past benchmarks," said Alexander Wei, an OpenAI researcher who announced the breakthrough. Wei noted that the reasoning time horizon has progressed from simple math problems that take top humans about 0.1 minutes to solve, to IMO problems that require approximately 100 minutes of concentrated effort.

Unlike previous AI systems designed specifically for mathematical competitions, OpenAI's model is a general-purpose reasoning language model that incorporates new experimental techniques in reinforcement learning and test-time compute scaling. Three former IMO medalists independently graded the model's submitted proofs, with scores finalized after unanimous consensus.
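
Test-time compute scaling broadly means spending more computation per problem at inference time, for example by sampling many candidate solutions and keeping the strongest one. The sketch below illustrates only that generic best-of-n idea; the generator and scorer functions are hypothetical placeholders and do not describe OpenAI's actual system.

```python
# Generic best-of-n sketch of "test-time compute scaling": sample several
# candidate answers and keep the highest-scoring one. The generator and
# scorer below are hypothetical stand-ins, not OpenAI's components.
import random

def generate_candidate(problem: str, rng: random.Random) -> str:
    # Placeholder for one sampled solution attempt from a reasoning model.
    return f"attempt {rng.randint(0, 10**6)} at: {problem}"

def score_candidate(candidate: str, rng: random.Random) -> float:
    # Placeholder for a verifier or grader that rates a candidate solution.
    return rng.random()

def best_of_n(problem: str, n: int = 16, seed: int = 0) -> str:
    # More samples means more inference-time compute and better odds
    # of finding a strong answer.
    rng = random.Random(seed)
    candidates = [generate_candidate(problem, rng) for _ in range(n)]
    return max(candidates, key=lambda c: score_candidate(c, rng))

if __name__ == "__main__":
    print(best_of_n("IMO 2025, Problem 1", n=8))
```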

This achievement is particularly notable when compared to other leading AI models. In a recent evaluation by MathArena.ai, competitors including Gemini 2.5 Pro, Grok-4, and OpenAI's earlier o3 model all failed to reach even the bronze medal threshold on the same problems.

The timing of this breakthrough aligns with OpenAI's upcoming release of GPT-5, expected in the coming months. According to multiple sources, GPT-5 will unify OpenAI's various specialized models—including the reasoning capabilities demonstrated in this IMO achievement—into a single system with a smart router that automatically selects the most appropriate approach for each task.
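
To make the "smart router" idea concrete, here is a minimal sketch of the general pattern: classify an incoming request and dispatch it to one of several model back ends. The model names, keyword heuristic, and data structures are invented for illustration and are not a description of GPT-5's internals.

```python
# Illustrative model-router sketch: pick a back end per request based on a
# simple heuristic. All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelBackend:
    name: str
    good_for: set[str]

BACKENDS = [
    ModelBackend("fast-general", {"chat", "summarize"}),
    ModelBackend("deep-reasoning", {"math", "code", "proof"}),
]

def classify_task(prompt: str) -> str:
    # Toy keyword classifier; a real router would use a learned model.
    keywords = {"prove": "proof", "integral": "math", "def ": "code"}
    for keyword, task in keywords.items():
        if keyword in prompt.lower():
            return task
    return "chat"

def route(prompt: str) -> ModelBackend:
    # Send the request to the first back end that handles the detected task.
    task = classify_task(prompt)
    return next((b for b in BACKENDS if task in b.good_for), BACKENDS[0])

if __name__ == "__main__":
    print(route("Prove that the sum of two even numbers is even.").name)
```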

"The IMO gold LLM is an experimental research model. We don't plan to release anything with this level of math capability for several months," Wei clarified, suggesting that these advanced reasoning capabilities may be incorporated into future public releases.

Source: Analyticsindiamag
