OpenAI has officially released o3-mini, a significant advancement in its reasoning model lineup that balances powerful capabilities with improved efficiency and affordability.
The new model, which was first previewed in December 2024, is specifically optimized for technical domains requiring precision and speed. According to OpenAI's benchmarks, o3-mini makes 39% fewer major errors on difficult real-world questions than its predecessor o1-mini, while responding approximately 24% faster.
What sets o3-mini apart is its adaptive reasoning capability. Users can select from three reasoning effort levels—low, medium, and high—allowing the model to adjust its 'thinking' process based on the complexity of tasks. This feature enables developers to fine-tune the balance between accuracy and response time according to their specific needs.
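In the API, this shows up as a single request parameter. The sketch below assumes the official `openai` Python SDK (v1.x) and its documented `reasoning_effort` parameter for o-series models; the prompt is purely illustrative.

```python
# Minimal sketch: requesting a specific reasoning effort level from o3-mini
# via the Chat Completions API (assumes the openai Python package, v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # one of "low", "medium", "high"
    messages=[
        {
            "role": "user",
            "content": "Prove that the sum of two even integers is even.",
        }
    ],
)

print(response.choices[0].message.content)
```

Dropping the effort level to "low" trades some accuracy for noticeably faster, cheaper responses, which is the trade-off the three settings are designed to expose.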
The model particularly excels in STEM applications, demonstrating impressive performance on benchmarks for programming, mathematics, and scientific reasoning. At high reasoning effort, o3-mini even outperforms the full o1 model on several tests, including the American Invitational Mathematics Examination (AIME) and software engineering benchmarks.
Pricing reflects OpenAI's focus on accessibility: o3-mini costs 63% less than o1-mini, at $1.10 per million input tokens ($0.55 per million cached input tokens) and $4.40 per million output tokens. This competitive pricing positions the model as an attractive option for businesses seeking advanced AI capabilities without prohibitive costs.
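To see what those rates mean in practice, here is a rough back-of-the-envelope estimate; the per-million rates come from the figures above, while the token counts are hypothetical placeholders chosen for illustration.

```python
# Rough cost estimate for a single o3-mini request, using the per-million-token
# rates quoted above. The token counts below are hypothetical.
INPUT_PER_M = 1.10    # USD per 1M (uncached) input tokens
CACHED_PER_M = 0.55   # USD per 1M cached input tokens
OUTPUT_PER_M = 4.40   # USD per 1M output tokens

def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one API call."""
    return (
        input_tokens * INPUT_PER_M / 1_000_000
        + cached_tokens * CACHED_PER_M / 1_000_000
        + output_tokens * OUTPUT_PER_M / 1_000_000
    )

# Example: a 3,000-token prompt (1,000 of it served from the prompt cache)
# producing a 2,000-token answer costs roughly a cent.
print(f"${request_cost(input_tokens=2_000, cached_tokens=1_000, output_tokens=2_000):.4f}")
```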
Enterprise adoption is expected to accelerate as o3-mini becomes available across OpenAI's product lineup. The model is accessible to all ChatGPT users, with premium subscribers receiving higher usage limits. ChatGPT Enterprise and ChatGPT Edu customers also gain access, while developers can integrate o3-mini through OpenAI's API.
As OpenAI continues to expand its model offerings with the recent addition of o3 and o4-mini in April 2025, o3-mini represents an important milestone in the company's mission to make advanced AI reasoning more accessible while maintaining the balance between capability, cost, and efficiency.