Just a day later, OpenAI made another surprising move:
o3 and o4-mini were launched simultaneously.
Both are reasoning models, and this time they can finally call a range of tools inside ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.
In other words, you can now ask o3 to generate an image of a Ghibli-style Ultraman hugging a child (Doge).
Beyond understanding and generating images, OpenAI says o3 and o4-mini are its first models that can integrate uploaded images into the chain of thought.
This means they can reason directly over images, not just describe them.
OpenAI states that o3 is its most powerful reasoning model to date, setting new SOTA results across programming, mathematics, science, and visual perception, and it is particularly strong on visual tasks such as analyzing images, charts, and graphics.
In external expert assessments, o3 makes 20% fewer critical errors than o1 on difficult real-world tasks.
Meanwhile, o4-mini is a smaller model optimized for fast, cost-effective reasoning.
In expert evaluations, o4-mini outperforms its predecessor o3-mini on non-STEM tasks as well as in data science.
On AIME 2024 and AIME 2025, it even outperforms o3.
Effective immediately, ChatGPT Plus, Pro, and Team users can try o3, o4-mini, and o4-mini-high directly, while the older o1, o3-mini, and o3-mini-high have been quietly retired.
This article comes from the WeChat official account "Quantum Bit" and is republished by 36Kr with authorization.