OpenAI released o4-mini on April 16, 2025 alongside o3 as a cost-efficient reasoning model. It advances the compact reasoning model line (following o1-mini and o3-mini) with improvements in reasoning quality, efficiency, and multimodal capability.
A key advance is native vision support: o4-mini can reason over images, diagrams, mathematical notation, and screenshots, combining visual understanding with chain-of-thought analysis. Earlier mini reasoning models were text-only, so this opens up visual reasoning tasks at mini-tier pricing.
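As a sketch of what a vision request looks like, the snippet below builds a mixed text-and-image user message in the OpenAI Python SDK's chat-completions format. The prompt, image URL, and helper name are illustrative; the actual API call is shown commented out since it requires an API key.

```python
# Build a multimodal message: a text part plus an image_url part,
# following the OpenAI chat-completions content-part format.

def build_vision_message(prompt: str, image_url: str) -> dict:
    """Return a user message combining text and an image reference."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

message = build_vision_message(
    "What does this diagram show?",
    "https://example.com/diagram.png",  # placeholder URL
)

# Sending it would look like this (not executed here; needs an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="o4-mini", messages=[message])
```

The same message shape works for screenshots or photographed math, since the model treats each image part as visual context for its reasoning.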
The model supports function calling and tool use, making it suitable as the reasoning layer in lightweight agent architectures. Combined with the reasoning_effort parameter, it lets you build cost-optimized pipelines that apply just enough reasoning to each request.
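One way to apply "just enough reasoning" is to route each request to an effort level before calling the model. The sketch below pairs a hypothetical routing heuristic with an example tool schema; the `reasoning_effort` values ("low"/"medium"/"high") and the tool format follow the OpenAI API, but the heuristic, tool name, and thresholds are illustrative, and the API call itself is commented out since it needs a key.

```python
# Hypothetical cost-routing helper: escalate reasoning effort for
# longer or proof-style prompts, default to cheap "low" effort.

def pick_effort(prompt: str) -> str:
    """Crude heuristic mapping a prompt to a reasoning_effort level."""
    if len(prompt) > 2000 or "prove" in prompt.lower():
        return "high"
    if len(prompt) > 500:
        return "medium"
    return "low"

# Example tool schema for function calling (name and fields illustrative).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# A real call would look like this (not executed here; needs an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="o4-mini",
#     messages=[{"role": "user", "content": prompt}],
#     tools=[weather_tool],
#     reasoning_effort=pick_effort(prompt),
# )

print(pick_effort("What is 2 + 2?"))  # → low
```

Keeping the routing logic outside the API call makes the cost policy easy to test and tune independently of the model.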