MiniMax M2.5
MiniMax M2.5 is a third-generation agentic model from MiniMax that handles full-stack development across Web, Android, iOS, Windows, and Mac platforms. It supports a context window of 1M tokens, a max output of 196K tokens, and completes tasks about 37% faster than M2.1.
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'minimax/minimax-m2.5',
  prompt: 'Why is the sky blue?',
})
```

Frequently Asked Questions
What does "native spec behavior" mean in MiniMax M2.5?
MiniMax M2.5 automatically produces a structured breakdown of functions, data structures, and UI components before writing code. This specification phase reduces implementation errors and improves coherence across multi-file outputs.
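The spec phase happens without special prompting, but you can also ask for it explicitly. A minimal sketch of building such a request (the `specFirstRequest` helper and the system-prompt wording are illustrative assumptions, not part of the model's API):

```typescript
// Illustrative helper: builds a request that asks the model to emit its
// structured spec (functions, data structures, UI components) before any
// code. The system prompt wording here is an assumption for demonstration.
function specFirstRequest(task: string) {
  return {
    model: 'minimax/minimax-m2.5',
    system:
      'Before writing code, output a structured spec listing the ' +
      'functions, data structures, and UI components you will implement.',
    prompt: task,
  }
}
```

The returned object can be passed to `generateText` or `streamText` from the AI SDK, as in the snippet above.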
How does MiniMax M2.5 handle unfamiliar codebases?
It adapts more effectively than M2.1 and solves problems with fewer search rounds. This makes it better at navigating repositories it hasn't seen before.
What platforms does MiniMax M2.5 support for full-stack development?
Web, Android, iOS, Windows, and Mac. The model covers the full development lifecycle across all five platforms.
How does MiniMax M2.5 compare to M2.1 on speed?
MiniMax M2.5 completes tasks about 37% faster than M2.1 through optimized token efficiency in its reasoning process.
What are MiniMax M2.5's SWE-Bench scores?
MiniMax M2.5 scores 80.2% on SWE-Bench Verified and 51.3% on Multi-SWE-Bench.
Is there a faster variant of MiniMax M2.5?
Yes. Select minimax/minimax-m2.5-highspeed where your provider exposes it. It targets high tokens-per-second for latency-sensitive applications.
Can MiniMax M2.5 be used in multi-agent pipelines?
Yes. Its native spec behavior and planning capabilities make it well-suited as a planner or orchestrator in multi-agent systems.
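As a sketch of that planner role, the pipeline below routes planning to M2.5 and execution to the high-speed variant. The model ids come from this page; the two-stage structure and the `planAndExecute` function are illustrative assumptions, and the `Generate` shape is written so it can be wired to `generateText` from the AI SDK (for example, `(opts) => generateText(opts)`):

```typescript
// Any text-generation function with this shape can drive the pipeline,
// e.g. generateText from the 'ai' package.
type Generate = (opts: { model: string; prompt: string }) => Promise<{ text: string }>

// Hypothetical two-stage pipeline: M2.5 plans, the high-speed variant executes.
async function planAndExecute(task: string, generate: Generate) {
  // Stage 1: M2.5 drafts a plan, leaning on its native spec behavior.
  const plan = await generate({
    model: 'minimax/minimax-m2.5',
    prompt: `Produce a step-by-step implementation plan for: ${task}`,
  })
  // Stage 2: the high-speed variant carries out the plan for lower latency.
  const result = await generate({
    model: 'minimax/minimax-m2.5-highspeed',
    prompt: `Carry out this plan:\n${plan.text}`,
  })
  return { plan: plan.text, output: result.text }
}
```

Injecting the generation function keeps the routing logic testable without network calls; in production you would pass a thin wrapper around the AI SDK.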