Minimax M2.7
Minimax M2.7 is MiniMax's high-capability agentic model targeting end-to-end software engineering: project delivery, log analysis, bug troubleshooting, and code security. It supports a context window of 204.8K tokens and a max output of 131.1K tokens per request.
import { streamText } from 'ai'

const result = streamText({
  model: 'minimax/minimax-m2.7',
  prompt: 'Why is the sky blue?',
})

Frequently Asked Questions
What is native multi-agent collaboration in Minimax M2.7?
Minimax M2.7 operates within multi-agent networks, handling context passing, handoffs, and dependency tracking between agents without custom orchestration middleware.
What is dynamic tool search?
Instead of using a fixed tool list, Minimax M2.7 discovers and invokes relevant tools at runtime based on the task at hand. This expands its adaptability during long-horizon workflows.
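Conceptually, dynamic tool search means the relevant tools are retrieved per task rather than fixed up front. The sketch below illustrates that idea on the client side with hypothetical names (`Tool`, `discoverTools`, the sample registry); it is not MiniMax's implementation, and a real system would use embeddings or a search index rather than word overlap.

```typescript
// Hypothetical sketch of runtime tool discovery: index the available
// tools, then retrieve only the ones relevant to the current task
// before each model call, instead of always passing a fixed list.
type Tool = { name: string; description: string }

const registry: Tool[] = [
  { name: 'readLogs', description: 'fetch and filter service logs' },
  { name: 'runTests', description: 'execute the project test suite' },
  { name: 'grepRepo', description: 'search the codebase for a pattern' },
]

// Naive relevance match on shared words (length > 3 to skip stopwords).
function discoverTools(task: string, tools: Tool[]): Tool[] {
  const words = new Set(task.toLowerCase().split(/\W+/))
  return tools.filter((t) =>
    t.description
      .toLowerCase()
      .split(/\W+/)
      .some((w) => w.length > 3 && words.has(w)),
  )
}
```

For example, `discoverTools('filter the logs for errors', registry)` selects only `readLogs`, so the model call carries a smaller, task-relevant tool set.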
How does Minimax M2.7 differ from M2.5?
M2.5 introduced planning-before-coding for single-agent workflows. Minimax M2.7 adds multi-agent coordination, complex skill orchestration, and dynamic tool search for production-grade distributed agent systems.
Does Minimax M2.7 support professional office tasks beyond coding?
Yes. It performs well on professional office tasks including document processing and data workflows alongside software engineering.
Is there a speed-optimized variant of Minimax M2.7?
Yes. minimax/minimax-m2.7-highspeed is the throughput-optimized variant, running at roughly double the listed input and output rates of the standard variant.
How do I access Minimax M2.7 through the AI SDK?
Set the model identifier to minimax/minimax-m2.7 in your SDK configuration.
Can Minimax M2.7 handle production debugging across multiple services?
Yes. Production debugging and end-to-end project delivery are key improvements in the Minimax M2.7 generation.
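Since two model ids appear on this page, switching between them can be kept in one place. The helper below is purely illustrative (the function name is hypothetical); the ids are the ones listed above.

```typescript
// Hypothetical helper: choose between the standard and high-speed
// Minimax M2.7 model ids listed on this page.
function minimaxModelId(highThroughput = false): string {
  return highThroughput
    ? 'minimax/minimax-m2.7-highspeed'
    : 'minimax/minimax-m2.7'
}
```

The returned string can then be passed as the model field, e.g. `streamText({ model: minimaxModelId(true), prompt: '...' })`.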