Ministral 3B
Ministral 3B is Mistral AI's smallest production model: a 3B-parameter, edge-optimized architecture that benchmarks above the larger Mistral 7B and is priced at $0.10 per million input tokens.
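The quoted input price translates directly into a cost estimate. A minimal sketch, assuming the $0.10-per-million-input-tokens rate above (output-token pricing is not covered here):

```typescript
// Sketch: input-cost estimate at the quoted $0.10 per million input tokens.
// Output tokens are priced separately and are not included.
const INPUT_PRICE_PER_MILLION = 0.10

function inputCostUSD(inputTokens: number): number {
  return (inputTokens / 1_000_000) * INPUT_PRICE_PER_MILLION
}

// e.g. a 2.5M-token batch of classification prompts
console.log(inputCostUSD(2_500_000))
```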
import { streamText } from 'ai'

const result = streamText({
  model: 'mistral/ministral-3b',
  prompt: 'Why is the sky blue?',
})

Frequently Asked Questions
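The snippet above starts a stream but does not consume it. A self-contained sketch of the consumption loop, where `fakeTextStream` is a stand-in for the AI SDK's `result.textStream` so no API key or network call is needed:

```typescript
// `fakeTextStream` stands in for `result.textStream` (an async iterable
// of text chunks) so this sketch runs offline.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ['Rayleigh ', 'scattering ', 'of sunlight.']) {
    yield chunk
  }
}

// Accumulate streamed chunks into the full response text.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const chunk of stream) {
    text += chunk
  }
  return text
}

async function main() {
  // With a real call, pass `result.textStream` here instead.
  const answer = await collect(fakeTextStream())
  console.log(answer)
}
main()
```

In a real handler you would typically forward each chunk to the client as it arrives rather than buffering the whole response.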
How can Ministral 3B outperform the larger Mistral 7B?
Mistral AI credits architectural refinements in the Ministral generation.
What types of tasks is Ministral 3B well suited for?
Structured function calling, classification, simple summarization, and knowledge retrieval. Use Ministral 3B as a lightweight dispatch layer in agentic systems where the work is routing, not deep reasoning.
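The dispatch-layer pattern can be sketched as a classify-then-route step. `classifyIntent` here is a hypothetical stand-in for a Ministral 3B function-calling request; a keyword heuristic replaces the model call so the sketch is self-contained:

```typescript
// Sketch of a lightweight dispatch layer: a small model labels the
// request, and the label selects a handler.
type Intent = 'summarize' | 'search' | 'other'

function classifyIntent(input: string): Intent {
  // In production this would be a Ministral 3B call returning a label;
  // a keyword heuristic stands in here so the sketch runs offline.
  if (/\bsummar/i.test(input)) return 'summarize'
  if (/\b(find|search|look up)\b/i.test(input)) return 'search'
  return 'other'
}

const handlers: Record<Intent, (input: string) => string> = {
  summarize: (input) => `summary-handler: ${input}`,
  search: (input) => `search-handler: ${input}`,
  other: (input) => `fallback-handler: ${input}`,
}

function dispatch(input: string): string {
  return handlers[classifyIntent(input)](input)
}
```

The point of the pattern is that the cheap model only emits a label; the expensive reasoning (or a specialized tool) runs behind the chosen handler.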
What context length does Ministral 3B support?
128K tokens.
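A rough pre-flight check against that 128K-token window can be sketched as follows. The 4-characters-per-token ratio is a common rule of thumb, not Ministral's actual tokenizer, so treat the estimate as approximate:

```typescript
// Sketch: estimate whether a prompt fits the 128K-token context window.
// chars/4 is a rough heuristic, not the model's real tokenizer.
const CONTEXT_LIMIT = 128_000

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

// Reserve headroom for the model's output when budgeting the prompt.
function fitsContext(prompt: string, reservedForOutput = 4_000): boolean {
  return estimateTokens(prompt) + reservedForOutput <= CONTEXT_LIMIT
}
```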
What licensing options are available?
The Mistral AI Commercial License covers production use.