Mistral Small
Mistral Small (released September 1, 2024) is a 22B-parameter mid-tier model with improved reasoning, human alignment, and code generation, offered at roughly 80% lower list pricing than the previous Mistral Small version in Mistral AI's September 2024 pricing update.
import { streamText } from 'ai'

const result = streamText({
  model: 'mistral/mistral-small',
  prompt: 'Why is the sky blue?',
})

Frequently Asked Questions
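The result returned by `streamText` exposes the generated tokens as an async-iterable text stream. A minimal sketch of draining that stream into a single string is below; the `collectText` helper and the mock generator are illustrative stand-ins (no network call is made), but the same loop works on the SDK's `result.textStream`.

```typescript
// Drain an async-iterable text stream into one string. Works with any
// AsyncIterable<string>, including result.textStream from streamText.
async function collectText(stream: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const chunk of stream) {
    text += chunk
  }
  return text
}

// Hypothetical mock standing in for result.textStream, so the sketch
// runs without credentials or a network call.
async function* mockStream(): AsyncGenerator<string> {
  yield 'Rayleigh '
  yield 'scattering.'
}

collectText(mockStream()).then((t) => console.log(t)) // prints "Rayleigh scattering."
```

In a real integration you would pass `result.textStream` to `collectText`, or iterate it directly to forward chunks to the client as they arrive.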
What are the input and output prices for Mistral Small?
This page lists the current rates. Multiple providers can serve Mistral Small, so AI Gateway surfaces live pricing rather than a single fixed figure.
How many parameters does Mistral Small have?
22 billion parameters.
Where does Mistral Small sit in the model lineup?
Between Mistral AI NeMo (12B) and Mistral AI Large 2, described as a convenient mid-point for enterprise use cases.
What capability improvements came with the September 1, 2024 release?
Improved human alignment, stronger reasoning, and better code generation compared to the prior Mistral Small version.
What tasks is Mistral Small well-suited for?
Translation, summarization, and sentiment analysis. Mistral AI positions Mistral Small for tasks that don't require the full breadth of Mistral AI Large.
How does Mistral Small compare to Mistral AI NeMo?
Mistral Small is larger (22B vs 12B parameters), more capable, and more expensive. Mistral AI NeMo uses the Tekken tokenizer, which compresses code and non-Latin scripts more efficiently. The choice depends on capability requirements and token-efficiency needs.