Magistral Medium 2509
Magistral Medium 2509 is Mistral AI's enterprise reasoning model, scoring 73.6% on AIME 2024 (single-sample) and offering traceable chain-of-thought output and multilingual reasoning across eight languages.
import { streamText } from 'ai'

const result = streamText({
  model: 'mistral/magistral-medium',
  prompt: 'Why is the sky blue?',
})

What To Consider When Choosing a Provider
Zero Data Retention
AI Gateway supports Zero Data Retention for this model via direct gateway requests (BYOK is not included). To configure this, check the documentation.

Authentication
AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
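As a sketch of what this looks like in practice, the gateway key is typically supplied through the environment rather than embedded in code. The variable name below (`AI_GATEWAY_API_KEY`) is the one the AI SDK reads for gateway requests by default; confirm the exact name against the documentation for your deployment.

```shell
# Provide the gateway key via the environment; the AI SDK picks it up
# automatically for gateway model requests, so no per-provider
# credentials (e.g. a Mistral API key) need to be managed in code.
export AI_GATEWAY_API_KEY="your-gateway-api-key"
```

When running on a platform that issues OIDC tokens, the token can stand in for the API key and no environment variable is needed.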
Magistral Medium 2509's transparent chain-of-thought output makes it particularly useful in regulated industries where auditability of AI reasoning steps is a compliance requirement.
When to Use Magistral Medium 2509
Best For
Mathematical reasoning: Quantitative problem solving requiring multi-step derivation
Regulated industry applications: Finance, legal, and healthcare where reasoning auditability matters
Business strategy analysis: Scenario modeling that benefits from structured thinking
Multilingual reasoning workflows: Spanning European and Arabic-script languages
Majority voting reliability: Applications where multiple reasoning passes improve answer reliability
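Majority voting can be approximated client-side by sampling the model several times on the same prompt and keeping the most frequent final answer. The sketch below assumes the answers have already been collected (for example, via repeated `streamText` or `generateText` calls); the `majorityVote` helper is illustrative, not part of any SDK.

```typescript
// Pick the most frequent answer among independent reasoning passes.
// Ties are broken in favor of the answer that reached its winning
// count first; returns undefined when no answers were collected.
function majorityVote(answers: string[]): string | undefined {
  const counts = new Map<string, number>()
  let best: string | undefined
  let bestCount = 0
  for (const answer of answers) {
    const count = (counts.get(answer) ?? 0) + 1
    counts.set(answer, count)
    if (count > bestCount) {
      bestCount = count
      best = answer
    }
  }
  return best
}

// Example: five hypothetical passes over the same math problem.
console.log(majorityVote(['42', '41', '42', '42', '41'])) // → '42'
```

The quoted 90% AIME figure uses 64 samples; in practice the sample count trades answer reliability against token cost, since each pass generates a full chain of thought.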
Consider Alternatives When
Cost-sensitive reasoning: Magistral Small's 70.7% AIME score meets your accuracy threshold
Straightforward instruction following: Tasks do not require extended reasoning
Agentic code execution: You need code execution rather than reasoning (consider Devstral models)
Conclusion
Choose Magistral Medium 2509 when problem complexity justifies enterprise-tier capability. It delivers strong mathematical reasoning, transparent audit trails, and multilingual coverage across eight languages, and it established Mistral AI's position in the reasoning model category.
FAQ
What does Magistral Medium 2509 score on AIME 2024?
73.6% single-sample, and 90% with majority voting at 64 samples.

How does Magistral Medium 2509 support auditability?
Magistral Medium 2509 exposes each reasoning step rather than returning only a final answer. Users and auditors can follow the inferential process from question to conclusion.

Which languages does Magistral Medium 2509 support?
English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.

How does Magistral Medium 2509 differ from Magistral Small?
Magistral Medium 2509 is the larger variant. Magistral Small is a 24B open-source model under Apache 2.0 that scores 70.7% on AIME 2024. Choose Magistral Small when you prioritize lower cost over the extra 2.9 points on AIME 2024 single-sample.

Is Magistral Medium 2509 suitable for software engineering tasks?
Yes. Software engineering is one of its listed domain expertise areas alongside business strategy and regulated industries.

Does Magistral Medium 2509 offer configurable thinking budgets?
Mistral AI hasn't publicly disclosed configurable thinking budgets for Magistral Medium 2509.