Pixtral Large
Pixtral Large is a 124B open-weights multimodal model built on Mistral AI Large 2. It scores 69.4% on MathVista, posted strong DocVQA and ChartQA results in Mistral AI's release evaluations, and offers a 128K-token context window that fits at least 30 high-resolution images.
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'mistral/pixtral-large',
  prompt: 'Why is the sky blue?',
})
```
Playground
Try out Pixtral Large by Mistral AI. Usage is billed to your team at API rates. Free users (those who haven't made a payment) get $5 of credits every 30 days.
Providers
Route requests across multiple providers. Copy a provider slug to set your preference. Visit the docs for more info. Using a provider means you agree to their terms, listed under Legal.
- P50 throughput on live AI Gateway traffic, in tokens per second (TPS).
- P50 time to first token (TTFT) on live AI Gateway traffic, in milliseconds.
- Direct request success rate, on AI Gateway overall and per provider.

Visit the docs for more info.
More models by Mistral AI
About Pixtral Large
Released November 1, 2024, Pixtral Large is a 124B open-weights multimodal model built on Mistral AI Large 2. Pixtral Large's vision encoder carries one billion parameters, 2.5x larger than Pixtral 12B's encoder. The context window of 128K tokens accommodates at least 30 high-resolution images per request.
Pixtral Large scores 69.4% on MathVista. In Mistral AI's published evaluations at release, Pixtral Large's DocVQA and ChartQA scores were ahead of several proprietary multimodal models in the comparison set, including GPT-4o and Gemini-1.5 Pro. On the LMSys Vision Leaderboard, Pixtral Large led other open-weights models by approximately 50 ELO points. These results combine Mistral AI Large 2's text reasoning with the larger vision encoder's richer image representations.
Text-only performance stays comparable to Mistral AI Large 2, so Pixtral Large doesn't require a capability tradeoff when images are absent. Pixtral Large is available under the Mistral AI Research License for research and education, with a Mistral AI Commercial License for production use. Mistral AI has designated Pixtral Large as deprecated in favor of newer models.
What To Consider When Choosing a Provider
- Configuration: Pixtral Large pairs a 1B-parameter vision encoder with a Mistral AI Large 2 text decoder, so text and image paths both use large backbones. Pixtral Large scores close to Mistral AI Large 2 on text-only tasks when images are absent.
- Zero Data Retention: AI Gateway supports Zero Data Retention for this model via direct gateway requests (BYOK is not included). To configure this, check the documentation.
- Authentication: AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
When to Use Pixtral Large
Best For
- Mathematical reasoning over visual content: Charts, diagrams, and equations, backed by the 69.4% MathVista score
- Document understanding at scale: Workloads where layout, tables, and embedded images matter together
- High-resolution chart analysis: Data visualizations that require accurate value extraction
- Multi-image workflows: Requests that need 30+ images in a single context
- Combined vision and text: Applications where both modalities must perform at high quality
Consider Alternatives When
- Lower inference cost: A lighter-weight vision model such as Pixtral 12B may suffice
- Text-only workloads: A text-only Mistral AI model avoids the vision stack's compute overhead when your pipeline has no images
- Apache 2.0 licensing: You need a permissive license rather than the Mistral AI Research License or a commercial license
Conclusion
Pixtral Large scored 69.4% on MathVista, and Mistral AI's release evaluations reported strong DocVQA and ChartQA results along with a lead of roughly 50 ELO points over prior open-weights multimodal models on the LMSys Vision Leaderboard. Text-only quality stays close to Mistral AI Large 2. Mistral AI has deprecated Pixtral Large, but it remains available through AI Gateway for existing integrations.
Frequently Asked Questions
What is Pixtral Large's MathVista score?
Pixtral Large scores 69.4% on MathVista, per Mistral AI's announcement.
How many images can Pixtral Large process per request?
At least 30 high-resolution images within the context window of 128K tokens.
What is Pixtral Large's vision encoder size?
One billion parameters, 2.5x larger than Pixtral 12B's 400M encoder.
Does Pixtral Large maintain text-only quality?
Yes. Text-only performance is comparable to Mistral AI Large 2. Adding vision capability doesn't degrade Pixtral Large's language understanding.
What license covers Pixtral Large?
Mistral AI Research License (MRL) for research and education. A Mistral AI Commercial License is available for production deployments.
How does Pixtral Large perform on document understanding?
In Mistral AI's published evaluations at release, Pixtral Large's DocVQA and ChartQA scores were ahead of several proprietary multimodal models in the comparison set, including GPT-4o and Gemini-1.5 Pro. On the LMSys Vision Leaderboard, Pixtral Large led other open-weights models by approximately 50 ELO points.
Is Pixtral Large still actively maintained?
Mistral AI has designated Pixtral Large as deprecated in favor of newer multimodal models. Pixtral Large remains accessible through AI Gateway for existing integrations.