
Pixtral Large

mistral/pixtral-large

Pixtral Large is a 124B-parameter open-weights multimodal model built on Mistral AI Large 2. It scores 69.4% on MathVista, with DocVQA and ChartQA results published by Mistral AI at release, and its 128K-token context window fits at least 30 high-resolution images.

Tool Use · Vision (Image)
index.ts
import { streamText } from 'ai'

const result = streamText({
  model: 'mistral/pixtral-large',
  prompt: 'Why is the sky blue?',
})

// Consume the response as it streams in
for await (const text of result.textStream) {
  process.stdout.write(text)
}

What To Consider When Choosing a Provider

  • Zero Data Retention

    AI Gateway supports Zero Data Retention for this model on direct gateway requests (BYOK requests are not covered). See the documentation to configure it.

    Authentication

    AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
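As a rough sketch of what this looks like from the caller's side: the credential travels as a standard Bearer header, whether it is an API key or an OIDC token. The `gatewayHeaders` helper and the `AI_GATEWAY_API_KEY` fallback below are illustrative, not part of the SDK.

```typescript
// Minimal sketch (not the Gateway's actual internals): build the auth
// headers for a direct gateway request. Assumes the API key is supplied
// via AI_GATEWAY_API_KEY; an OIDC token would be sent the same way.
function gatewayHeaders(credential: string | undefined): Record<string, string> {
  if (!credential) {
    throw new Error('Missing credential: set AI_GATEWAY_API_KEY or provide an OIDC token')
  }
  return {
    Authorization: `Bearer ${credential}`,
    'Content-Type': 'application/json',
  }
}

const headers = gatewayHeaders(process.env.AI_GATEWAY_API_KEY ?? 'example-key')
```

Because the Gateway holds the provider credentials, nothing Mistral-specific appears here; the same header shape works for any model routed through it.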

Pixtral Large pairs a 1B-parameter vision encoder with a Mistral AI Large 2 text decoder, so both the text and image paths run on large backbones. When no images are present, it scores close to Mistral AI Large 2 on text-only tasks.

When to Use Pixtral Large

Best For

  • Mathematical reasoning over visual content:

    Charts, diagrams, and equations (69.4% on MathVista)

  • Document understanding at scale:

    Layout, tables, and embedded images matter together

  • High-resolution chart analysis:

    Data visualization requiring accurate extraction

  • Multi-image workflows:

    Workflows that need 30+ images in a single request context

  • High-quality vision and text:

    Applications where both capabilities must operate at high quality
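For the multi-image case above, a request can be sketched in AI SDK message form, where a user message carries a list of `text` and `image` content parts. The `buildChartMessage` helper and the URLs below are illustrative, not part of the SDK.

```typescript
// Sketch: assemble one user message carrying a prompt plus several chart
// images, in the AI SDK's multimodal content-part shape. Pixtral Large
// fits 30+ high-resolution images in its 128K-token context.
type TextPart = { type: 'text'; text: string }
type ImagePart = { type: 'image'; image: string }

function buildChartMessage(prompt: string, imageUrls: string[]) {
  const content: (TextPart | ImagePart)[] = [{ type: 'text', text: prompt }]
  for (const url of imageUrls) {
    content.push({ type: 'image', image: url })
  }
  return { role: 'user' as const, content }
}

const message = buildChartMessage('Compare the trends across these charts.', [
  'https://example.com/chart-1.png',
  'https://example.com/chart-2.png',
])
```

A message built this way would be passed to `streamText` via its `messages` option in place of the plain `prompt` string shown earlier.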

Consider Alternatives When

  • Lighter-weight vision model:

    You need lower inference cost (consider Pixtral 12B)

  • Text-only workloads:

    A text-only Mistral AI model avoids the compute overhead of the vision stack when your pipeline has no images

  • Apache 2.0 licensing:

    You need this rather than Mistral AI Research License or commercial license

Conclusion

Pixtral Large scored 69.4% on MathVista, with DocVQA and ChartQA results published at release, and Mistral AI reported a lead of roughly 50 ELO points over prior open multimodal models on the LMSys Vision Leaderboard. Text-only quality stays close to Mistral AI Large 2. Mistral AI has deprecated Pixtral Large, but it remains available through AI Gateway for existing integrations.

FAQ

What did Pixtral Large score on MathVista?

69.4%, as reported in Mistral AI's Pixtral Large announcement.

How many images fit in a single request?

At least 30 high-resolution images within the 128K-token context window.
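The "at least 30" figure is consistent with a simple token budget, assuming (as with Pixtral 12B's encoder) roughly 4096 tokens for a 1024x1024 image split into 16x16-pixel patches. The patch arithmetic below is an assumption for illustration, not a published spec for Pixtral Large.

```typescript
// Back-of-envelope token budget (assumed figures, not published specs):
// a 1024x1024 image at 16x16-pixel patches yields 64 * 64 patch tokens.
const tokensPerImage = (1024 / 16) * (1024 / 16) // 4096 tokens per image
const contextWindow = 128_000                    // 128K-token context
const maxImages = Math.floor(contextWindow / tokensPerImage)
console.log(maxImages) // 31 images, leaving budget for the text prompt
```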

How large is the vision encoder?

One billion parameters, 2.5x larger than Pixtral 12B's 400M-parameter encoder.

Does Pixtral Large match Mistral AI Large 2 on text-only tasks?

Yes. Text-only performance is comparable to Mistral AI Large 2. Adding vision capability doesn't degrade Pixtral Large's language understanding.

What license does Pixtral Large use?

Mistral AI Research License (MRL) for research and education. A Mistral AI Commercial License is available for production deployments.

How does Pixtral Large compare to proprietary multimodal models?

In Mistral AI's published evaluations at release, Pixtral Large's DocVQA and ChartQA scores were ahead of several proprietary multimodal models in the comparison set, including GPT-4o and Gemini-1.5 Pro. On the LMSys Vision Leaderboard, Pixtral Large led other open-weights models by approximately 50 ELO points.

Is Pixtral Large deprecated?

Yes. Mistral AI has designated Pixtral Large as deprecated in favor of newer multimodal models. It remains accessible through AI Gateway for existing integrations.