Pixtral Large
Pixtral Large is a 124B-parameter open-weights multimodal model built on Mistral Large 2. At release, Mistral AI reported 69.4% on MathVista along with strong DocVQA and ChartQA results, and the model's 128K-token context window fits at least 30 high-resolution images.
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'mistral/pixtral-large',
  prompt: 'Why is the sky blue?',
})

// Consume the stream as it arrives.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart)
}
```

Frequently Asked Questions
What is Pixtral Large's MathVista score?
Pixtral Large scored 69.4% on MathVista in Mistral AI's release announcement.
How many images can Pixtral Large process per request?
At least 30 high-resolution images fit within the 128K-token context window.
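A multi-image request can be sketched as a single user message carrying one text part plus an image part per image, following the AI SDK's message-content shape. The URLs below are hypothetical placeholders; substitute your own images and pass the message to `generateText` or `streamText` with the `mistral/pixtral-large` model ID.

```typescript
// Content parts in the AI SDK's multimodal message format (sketch).
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image'; image: string }

// Hypothetical image URLs; Pixtral Large's 128K-token context
// fits at least 30 high-resolution images in one request.
const imageUrls: string[] = Array.from(
  { length: 30 },
  (_, i) => `https://example.com/page-${i + 1}.png`,
)

// One user message: a text instruction followed by all 30 images.
const message = {
  role: 'user' as const,
  content: [
    { type: 'text', text: 'Summarize these 30 document pages.' },
    ...imageUrls.map((url): ContentPart => ({ type: 'image', image: url })),
  ] as ContentPart[],
}

console.log(message.content.length) // 31 parts: 1 text + 30 images
```

The message would then be sent as `messages: [message]` in the request options, for example `streamText({ model: 'mistral/pixtral-large', messages: [message] })`.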
What is Pixtral Large's vision encoder size?
One billion parameters, 2.5x larger than Pixtral 12B's 400M encoder.
Does Pixtral Large maintain text-only quality?
Yes. Text-only performance is comparable to Mistral Large 2. Adding vision capability doesn't degrade Pixtral Large's language understanding.
What license covers Pixtral Large?
Mistral AI Research License (MRL) for research and education. A Mistral AI Commercial License is available for production deployments.
How does Pixtral Large perform on document understanding?
In Mistral AI's published evaluations at release, Pixtral Large's DocVQA and ChartQA scores were ahead of several proprietary multimodal models in the comparison set, including GPT-4o and Gemini-1.5 Pro. On the LMSys Vision Leaderboard, Pixtral Large led other open-weights models by approximately 50 Elo points.
Is Pixtral Large still actively maintained?
Mistral AI has designated Pixtral Large as deprecated in favor of newer multimodal models. Pixtral Large remains accessible through AI Gateway for existing integrations.