Voyage 4 Large
Voyage 4 Large is the flagship embedding model in Voyage AI's Voyage 4 family. It uses a mixture-of-experts (MoE) architecture. In Voyage AI's published benchmarks it achieves state-of-the-art general retrieval, with serving costs about 40% lower than comparable dense models, and average gains over OpenAI text-embedding-3-large, Cohere Embed v4, and Gemini Embedding 001 in the same comparison. It shares one embedding space with voyage-4 and voyage-4-lite.
```ts
import { embed } from 'ai';

const result = await embed({
  model: 'voyage/voyage-4-large',
  value: 'Sunny day at the beach',
});
```

Frequently Asked Questions
What is the difference between Voyage 4 Large and voyage-4?
Voyage 4 Large is the MoE flagship with the highest average scores in Voyage AI's published Voyage 4 comparison.
voyage-4 is the mid-sized model. Both share the same embedding space as voyage-4-lite.
How does Voyage 4 Large compare to voyage-3-large?
Voyage AI reports better retrieval accuracy than voyage-3-large at a lower price, using MoE and the Voyage 4 training stack. Moving from Voyage 3 to Voyage 4 requires re-embedding because the embedding space changes.
What is the context window for Voyage 4 Large?
32K tokens. Size chunks so single requests stay under this limit on long documents.
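Exact token counts depend on the model's tokenizer, which isn't available client-side, so one common approach is to budget by characters with headroom. A minimal chunking sketch, assuming a rough 4-characters-per-token heuristic (the ratio, headroom factor, and function name are illustrative, not part of the Voyage API):

```typescript
const CONTEXT_TOKENS = 32_000;
const CHARS_PER_TOKEN = 4; // rough heuristic, not Voyage's actual tokenizer
const MAX_CHARS = Math.floor(CONTEXT_TOKENS * CHARS_PER_TOKEN * 0.8); // 20% headroom

// Split a long document into paragraph-aligned chunks that each fit the budget.
// A single paragraph larger than the budget still becomes its own chunk.
function chunkDocument(text: string, maxChars: number = MAX_CHARS): string[] {
  const paragraphs = text.split(/\n\n+/);
  const chunks: string[] = [];
  let current = '';
  for (const para of paragraphs) {
    if (current && current.length + para.length + 2 > maxChars) {
      chunks.push(current);
      current = '';
    }
    current = current ? current + '\n\n' + para : para;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each resulting chunk can then be embedded as a separate request.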
When should I use Voyage 4 Large over voyage-4-lite?
Use Voyage 4 Large when you need the strongest published Voyage 4 vectors, especially for one-time or infrequent document embedding. Use voyage-4-lite when you want fewer parameters for queries or symmetric indexing at lower compute.
How do I access Voyage 4 Large through Vercel AI Gateway?
Add your Voyage AI API key in AI Gateway settings, then send embedding requests through AI Gateway. AI Gateway authenticates requests and records usage.
Do I need to re-embed my data to switch from voyage-3-large?
Yes. Moving from Voyage 3 to Voyage 4 requires re-embedding because the embedding space is new. Within Voyage 4, you can often keep voyage-4-large document vectors and change query models if you use asymmetric retrieval.
Is Voyage 4 Large suitable for RAG applications?
Yes. Voyage AI positions it for retrieval-augmented generation and high-accuracy document indexing, including asymmetric setups where queries use a smaller Voyage 4 model.
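Because the Voyage 4 family shares one embedding space, document vectors from voyage-4-large can be ranked against query vectors from a smaller Voyage 4 model. A minimal in-memory ranking sketch over pre-computed vectors (the function names are illustrative; production systems would use a vector database):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored document vectors against a query vector, highest similarity first.
function rankBySimilarity(
  query: number[],
  docs: { id: string; vector: number[] }[],
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vector) }))
    .sort((x, y) => y.score - x.score);
}
```

The top-ranked chunks are then passed to the generation model as context.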
What is mixture-of-experts in Voyage 4 Large?
Voyage 4 Large routes tokens through expert subnetworks, activating only a subset of parameters per token. Voyage AI says this raises accuracy while keeping serving costs about 40% lower than comparable dense models.
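Voyage AI has not published its router design, but the general idea of top-1 expert routing can be sketched as a toy: each token vector goes to whichever expert's gating vector scores it highest (all names and shapes here are hypothetical):

```typescript
// Toy top-1 MoE router: pick the expert whose gating weight vector has the
// largest dot product with the token vector. Illustrative only; this is not
// Voyage AI's actual routing scheme.
function routeToExpert(token: number[], expertGates: number[][]): number {
  let best = 0;
  let bestScore = -Infinity;
  for (let e = 0; e < expertGates.length; e++) {
    const score = expertGates[e].reduce((sum, w, i) => sum + w * token[i], 0);
    if (score > bestScore) {
      bestScore = score;
      best = e;
    }
  }
  return best; // index of the selected expert
}
```

Since only the selected expert's parameters run for each token, inference cost scales with active parameters rather than total parameters, which is the usual reason MoE models can serve more cheaply than dense models of comparable quality.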