text-embedding-3-large
text-embedding-3-large produces 3072-dimensional vectors and posts the highest MTEB and MIRACL scores in the text-embedding-3 family. Built-in Matryoshka dimension reduction enables flexible quality-storage tradeoffs in production retrieval systems.
import { embed } from 'ai';

const result = await embed({
  model: 'openai/text-embedding-3-large',
  value: 'Sunny day at the beach',
});

What To Consider When Choosing a Provider
- Configuration: A practical workflow: embed your corpus at the full 3072 dimensions for archival quality, then use the dimensions parameter at query time to benchmark whether 256, 512, or 1024 dimensions produce acceptable recall for your dataset. This lets you tune the accuracy-storage curve without re-indexing.
- Zero Data Retention: AI Gateway supports Zero Data Retention for this model via direct gateway requests (BYOK is not included). To configure this, check the documentation.
- Authentication: AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
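The dimension-tuning workflow described above can be sketched with the AI SDK's embed call. This assumes the Gateway forwards OpenAI's dimensions parameter through providerOptions; verify the exact option name and placement against the Gateway documentation before relying on it.

```typescript
import { embed } from 'ai';

// Hypothetical query-time call: request a reduced 512-dimension vector while
// the indexed corpus stays at the full 3072 dimensions. The providerOptions
// shape shown here is an assumption, not confirmed Gateway API.
const { embedding } = await embed({
  model: 'openai/text-embedding-3-large',
  value: 'Sunny day at the beach',
  providerOptions: { openai: { dimensions: 512 } },
});
```

If the option is honored, the returned vector's length matches the requested dimension count, which is what you compare against the full-dimension index when benchmarking recall.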
When to Use text-embedding-3-large
Best For
- RAG and semantic search: Pipelines where the quality of retrieved passages directly determines output quality
- Multilingual retrieval: Cross-lingual search that benefits from the 23.5-point MIRACL gain over ada-002
- Large-scale vector databases: Indexes that benefit from a tunable dimensions parameter to balance precision against storage cost
- Recommendation systems: Similarity scoring that demands higher embedding fidelity than text-embedding-3-small
- Ada-002 migration: Teams that want the maximum quality step-up in a single change
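The similarity scoring mentioned in the retrieval and recommendation cases above typically reduces to cosine similarity between embedding vectors. A minimal sketch (the 'ai' package also ships a cosineSimilarity helper you can use instead):

```typescript
// Minimal cosine similarity between two equal-length embedding vectors.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```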
Consider Alternatives When
- Tight cost constraint: The smaller variant runs at roughly 6.5x lower cost per token
- Short, simple texts: The quality gap between large and small models becomes negligible on simple content
- Latency-critical queries: A lighter model fits your SLA better when query-time latency is the bottleneck
Conclusion
text-embedding-3-large delivers the highest embedding quality in the text-embedding-3 family with the flexibility to shrink vectors when full fidelity isn't required. For retrieval-critical applications on AI Gateway, particularly those spanning multiple languages, it provides a meaningful accuracy step up over text-embedding-3-small.
Frequently Asked Questions
How does Matryoshka dimension reduction work in practice?
The model encodes the most semantically important information into the first dimensions of each vector. When you request fewer dimensions via the dimensions parameter, you get a truncated vector that retains strong semantic structure. A 256-dimension vector from this model outperforms a full 1536-dimension ada-002 embedding on MTEB.
What is the MIRACL benchmark and why does the score matter?
MIRACL evaluates retrieval accuracy across multiple languages. text-embedding-3-large scores 54.9% versus ada-002's 31.4%, a 23.5-point gap that translates to substantially better search results when queries and documents are in different languages.
Can I embed at full 3072 dimensions and query at a lower dimension?
Yes, but the query and document dimensions must match at search time. The recommended approach is to embed your corpus at 3072 for archival accuracy, then re-embed queries at a test dimension to evaluate recall before committing to a reduced index.
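An alternative to re-embedding when evaluating a reduced index is to truncate the stored full-dimension vectors yourself and renormalize, which is how Matryoshka-style embeddings are commonly shortened. A sketch, where truncateEmbedding is a hypothetical helper rather than part of any SDK:

```typescript
// Hypothetical helper: keep the first n dimensions of a Matryoshka embedding
// and renormalize to unit length so cosine similarity stays well-defined.
function truncateEmbedding(vector: number[], n: number): number[] {
  const sliced = vector.slice(0, n);
  const norm = Math.sqrt(sliced.reduce((sum, x) => sum + x * x, 0));
  return sliced.map((x) => x / norm);
}
```

Whichever approach you use, truncate both queries and documents to the same dimension count before comparing them.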
How many dimensions should I use for my application?
It depends on your recall requirements and infrastructure constraints. Start at 3072 and measure recall. If it exceeds your threshold at 1024 or 512, use the smaller size to save storage and speed up lookups. There is no universal right answer; the tradeoff is application-specific.
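One simple way to put a number on "measure recall" is to treat the full 3072-dimension index as ground truth: for each benchmark query, check what fraction of its top-k documents the reduced-dimension index also returns. The function below is illustrative, not part of any library:

```typescript
// Illustrative recall@k: fraction of the full-index top-k document IDs that
// the reduced-dimension index also retrieves for the same query.
function recallAtK(fullTopK: string[], reducedTopK: string[]): number {
  const reduced = new Set(reducedTopK);
  const hits = fullTopK.filter((id) => reduced.has(id)).length;
  return hits / fullTopK.length;
}
```

Averaging this score over a representative query set gives you the recall curve to compare against your threshold at 512, 1024, and 3072 dimensions.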
Does text-embedding-3-large support batch requests?
Yes. Multiple texts can be embedded in a single API call. For indexing pipelines processing millions of documents, batching is the standard approach to maximize throughput.
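For indexing pipelines, a batching loop feeds fixed-size slices of the corpus into each call. The chunk helper below is illustrative; the 'ai' package's embedMany function accepts an array of values per call, so each batch maps to one request:

```typescript
// Illustrative helper: split a large corpus into fixed-size batches before
// passing each batch as the `values` array of an embedding call.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Batch size is a tuning knob: larger batches reduce request overhead, but check the provider's per-request input limits before choosing one.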
What are typical latency characteristics?
This page shows live throughput and time-to-first-token metrics measured across real AI Gateway embedding traffic.