text-embedding-3-small
text-embedding-3-small delivers higher MTEB scores than ada-002 at lower cost, with a 1536-dimension default that drops into existing pipelines and a flexible dimensions parameter for further storage savings.
```ts
import { embed } from 'ai';

const result = await embed({
  model: 'openai/text-embedding-3-small',
  value: 'Sunny day at the beach',
});
```

Frequently Asked Questions
Is text-embedding-3-small a direct replacement for ada-002?
Yes. The default output is 1536 dimensions, same as ada-002, so existing vector indexes work without rebuilding. You get a higher MTEB score and immediate cost savings.
How much does the multilingual retrieval improve over ada-002?
MIRACL scores go from 31.4% to 44.0%. For pipelines that handle queries or documents in multiple languages, this is a meaningful quality improvement that comes free with the model swap.
When does it make sense to pay for text-embedding-3-large instead?
When your application's quality is bottlenecked by embedding accuracy, for example, legal search, scientific literature retrieval, or high-stakes recommendation systems where a 2-point MTEB difference translates to noticeably better results.
Can I reduce the vector dimensions below 1536?
Yes. The `dimensions` parameter accepts any value below the default. Matryoshka training ensures the truncated vectors retain useful semantic structure, which is helpful for reducing storage costs in large indexes.
What are typical latency characteristics?
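Truncation can also be applied client-side to vectors you already have: keep the first N components, then re-normalize so cosine similarity stays meaningful. The sketch below illustrates that idea; `truncateEmbedding` and `cosineSimilarity` are illustrative helpers written here, not part of the AI SDK, and the sample vectors stand in for real API output.

```typescript
// Sketch: client-side truncation of a Matryoshka-style embedding.
// Matryoshka training front-loads semantic information, so a prefix
// of the vector remains a usable (lower-fidelity) embedding.

function truncateEmbedding(vector: number[], dimensions: number): number[] {
  const truncated = vector.slice(0, dimensions);
  // Re-normalize to unit length so dot products stay comparable.
  const norm = Math.sqrt(truncated.reduce((sum, x) => sum + x * x, 0));
  return truncated.map((x) => x / norm);
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  for (let i = 0; i < a.length; i++) dot += a[i] * b[i];
  const normA = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const normB = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (normA * normB);
}

// Illustrative vectors standing in for 1536-dimension API output:
const docVector = [0.12, -0.34, 0.56, 0.07, -0.21, 0.44];
const queryVector = [0.1, -0.3, 0.5, 0.1, -0.2, 0.4];

const docSmall = truncateEmbedding(docVector, 4);
const querySmall = truncateEmbedding(queryVector, 4);
const score = cosineSimilarity(docSmall, querySmall);
```

Requesting the smaller size directly from the API via `dimensions` is usually preferable, since you never store the full vector; client-side truncation is useful when re-embedding an existing index is impractical.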
This page shows live throughput and time-to-first-token metrics measured across real AI Gateway embedding traffic.