# Vercel AI Gateway Supported Providers and Models

> List of all 264 models available through the [Vercel AI Gateway](https://vercel.com/ai-gateway).
> For API documentation, see https://vercel.com/docs/ai-gateway
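Models are addressed by their `provider/model` slug from the table below. As a minimal sketch of assembling a request, assuming the gateway's OpenAI-compatible chat completions endpoint (`https://ai-gateway.vercel.sh/v1`) and an `AI_GATEWAY_API_KEY` environment variable — both assumptions to verify against the API docs linked above:

```python
import json
import os

# Assumed OpenAI-compatible endpoint; confirm against the AI Gateway docs.
GATEWAY_URL = "https://ai-gateway.vercel.sh/v1/chat/completions"

def build_request(model_slug: str, prompt: str) -> tuple[dict, dict]:
    """Build headers and a chat-completion payload for a gateway model.

    `model_slug` is the `provider/model` identifier from the table,
    e.g. "anthropic/claude-sonnet-4.5".
    """
    headers = {
        # Header and env-var names are assumptions; see the gateway docs.
        "Authorization": f"Bearer {os.environ.get('AI_GATEWAY_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model_slug,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_request("openai/gpt-4o-mini", "Hello!")
print(json.dumps(payload))
```

The same slug format works regardless of which upstream providers (the Providers column) actually serve the model; the gateway handles routing.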

| Model | Type | Context | Input | Output | Providers | Tags |
|-------|------|---------|-------|--------|-----------|------|
| alibaba/qwen-3-32b | chat | 131.1K | $0.10/1M | $0.30/1M | bedrock, alibaba, deepinfra, groq | reasoning, tool-use |
| alibaba/qwen3-coder-30b-a3b | chat | 262.1K | $0.15/1M | $0.60/1M | bedrock, novita | reasoning, tool-use |
| alibaba/qwen3-max-thinking | chat | 256K | $1.20/1M | $6.00/1M | alibaba | reasoning, tool-use, implicit-caching |
| alibaba/qwen3.5-flash | chat | 1M | $0.10/1M | $0.40/1M | alibaba | vision, explicit-caching, file-input, reasoning, tool-use |
| alibaba/qwen3.5-plus | chat | 1M | $0.40/1M | $2.40/1M | alibaba | vision, explicit-caching, file-input, reasoning, tool-use |
| alibaba/qwen3.6-plus | chat | 1M | $0.50/1M | $3.00/1M | alibaba, fireworks | reasoning, tool-use, implicit-caching, vision, file-input |
| alibaba/qwen-3-235b | chat | 131.1K | $0.07/1M | $0.46/1M | novita, cerebras, deepinfra | reasoning, tool-use |
| alibaba/qwen3-235b-a22b-thinking | chat | 262.1K | $0.23/1M | $2.30/1M | novita, deepinfra | vision, tool-use, file-input |
| alibaba/qwen3-coder | chat | 262.1K | $0.30/1M | $1.60/1M | deepinfra, novita, alibaba | tool-use |
| alibaba/qwen3-coder-next | chat | 256K | $0.50/1M | $1.20/1M | togetherai, bedrock | reasoning, tool-use |
| alibaba/qwen3-coder-plus | chat | 1M | $1.00/1M | $5.00/1M | alibaba | tool-use |
| alibaba/qwen3-embedding-0.6b | embedding | 32.8K | $0.01/1M | $0.00/1M | deepinfra |  |
| alibaba/qwen3-embedding-4b | embedding | 32.8K | $0.02/1M | $0.00/1M | deepinfra |  |
| alibaba/qwen3-embedding-8b | embedding | 32.8K | $0.05/1M | $0.00/1M | deepinfra |  |
| alibaba/qwen3-max | chat | 262.1K | $1.20/1M | $6.00/1M | alibaba, novita | tool-use, implicit-caching |
| alibaba/qwen3-max-preview | chat | 262.1K | $1.20/1M | $6.00/1M | alibaba | tool-use, implicit-caching |
| alibaba/qwen3-next-80b-a3b-instruct | chat | 262.1K | $0.09/1M | $1.10/1M | alibaba, novita, deepinfra |  |
| alibaba/qwen3-next-80b-a3b-thinking | chat | 131.1K | $0.15/1M | $1.20/1M | alibaba, novita |  |
| alibaba/qwen3-vl-instruct | chat | 262.1K | $0.20/1M | $0.88/1M | alibaba, novita, fireworks, deepinfra | vision |
| alibaba/qwen3-vl-thinking | chat | 131.1K | $0.40/1M | $3.95/1M | alibaba, novita | vision, reasoning, tool-use |
| alibaba/qwen-3-14b | chat | 41.0K | $0.12/1M | $0.24/1M | deepinfra | reasoning, tool-use |
| alibaba/qwen-3-30b | chat | 41.0K | $0.08/1M | $0.29/1M | deepinfra | reasoning, tool-use |
| alibaba/wan-v2.5-t2v-preview | video | 0 | $0.00/1M | $0.00/1M | alibaba | text-to-video |
| alibaba/wan-v2.6-i2v | video | 0 | $0.00/1M | $0.00/1M | alibaba | image-to-video |
| alibaba/wan-v2.6-i2v-flash | video | 0 | $0.00/1M | $0.00/1M | alibaba | image-to-video |
| alibaba/wan-v2.6-r2v | video | 0 | $0.00/1M | $0.00/1M | alibaba | reference-to-video |
| alibaba/wan-v2.6-r2v-flash | video | 0 | $0.00/1M | $0.00/1M | alibaba | reference-to-video |
| alibaba/wan-v2.6-t2v | video | 0 | $0.00/1M | $0.00/1M | alibaba | text-to-video |
| amazon/nova-2-lite | chat | 1M | $0.30/1M | $2.50/1M | bedrock | reasoning, vision |
| amazon/nova-lite | chat | 300K | $0.06/1M | $0.24/1M | bedrock |  |
| amazon/nova-micro | chat | 128K | $0.04/1M | $0.14/1M | bedrock |  |
| amazon/nova-pro | chat | 300K | $0.80/1M | $3.20/1M | bedrock |  |
| amazon/titan-embed-text-v2 | embedding | 0 | $0.02/1M | $0.00/1M | bedrock |  |
| anthropic/claude-3-haiku | chat | 200K | $0.25/1M | $1.25/1M | anthropic, bedrock, vertexAnthropic | tool-use, vision, explicit-caching |
| anthropic/claude-3.5-haiku | chat | 200K | $0.80/1M | $4.00/1M | bedrock, vertexAnthropic | file-input, tool-use, vision, explicit-caching |
| anthropic/claude-3.7-sonnet | chat | 200K | $3.00/1M | $15.00/1M | bedrock, vertexAnthropic | file-input, reasoning, tool-use, vision, explicit-caching |
| anthropic/claude-haiku-4.5 | chat | 200K | $1.00/1M | $5.00/1M | anthropic, bedrock, vertexAnthropic | file-input, reasoning, tool-use, vision, explicit-caching |
| anthropic/claude-opus-4 | chat | 200K | $15.00/1M | $75.00/1M | anthropic, bedrock, vertexAnthropic | file-input, reasoning, tool-use, vision, explicit-caching |
| anthropic/claude-opus-4.1 | chat | 200K | $15.00/1M | $75.00/1M | anthropic, bedrock, vertexAnthropic | file-input, reasoning, tool-use, vision, explicit-caching |
| anthropic/claude-opus-4.5 | chat | 200K | $5.00/1M | $25.00/1M | anthropic, bedrock, vertexAnthropic | tool-use, reasoning, vision, file-input, explicit-caching |
| anthropic/claude-opus-4.6 | chat | 1M | $5.00/1M | $25.00/1M | anthropic, bedrock, vertexAnthropic | tool-use, reasoning, vision, file-input, explicit-caching, web-search |
| anthropic/claude-sonnet-4 | chat | 1M | $3.00/1M | $15.00/1M | anthropic, bedrock, vertexAnthropic | file-input, reasoning, tool-use, vision, explicit-caching |
| anthropic/claude-sonnet-4.5 | chat | 1M | $3.00/1M | $15.00/1M | anthropic, bedrock, vertexAnthropic | file-input, reasoning, tool-use, vision, explicit-caching |
| anthropic/claude-sonnet-4.6 | chat | 1M | $3.00/1M | $15.00/1M | anthropic, vertexAnthropic, bedrock | file-input, reasoning, tool-use, vision, explicit-caching, web-search |
| arcee-ai/trinity-large-preview | chat | 131K | $0.25/1M | $1.00/1M | arcee-ai | tool-use |
| arcee-ai/trinity-large-thinking | chat | 262.1K | $0.25/1M | $0.90/1M | arcee-ai | reasoning, tool-use, implicit-caching |
| arcee-ai/trinity-mini | chat | 131.1K | $0.04/1M | $0.15/1M | arcee-ai |  |
| bfl/flux-pro-1.0-fill | image | 0 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-kontext-max | image | 512 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-kontext-pro | image | 512 | $0.00/1M | $0.00/1M | bfl, prodia | image-generation |
| bfl/flux-2-flex | image | 0 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-2-klein-4b | image | 0 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-2-klein-9b | image | 0 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-2-max | image | 67.3K | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-2-pro | image | 67.3K | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-pro-1.1 | image | 0 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bfl/flux-pro-1.1-ultra | image | 0 | $0.00/1M | $0.00/1M | bfl | image-generation |
| bytedance/seed-1.8 | chat | 256K | $0.25/1M | $2.00/1M | bytedance | reasoning, vision, implicit-caching, tiered-cost |
| bytedance/seed-1.6 | chat | 256K | $0.25/1M | $2.00/1M | bytedance | reasoning, tool-use, implicit-caching, tiered-cost |
| bytedance/seedance-v1.0-lite-i2v | video | 0 | $0.00/1M | $0.00/1M | bytedance | video-generation |
| bytedance/seedance-v1.0-lite-t2v | video | 0 | $0.00/1M | $0.00/1M | bytedance | video-generation |
| bytedance/seedance-v1.0-pro | video | 0 | $0.00/1M | $0.00/1M | bytedance | video-generation |
| bytedance/seedance-v1.0-pro-fast | video | 0 | $0.00/1M | $0.00/1M | bytedance | video-generation |
| bytedance/seedance-v1.5-pro | video | 0 | $0.00/1M | $0.00/1M | bytedance | video-generation |
| bytedance/seedream-4.0 | image | 0 | $0.00/1M | $0.00/1M | bytedance | image-generation |
| bytedance/seedream-4.5 | image | 0 | $0.00/1M | $0.00/1M | bytedance | image-generation |
| bytedance/seedream-5.0-lite | image | 0 | $0.00/1M | $0.00/1M | bytedance | image-generation |
| cohere/rerank-v3.5 | reranking | 4.1K | $0.00/1M | $0.00/1M | bedrock | reranking |
| cohere/rerank-v4-fast | reranking | 32K | $0.00/1M | $0.00/1M | cohere | reranking |
| cohere/rerank-v4-pro | reranking | 32K | $0.00/1M | $0.00/1M | cohere | reranking |
| cohere/command-a | chat | 256K | $2.50/1M | $10.00/1M | cohere | tool-use |
| cohere/embed-v4.0 | embedding | 0 | $0.12/1M | $0.00/1M | cohere |  |
| deepseek/deepseek-r1 | chat | 160K | $1.35/1M | $5.40/1M | deepinfra, bedrock | reasoning, implicit-caching |
| deepseek/deepseek-v3 | chat | 163.8K | $0.77/1M | $0.77/1M | baseten, novita | tool-use |
| deepseek/deepseek-v3.1 | chat | 163.8K | $0.50/1M | $1.50/1M | deepinfra, novita, baseten, fireworks, togetherai, sambanova | reasoning, tool-use |
| deepseek/deepseek-v3.1-terminus | chat | 131.1K | $0.27/1M | $1.00/1M | novita | reasoning, tool-use |
| deepseek/deepseek-v3.2 | chat | 163.8K | $0.28/1M | $0.42/1M | deepseek, deepinfra, novita, bedrock | tool-use, implicit-caching |
| deepseek/deepseek-v3.2-thinking | chat | 128K | $0.28/1M | $0.42/1M | deepseek | reasoning, tool-use, implicit-caching |
| google/gemini-2.0-flash | chat | 1.0M | $0.15/1M | $0.60/1M | vertex, google | file-input, tool-use, vision, web-search |
| google/gemini-2.0-flash-lite | chat | 1.0M | $0.07/1M | $0.30/1M | vertex, google | file-input, tool-use, vision, web-search |
| google/gemini-2.5-flash | chat | 1M | $0.30/1M | $2.50/1M | vertex, google, deepinfra | file-input, reasoning, tool-use, vision, web-search, implicit-caching |
| google/gemini-2.5-flash-lite | chat | 1.0M | $0.10/1M | $0.40/1M | vertex, google | file-input, reasoning, tool-use, vision, web-search, implicit-caching |
| google/gemini-2.5-flash-lite-preview-09-2025 | chat | 1.0M | $0.10/1M | $0.40/1M | google, vertex | file-input, reasoning, tool-use, vision, web-search, implicit-caching |
| google/gemini-2.5-flash-preview-09-2025 | chat | 1M | $0.30/1M | $2.50/1M | google, vertex | file-input, implicit-caching, reasoning, tool-use, vision, web-search |
| google/gemini-2.5-pro | chat | 1.0M | $1.25/1M | $10.00/1M | vertex, google, deepinfra | file-input, reasoning, tool-use, vision, web-search, tiered-cost, implicit-caching |
| google/gemini-3-flash | chat | 1M | $0.50/1M | $3.00/1M | vertex, google | reasoning, tool-use, file-input, vision, web-search, tiered-cost, implicit-caching |
| google/gemini-3-pro-preview | chat | 1M | $2.00/1M | $12.00/1M | google, vertex | file-input, tool-use, reasoning, vision, web-search, tiered-cost, implicit-caching |
| google/gemini-3.1-flash-image-preview | chat | 131.1K | $0.50/1M | $3.00/1M | google, vertex | image-generation, web-search, reasoning, vision |
| google/gemini-3.1-flash-lite-preview | chat | 1M | $0.25/1M | $1.50/1M | google, vertex | reasoning, tool-use, implicit-caching, file-input, vision, web-search |
| google/gemini-3.1-pro-preview | chat | 1M | $2.00/1M | $12.00/1M | google, vertex | file-input, tool-use, reasoning, vision, web-search, tiered-cost, implicit-caching |
| google/gemini-embedding-001 | embedding | 0 | $0.15/1M | $0.00/1M | google, vertex |  |
| google/gemini-embedding-2 | embedding | 0 | $0.20/1M | $0.00/1M | google |  |
| google/gemma-4-26b-a4b-it | chat | 262.1K | $0.13/1M | $0.40/1M | novita, parasail | vision, tool-use, file-input |
| google/gemma-4-31b-it | chat | 262.1K | $0.14/1M | $0.40/1M | novita, parasail | tool-use, vision, file-input |
| google/imagen-4.0-generate-001 | image | 480 | $0.00/1M | $0.00/1M | vertex | image-generation |
| google/imagen-4.0-fast-generate-001 | image | 480 | $0.00/1M | $0.00/1M | vertex | image-generation |
| google/imagen-4.0-ultra-generate-001 | image | 480 | $0.00/1M | $0.00/1M | vertex | image-generation |
| google/gemini-2.5-flash-image | chat | 32.8K | $0.30/1M | $2.50/1M | google, vertex | image-generation, web-search |
| google/gemini-3-pro-image | chat | 65.5K | $2.00/1M | $12.00/1M | google, vertex | image-generation, web-search |
| google/text-embedding-005 | embedding | 0 | $0.03/1M | $0.00/1M | vertex |  |
| google/text-multilingual-embedding-002 | embedding | 0 | $0.03/1M | $0.00/1M | vertex |  |
| google/veo-3.0-generate-001 | video | 0 | $0.00/1M | $0.00/1M | vertex | text-to-video, image-to-video |
| google/veo-3.0-fast-generate-001 | video | 0 | $0.00/1M | $0.00/1M | vertex | video-generation |
| google/veo-3.1-generate-001 | video | 0 | $0.00/1M | $0.00/1M | vertex | text-to-video, image-to-video |
| google/veo-3.1-fast-generate-001 | video | 0 | $0.00/1M | $0.00/1M | vertex | text-to-video, image-to-video |
| inception/mercury-2 | chat | 128K | $0.25/1M | $0.75/1M | inception | tool-use, reasoning |
| inception/mercury-coder-small | chat | 32K | $0.25/1M | $1.00/1M | inception | tool-use |
| klingai/kling-v2.5-turbo-i2v | video | 0 | $0.00/1M | $0.00/1M | klingai | image-to-video, audio-generation |
| klingai/kling-v2.5-turbo-t2v | video | 0 | $0.00/1M | $0.00/1M | klingai | text-to-video, audio-generation |
| klingai/kling-v2.6-i2v | video | 0 | $0.00/1M | $0.00/1M | klingai | image-to-video, audio-generation |
| klingai/kling-v2.6-motion-control | video | 0 | $0.00/1M | $0.00/1M | klingai | video-generation |
| klingai/kling-v2.6-t2v | video | 0 | $0.00/1M | $0.00/1M | klingai | text-to-video, audio-generation |
| klingai/kling-v3.0-i2v | video | 0 | $0.00/1M | $0.00/1M | klingai | image-to-video, multi-shot, audio-generation |
| klingai/kling-v3.0-t2v | video | 0 | $0.00/1M | $0.00/1M | klingai | text-to-video, multi-shot, audio-generation |
| kwaipilot/kat-coder-pro-v2 | chat | 256K | $0.30/1M | $1.20/1M | streamlake | tool-use, reasoning, implicit-caching |
| kwaipilot/kat-coder-pro-v1 | chat | 256K | $0.03/1M | $1.20/1M | novita, streamlake | reasoning |
| meituan/longcat-flash-chat | chat | 128K | $0.00/1M | $0.00/1M | meituan | tool-use |
| meituan/longcat-flash-thinking-2601 | chat | 32.8K | $0.00/1M | $0.00/1M | meituan | reasoning |
| meta/llama-3.1-70b | chat | 131.1K | $0.72/1M | $0.72/1M | bedrock, deepinfra | tool-use |
| meta/llama-3.1-8b | chat | 131.1K | $0.10/1M | $0.10/1M | cerebras, groq, bedrock, deepinfra, novita | tool-use |
| meta/llama-3.2-11b | chat | 128K | $0.16/1M | $0.16/1M | bedrock | tool-use, vision |
| meta/llama-3.2-1b | chat | 128K | $0.10/1M | $0.10/1M | bedrock |  |
| meta/llama-3.2-3b | chat | 128K | $0.15/1M | $0.15/1M | bedrock |  |
| meta/llama-3.2-90b | chat | 128K | $0.72/1M | $0.72/1M | bedrock | tool-use, vision |
| meta/llama-3.3-70b | chat | 128K | $0.59/1M | $0.72/1M | bedrock, groq | tool-use |
| meta/llama-4-maverick | chat | 131.1K | $0.24/1M | $0.97/1M | deepinfra, bedrock | tool-use, vision |
| meta/llama-4-scout | chat | 131.1K | $0.17/1M | $0.66/1M | deepinfra, groq, bedrock | tool-use, vision |
| minimax/minimax-m2 | chat | 205K | $0.30/1M | $1.20/1M | minimax, novita | reasoning, tool-use, implicit-caching |
| minimax/minimax-m2.1 | chat | 204.8K | $0.30/1M | $1.20/1M | minimax, fireworks, novita, bedrock | reasoning, tool-use, implicit-caching |
| minimax/minimax-m2.1-lightning | chat | 204.8K | $0.30/1M | $2.40/1M | minimax | reasoning, tool-use, implicit-caching |
| minimax/minimax-m2.5 | chat | 1M | $0.27/1M | $0.95/1M | minimax, nebius, parasail, deepinfra, bedrock | reasoning, tool-use, implicit-caching |
| minimax/minimax-m2.5-highspeed | chat | 204.8K | $0.60/1M | $2.40/1M | minimax | reasoning, tool-use, implicit-caching |
| minimax/minimax-m2.7 | chat | 204.8K | $0.30/1M | $1.20/1M | minimax | reasoning, tool-use, implicit-caching, file-input, vision |
| minimax/minimax-m2.7-highspeed | chat | 204.8K | $0.60/1M | $2.40/1M | minimax | reasoning, tool-use, implicit-caching, vision, file-input |
| mistral/codestral-embed | embedding | 0 | $0.15/1M | $0.00/1M | mistral |  |
| mistral/devstral-2 | chat | 256K | $0.40/1M | $2.00/1M | mistral | tool-use |
| mistral/devstral-small | chat | 128K | $0.10/1M | $0.30/1M | mistral | tool-use |
| mistral/devstral-small-2 | chat | 256K | $0.10/1M | $0.30/1M | mistral | tool-use |
| mistral/magistral-medium | chat | 128K | $2.00/1M | $5.00/1M | mistral | reasoning, vision |
| mistral/magistral-small | chat | 128K | $0.50/1M | $1.50/1M | mistral | reasoning, vision |
| mistral/ministral-14b | chat | 256K | $0.20/1M | $0.20/1M | mistral | vision, file-input |
| mistral/ministral-3b | chat | 128K | $0.10/1M | $0.10/1M | mistral | tool-use |
| mistral/ministral-8b | chat | 128K | $0.15/1M | $0.15/1M | mistral | tool-use |
| mistral/codestral | chat | 128K | $0.30/1M | $0.90/1M | mistral | tool-use |
| mistral/mistral-embed | embedding | 0 | $0.10/1M | $0.00/1M | mistral |  |
| mistral/mistral-large-3 | chat | 256K | $0.50/1M | $1.50/1M | mistral | vision |
| mistral/mistral-medium | chat | 128K | $0.40/1M | $2.00/1M | mistral | tool-use, vision |
| mistral/mistral-nemo | chat | 131.1K | $0.15/1M | $0.15/1M | novita, mistral, deepinfra | tool-use |
| mistral/mistral-small | chat | 32K | $0.10/1M | $0.30/1M | mistral | tool-use, vision |
| mistral/mixtral-8x22b-instruct | chat | 65.5K | $1.20/1M | $1.20/1M | fireworks |  |
| mistral/pixtral-12b | chat | 128K | $0.15/1M | $0.15/1M | mistral | tool-use, vision |
| mistral/pixtral-large | chat | 128K | $2.00/1M | $6.00/1M | mistral | tool-use, vision |
| moonshotai/kimi-k2 | chat | 131.1K | $0.57/1M | $2.30/1M | parasail, novita | tool-use |
| moonshotai/kimi-k2-0905 | chat | 256K | $0.60/1M | $2.50/1M | fireworks | tool-use |
| moonshotai/kimi-k2-thinking | chat | 262.1K | $0.60/1M | $2.50/1M | moonshotai, fireworks, deepinfra | reasoning, tool-use, implicit-caching |
| moonshotai/kimi-k2-thinking-turbo | chat | 262.1K | $1.15/1M | $8.00/1M | moonshotai | reasoning, tool-use, implicit-caching |
| moonshotai/kimi-k2-turbo | chat | 256K | $1.15/1M | $8.00/1M | moonshotai | tool-use |
| moonshotai/kimi-k2.5 | chat | 262.1K | $0.50/1M | $2.80/1M | moonshotai, fireworks, novita, togetherai, bedrock | reasoning, vision, tool-use, implicit-caching |
| morph/morph-v3-fast | chat | 81.9K | $0.80/1M | $1.20/1M | morph |  |
| morph/morph-v3-large | chat | 81.9K | $0.90/1M | $1.90/1M | morph |  |
| nvidia/nemotron-3-nano-30b-a3b | chat | 262.1K | $0.05/1M | $0.24/1M | deepinfra | reasoning |
| nvidia/nemotron-3-super-120b-a12b | chat | 256K | $0.15/1M | $0.65/1M | bedrock |  |
| nvidia/nemotron-nano-12b-v2-vl | chat | 131.1K | $0.20/1M | $0.60/1M | deepinfra, bedrock | reasoning, tool-use, vision |
| nvidia/nemotron-nano-9b-v2 | chat | 131.1K | $0.06/1M | $0.23/1M | bedrock, deepinfra | reasoning, tool-use |
| openai/gpt-4o-mini-search-preview | chat | 128K | $0.15/1M | $0.60/1M | openai | web-search |
| openai/gpt-5-chat | chat | 128K | $1.25/1M | $10.00/1M | azure, openai | tool-use, implicit-caching, file-input, vision, reasoning |
| openai/gpt-5.1-codex-max | chat | 400K | $1.25/1M | $10.00/1M | openai, azure | reasoning, file-input, tool-use, vision, web-search, implicit-caching |
| openai/gpt-5.1-codex-mini | chat | 400K | $0.25/1M | $2.00/1M | azure, openai | reasoning, file-input, vision, tool-use, implicit-caching |
| openai/gpt-5.1-thinking | chat | 400K | $1.25/1M | $10.00/1M | openai, azure | tool-use, implicit-caching, file-input, reasoning, vision, web-search, image-generation |
| openai/gpt-5.2 | chat | 400K | $1.75/1M | $14.00/1M | azure, openai | tool-use, vision, file-input, reasoning, implicit-caching |
| openai/gpt-5.2-pro | chat | 400K | $21.00/1M | $168.00/1M | openai | tool-use, vision, implicit-caching, reasoning, file-input, web-search |
| openai/gpt-5.2-chat | chat | 128K | $1.75/1M | $14.00/1M | azure, openai | vision, file-input, tool-use, reasoning, implicit-caching |
| openai/gpt-5.2-codex | chat | 400K | $1.75/1M | $14.00/1M | azure, openai | reasoning, tool-use, implicit-caching, vision, file-input |
| openai/gpt-5.3-codex | chat | 400K | $1.75/1M | $14.00/1M | openai, azure | reasoning, tool-use, file-input, vision, web-search, implicit-caching |
| openai/gpt-5.4 | chat | 1.1M | $2.50/1M | $15.00/1M | openai, azure | reasoning, tool-use, vision, file-input, implicit-caching, web-search |
| openai/gpt-5.4-mini | chat | 400K | $0.75/1M | $4.50/1M | openai, azure | reasoning, tool-use, vision, file-input, implicit-caching, web-search |
| openai/gpt-5.4-nano | chat | 400K | $0.20/1M | $1.25/1M | openai, azure | reasoning, tool-use, implicit-caching, web-search, vision, file-input |
| openai/gpt-5.4-pro | chat | 1.1M | $30.00/1M | $180.00/1M | openai, azure | reasoning, tool-use, vision, file-input, implicit-caching, web-search |
| openai/gpt-image-1 | image | 0 | $5.00/1M | $40.00/1M | openai | image-generation |
| openai/gpt-image-1-mini | image | 0 | $2.00/1M | $8.00/1M | openai | image-generation |
| openai/gpt-image-1.5 | image | 0 | $5.00/1M | $32.00/1M | openai | image-generation |
| openai/gpt-3.5-turbo | chat | 16.4K | $0.50/1M | $1.50/1M | openai |  |
| openai/gpt-3.5-turbo-instruct | chat | 8.2K | $1.50/1M | $2.00/1M | openai |  |
| openai/gpt-4-turbo | chat | 128K | $10.00/1M | $30.00/1M | openai | tool-use, vision |
| openai/gpt-4.1 | chat | 1.0M | $2.00/1M | $8.00/1M | azure, openai | file-input, tool-use, vision |
| openai/gpt-4.1-mini | chat | 1.0M | $0.40/1M | $1.60/1M | azure, openai | file-input, tool-use, vision, implicit-caching |
| openai/gpt-4.1-nano | chat | 1.0M | $0.10/1M | $0.40/1M | azure, openai | file-input, tool-use, vision, implicit-caching |
| openai/gpt-4o | chat | 128K | $2.50/1M | $10.00/1M | azure, openai | file-input, tool-use, vision, implicit-caching |
| openai/gpt-4o-mini | chat | 128K | $0.15/1M | $0.60/1M | azure, openai | file-input, tool-use, vision, implicit-caching |
| openai/gpt-5 | chat | 400K | $1.25/1M | $10.00/1M | azure, openai | file-input, reasoning, tool-use, vision, image-generation, implicit-caching |
| openai/gpt-5-mini | chat | 400K | $0.25/1M | $2.00/1M | azure, openai | file-input, reasoning, tool-use, vision, implicit-caching |
| openai/gpt-5-nano | chat | 400K | $0.05/1M | $0.40/1M | azure, openai | file-input, reasoning, tool-use, vision, image-generation, implicit-caching |
| openai/gpt-5-pro | chat | 400K | $15.00/1M | $120.00/1M | openai | file-input, implicit-caching, reasoning, tool-use, vision, image-generation, web-search |
| openai/gpt-5-codex | chat | 400K | $1.25/1M | $10.00/1M | azure, openai | file-input, reasoning, tool-use, vision, implicit-caching |
| openai/gpt-5.1-instant | chat | 128K | $1.25/1M | $10.00/1M | openai, azure | tool-use, vision, file-input, reasoning, implicit-caching, web-search |
| openai/gpt-5.1-codex | chat | 400K | $1.25/1M | $10.00/1M | openai, azure | file-input, tool-use, reasoning, vision, web-search, implicit-caching |
| openai/gpt-5.3-chat | chat | 128K | $1.75/1M | $14.00/1M | openai | vision, file-input, tool-use, reasoning, implicit-caching, web-search |
| openai/gpt-oss-120b | chat | 131.1K | $0.35/1M | $0.75/1M | baseten, bedrock, cerebras, fireworks, groq, parasail, nebius | reasoning, tool-use |
| openai/gpt-oss-20b | chat | 131.1K | $0.07/1M | $0.30/1M | bedrock, fireworks, groq, deepinfra, togetherai, novita, parasail | reasoning, tool-use |
| openai/gpt-oss-safeguard-20b | chat | 131.1K | $0.07/1M | $0.30/1M | groq | reasoning, tool-use |
| openai/o1 | chat | 200K | $15.00/1M | $60.00/1M | azure, openai | file-input, reasoning, tool-use, vision, implicit-caching |
| openai/o3 | chat | 200K | $2.00/1M | $8.00/1M | openai | file-input, reasoning, tool-use, vision, implicit-caching |
| openai/o3-pro | chat | 200K | $20.00/1M | $80.00/1M | openai | reasoning, vision, file-input, tool-use, web-search |
| openai/o3-deep-research | chat | 200K | $10.00/1M | $40.00/1M | openai | reasoning, file-input, tool-use, vision, implicit-caching |
| openai/o3-mini | chat | 200K | $1.10/1M | $4.40/1M | azure, openai | file-input, reasoning, tool-use, implicit-caching |
| openai/o4-mini | chat | 200K | $1.10/1M | $4.40/1M | azure, openai | file-input, reasoning, tool-use, vision, implicit-caching |
| openai/text-embedding-3-large | embedding | 0 | $0.13/1M | $0.00/1M | azure, openai |  |
| openai/text-embedding-3-small | embedding | 0 | $0.02/1M | $0.00/1M | azure, openai |  |
| openai/text-embedding-ada-002 | embedding | 0 | $0.10/1M | $0.00/1M | azure, openai |  |
| perplexity/sonar | chat | 127K | $0.00/1M | $0.00/1M | perplexity | tool-use, vision |
| perplexity/sonar-pro | chat | 200K | $0.00/1M | $0.00/1M | perplexity | tool-use, vision |
| perplexity/sonar-reasoning-pro | chat | 127K | $0.00/1M | $0.00/1M | perplexity | reasoning |
| prime-intellect/intellect-3 | chat | 131.1K | $0.20/1M | $1.10/1M | parasail | reasoning, tool-use |
| prodia/flux-fast-schnell | image | 512 | $0.00/1M | $0.00/1M | prodia | image-generation |
| recraft/recraft-v2 | image | 0 | $0.00/1M | $0.00/1M | recraft | image-generation |
| recraft/recraft-v3 | image | 0 | $0.00/1M | $0.00/1M | recraft | image-generation |
| recraft/recraft-v4 | image | 0 | $0.00/1M | $0.00/1M | recraft | image-generation |
| recraft/recraft-v4-pro | image | 0 | $0.00/1M | $0.00/1M | recraft | image-generation |
| voyage/rerank-2.5 | reranking | 32K | $0.05/1M | $0.00/1M | voyage | reranking |
| voyage/rerank-2.5-lite | reranking | 32K | $0.02/1M | $0.00/1M | voyage | reranking |
| voyage/voyage-3-large | embedding | 0 | $0.18/1M | $0.00/1M | voyage |  |
| voyage/voyage-3.5 | embedding | 0 | $0.06/1M | $0.00/1M | voyage |  |
| voyage/voyage-3.5-lite | embedding | 0 | $0.02/1M | $0.00/1M | voyage |  |
| voyage/voyage-4 | embedding | 32K | $0.06/1M | $0.00/1M | voyage |  |
| voyage/voyage-4-large | embedding | 32K | $0.12/1M | $0.00/1M | voyage |  |
| voyage/voyage-4-lite | embedding | 32K | $0.02/1M | $0.00/1M | voyage |  |
| voyage/voyage-code-2 | embedding | 0 | $0.12/1M | $0.00/1M | voyage |  |
| voyage/voyage-code-3 | embedding | 0 | $0.18/1M | $0.00/1M | voyage |  |
| voyage/voyage-finance-2 | embedding | 0 | $0.12/1M | $0.00/1M | voyage |  |
| voyage/voyage-law-2 | embedding | 0 | $0.12/1M | $0.00/1M | voyage |  |
| xai/grok-3 | chat | 131.1K | $3.00/1M | $15.00/1M | xai | tool-use |
| xai/grok-3-fast | chat | 131.1K | $5.00/1M | $25.00/1M | xai | tool-use |
| xai/grok-3-mini | chat | 131.1K | $0.30/1M | $0.50/1M | xai | tool-use |
| xai/grok-3-mini-fast | chat | 131.1K | $0.60/1M | $4.00/1M | xai | tool-use |
| xai/grok-4 | chat | 256K | $3.00/1M | $15.00/1M | xai | reasoning, tool-use, vision |
| xai/grok-4-fast-non-reasoning | chat | 2M | $0.20/1M | $0.50/1M | xai | tool-use, implicit-caching, tiered-cost |
| xai/grok-4-fast-reasoning | chat | 2M | $0.20/1M | $0.50/1M | xai | reasoning, tool-use, implicit-caching, tiered-cost |
| xai/grok-4.1-fast-non-reasoning | chat | 2M | $0.20/1M | $0.50/1M | xai | tool-use, implicit-caching, tiered-cost |
| xai/grok-4.1-fast-reasoning | chat | 2M | $0.20/1M | $0.50/1M | xai | reasoning, tool-use, implicit-caching, tiered-cost |
| xai/grok-4.20-non-reasoning-beta | chat | 2M | $2.00/1M | $6.00/1M | xai | tool-use, implicit-caching, vision, file-input |
| xai/grok-4.20-reasoning-beta | chat | 2M | $2.00/1M | $6.00/1M | xai | reasoning, tool-use, vision, file-input, implicit-caching |
| xai/grok-4.20-multi-agent-beta | chat | 2M | $2.00/1M | $6.00/1M | xai | reasoning, tool-use, implicit-caching |
| xai/grok-4.20-multi-agent | chat | 2M | $2.00/1M | $6.00/1M | xai | reasoning, tool-use, implicit-caching |
| xai/grok-4.20-non-reasoning | chat | 2M | $2.00/1M | $6.00/1M | xai | tool-use, implicit-caching, vision, file-input |
| xai/grok-4.20-reasoning | chat | 2M | $2.00/1M | $6.00/1M | xai | reasoning, vision, tool-use, file-input, implicit-caching |
| xai/grok-code-fast-1 | chat | 256K | $0.20/1M | $1.50/1M | xai | reasoning, tool-use, implicit-caching |
| xai/grok-imagine-video | video | 0 | $0.00/1M | $0.00/1M | xai | image-to-video, text-to-video, video-editing, audio-generation |
| xai/grok-imagine-image | image | 0 | $0.00/1M | $0.00/1M | xai | image-generation |
| xai/grok-imagine-image-pro | image | 0 | $0.00/1M | $0.00/1M | xai | image-generation |
| xiaomi/mimo-v2-flash | chat | 262.1K | $0.10/1M | $0.30/1M | novita, chutes, xiaomi | reasoning, tool-use |
| xiaomi/mimo-v2-pro | chat | 1M | $1.00/1M | $3.00/1M | xiaomi | reasoning, tool-use |
| zai/glm-4.5 | chat | 131.1K | $0.60/1M | $2.20/1M | zai, novita | reasoning, tool-use, implicit-caching |
| zai/glm-4.5-air | chat | 128K | $0.20/1M | $1.10/1M | zai | reasoning, tool-use, implicit-caching |
| zai/glm-4.5v | chat | 66K | $0.60/1M | $1.80/1M | novita, zai | reasoning, tool-use, vision |
| zai/glm-4.6 | chat | 204.8K | $0.60/1M | $2.20/1M | zai, deepinfra, baseten, novita | reasoning, tool-use, implicit-caching |
| zai/glm-4.7 | chat | 204.8K | $2.25/1M | $2.75/1M | zai, novita, deepinfra, cerebras, bedrock | reasoning, tool-use, implicit-caching |
| zai/glm-4.7-flash | chat | 200K | $0.07/1M | $0.40/1M | zai, bedrock | reasoning, tool-use |
| zai/glm-4.7-flashx | chat | 200K | $0.06/1M | $0.40/1M | zai | reasoning, tool-use, implicit-caching |
| zai/glm-5 | chat | 202.8K | $0.80/1M | $2.56/1M | zai, fireworks, deepinfra, parasail, togetherai, bedrock, novita | reasoning, tool-use, implicit-caching |
| zai/glm-5-turbo | chat | 202.8K | $1.20/1M | $4.00/1M | zai | reasoning, tool-use, implicit-caching |
| zai/glm-5.1 | chat | 202.8K | $1.40/1M | $4.40/1M | zai | reasoning, tool-use, implicit-caching |
| zai/glm-5v-turbo | chat | 200K | $1.20/1M | $4.00/1M | zai | reasoning, tool-use, implicit-caching, vision, file-input |
| zai/glm-4.6v | chat | 128K | $0.30/1M | $0.90/1M | zai | vision, file-input, reasoning, tool-use, implicit-caching |
| zai/glm-4.6v-flash | chat | 128K | $0.00/1M | $0.00/1M | zai | vision, reasoning, file-input, tool-use, implicit-caching |
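Input and output prices in the table are quoted in USD per one million tokens, so the cost of a single request can be estimated directly from a row. A small illustrative helper (the prices below are copied from the `openai/gpt-4o` row; the function itself is not part of the gateway API):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_per_1m: float, output_per_1m: float) -> float:
    """Estimate request cost in USD from per-million-token prices."""
    return (input_tokens * input_per_1m
            + output_tokens * output_per_1m) / 1_000_000

# openai/gpt-4o: $2.50/1M input, $10.00/1M output (from the table above)
cost = estimate_cost(input_tokens=12_000, output_tokens=1_500,
                     input_per_1m=2.50, output_per_1m=10.00)
print(f"${cost:.4f}")  # → $0.0450
```

Note that models tagged `tiered-cost` charge different rates above a context threshold, and caching tags (`implicit-caching`, `explicit-caching`) discount repeated input tokens, so this flat formula is an upper-bound sketch for those models.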

---

## Detailed Model Information

### alibaba/qwen-3-32b

Qwen3-32B is a world-class model with quality comparable to DeepSeek R1 while outperforming GPT-4.1 and Claude 3.7 Sonnet. It excels at code generation, tool calling, and advanced reasoning, making it an exceptional choice for a wide range of production use cases.

- **Max output tokens:** 41.0K
- **Cached input cost:** $0.14/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen-3-32b

### alibaba/qwen3-coder-30b-a3b

Efficient coding specialist balancing performance with cost-effectiveness for daily development tasks while maintaining strong tool integration capabilities.

- **Max output tokens:** 32.8K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-coder-30b-a3b

### alibaba/qwen3-max-thinking

This version is based on a snapshot taken on January 23, 2026. Compared with the September 23, 2025 snapshot, this release of the Qwen-3 series Max model effectively integrates thinking and non-thinking modes, yielding a comprehensive improvement in overall performance. In thinking mode, the model supports web search, web information extraction, and a code interpreter tool, allowing it to tackle more complex problems with greater accuracy by combining external tools with slow, deliberative reasoning.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.24/1M
- **Details:** https://vercel.com/ai-gateway/models/qwen3-max-thinking

### alibaba/qwen3.5-flash

The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. Compared to the 3 series, these models deliver a leap forward in performance for both pure text and multimodal tasks, offering fast response times while balancing inference speed and overall performance.

- **Max output tokens:** 64K
- **Cached input cost:** $0.00/1M
- **Details:** https://vercel.com/ai-gateway/models/qwen3.5-flash

### alibaba/qwen3.5-plus

The Qwen3.5 native vision-language series Plus models are built on a hybrid architecture that integrates linear attention mechanisms with sparse mixture-of-experts models, achieving higher inference efficiency. In a variety of task evaluations, the 3.5 series consistently demonstrates performance on par with state-of-the-art leading models. Compared to the 3 series, these models show a leap forward in both pure-text and multimodal capabilities.

- **Max output tokens:** 64K
- **Cached input cost:** $0.04/1M
- **Details:** https://vercel.com/ai-gateway/models/qwen3.5-plus

### alibaba/qwen3.6-plus

The Qwen3.6 native vision-language Plus series models demonstrate exceptional performance on par with current state-of-the-art models, with a significant improvement in overall results compared to the 3.5 series. The models are markedly enhanced in code-related capabilities such as agentic coding, front-end programming, and vibe coding, as well as in multimodal general object recognition, OCR, and object localization.

- **Max output tokens:** 64K
- **Cached input cost:** $0.10/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3.6-plus

### alibaba/qwen-3-235b

Qwen3-235B-A22B-Instruct-2507 is the updated version of the Qwen3-235B-A22B non-thinking mode, featuring significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage.

- **Max output tokens:** 40K
- **Cached input cost:** $0.60/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen-3-235b

### alibaba/qwen3-235b-a22b-thinking

Qwen3-235B-A22B-Thinking-2507 is a new Qwen3 model that scales the thinking capability of Qwen3-235B-A22B, improving both the quality and depth of reasoning.

- **Max output tokens:** 262.1K
- **Cached input cost:** $0.20/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-235b-a22b-thinking

### alibaba/qwen3-coder

Qwen3-Coder-480B-A35B-Instruct is Qwen's most agentic code model, featuring significant performance on Agentic Coding, Agentic Browser-Use and other foundational coding tasks, achieving results comparable to Claude Sonnet.

- **Max output tokens:** 66.5K
- **Cached input cost:** $0.02/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-coder

### alibaba/qwen3-coder-next

Qwen3-Coder-Next is an open-weight language model designed specifically for coding agents. With only 3B activated parameters (80B total), it achieves performance comparable to models with 10–20x more active parameters, making it highly cost-effective for production agent deployment. Through an elaborate training recipe, Qwen3-Coder-Next excels at long-horizon reasoning, complex tool usage, and recovery from execution failures, ensuring robust performance in dynamic coding tasks.

- **Max output tokens:** 256K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-coder-next

### alibaba/qwen3-coder-plus

Powered by Qwen3, this is a powerful coding agent that excels in tool calling and environment interaction to achieve autonomous programming. It combines outstanding coding proficiency with versatile general-purpose abilities.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/qwen3-coder-plus

### alibaba/qwen3-embedding-0.6b

The Qwen3 Embedding series is the latest Qwen model family purpose-built for text embedding and ranking tasks. Building on the dense foundation models of the Qwen3 series, it provides text embedding and reranking models in a range of sizes (0.6B, 4B, and 8B).

- **Max output tokens:** 32.8K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-embedding-0.6b

### alibaba/qwen3-embedding-4b

The Qwen3 Embedding series is the latest Qwen model family purpose-built for text embedding and ranking tasks. Building on the dense foundation models of the Qwen3 series, it provides text embedding and reranking models in a range of sizes (0.6B, 4B, and 8B).

- **Max output tokens:** 32.8K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-embedding-4b

### alibaba/qwen3-embedding-8b

The Qwen3 Embedding series is the latest Qwen model family purpose-built for text embedding and ranking tasks. Building on the dense foundation models of the Qwen3 series, it provides text embedding and reranking models in a range of sizes (0.6B, 4B, and 8B).

- **Max output tokens:** 32.8K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-embedding-8b
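
The embedding models above return dense vectors that are typically compared with cosine similarity for retrieval and ranking. A minimal, self-contained sketch; the two-dimensional vectors are hypothetical stand-ins for the high-dimensional embeddings a model such as qwen3-embedding-0.6b would return:

```python
import math

# Rank candidate documents against a query by cosine similarity,
# as you would with vectors returned by an embedding model.
def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (_norm(a) * _norm(b))

def rank(query_vec, doc_vecs):
    # Indices of doc_vecs sorted by similarity to the query, best first.
    return sorted(range(len(doc_vecs)),
                  key=lambda i: -cosine(query_vec, doc_vecs[i]))

order = rank([1.0, 0.0], [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])  # → [1, 2, 0]
```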

### alibaba/qwen3-max

Compared with the preview version, the Qwen3 series Max model has undergone specialized upgrades in agent programming and tool invocation. This official release achieves state-of-the-art (SOTA) performance in its field and is better suited to the demands of agents operating in more complex scenarios.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.24/1M
- **Details:** https://vercel.com/ai-gateway/models/qwen3-max

### alibaba/qwen3-max-preview

Qwen3-Max-Preview shows substantial gains over the 2.5 series in overall capability, with significant enhancements in Chinese and English text understanding, complex instruction following, handling of subjective open-ended tasks, multilingual ability, and tool invocation, along with reduced knowledge hallucinations.

- **Max output tokens:** 32.8K
- **Cached input cost:** $0.24/1M
- **Details:** https://vercel.com/ai-gateway/models/qwen3-max-preview

### alibaba/qwen3-next-80b-a3b-instruct

A new generation of open-source, non-thinking mode model powered by Qwen3. This version demonstrates superior Chinese text understanding, augmented logical reasoning, and enhanced capabilities in text generation tasks over the previous iteration (Qwen3-235B-A22B-Instruct-2507).

- **Max output tokens:** 65.5K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-next-80b-a3b-instruct

### alibaba/qwen3-next-80b-a3b-thinking

A new generation of Qwen3-based open-source thinking mode models. This version offers improved instruction following and streamlined summary responses over the previous iteration (Qwen3-235B-A22B-Thinking-2507).

- **Max output tokens:** 65.5K
- **Details:** https://vercel.com/ai-gateway/models/qwen3-next-80b-a3b-thinking

### alibaba/qwen3-vl-instruct

The Qwen3 series VL models have been comprehensively upgraded in areas such as visual coding and spatial perception. Their visual perception and recognition capabilities have significantly improved, they support understanding of ultra-long videos, and their OCR functionality has undergone a major enhancement.

- **Max output tokens:** 262.1K
- **Cached input cost:** $0.11/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen3-vl-instruct

### alibaba/qwen3-vl-thinking

Qwen3 series VL models feature significantly enhanced multimodal reasoning capabilities, with a particular focus on optimizing the model for STEM and mathematical reasoning. Visual perception and recognition abilities have been comprehensively improved, and OCR capabilities have undergone a major upgrade.

- **Max output tokens:** 32.8K
- **Details:** https://vercel.com/ai-gateway/models/qwen3-vl-thinking

### alibaba/qwen-3-14b

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support.

- **Max output tokens:** 16.4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen-3-14b

### alibaba/qwen-3-30b

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support.

- **Max output tokens:** 16.4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/qwen-3-30b

### alibaba/wan-v2.5-t2v-preview



- **Details:** https://vercel.com/ai-gateway/models/wan-v2.5-t2v-preview

### alibaba/wan-v2.6-i2v



- **Details:** https://vercel.com/ai-gateway/models/wan-v2.6-i2v

### alibaba/wan-v2.6-i2v-flash



- **Details:** https://vercel.com/ai-gateway/models/wan-v2.6-i2v-flash

### alibaba/wan-v2.6-r2v



- **Details:** https://vercel.com/ai-gateway/models/wan-v2.6-r2v

### alibaba/wan-v2.6-r2v-flash



- **Details:** https://vercel.com/ai-gateway/models/wan-v2.6-r2v-flash

### alibaba/wan-v2.6-t2v



- **Details:** https://vercel.com/ai-gateway/models/wan-v2.6-t2v

### amazon/nova-2-lite

Nova 2 Lite is a fast, cost-effective reasoning model for everyday workloads that can process text, images, and videos to generate text.

- **Max output tokens:** 1M
- **Cached input cost:** $0.07/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/nova-2-lite

### amazon/nova-lite

A very low-cost multimodal model that is lightning fast at processing image, video, and text inputs.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/nova-lite

### amazon/nova-micro

A text-only model that delivers the lowest latency responses at very low cost.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/nova-micro

### amazon/nova-pro

A highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/nova-pro

### amazon/titan-embed-text-v2

Amazon Titan Text Embeddings V2 is a lightweight, efficient multilingual embedding model supporting 1024, 512, and 256 dimensions.

- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/titan-embed-text-v2

### anthropic/claude-3-haiku

Claude 3 Haiku is Anthropic's fastest model yet, designed for enterprise workloads that often involve longer prompts. Haiku can quickly analyze large volumes of documents, such as quarterly filings, contracts, or legal cases, for half the cost of other models in its performance tier.

- **Max output tokens:** 4.1K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-3-haiku

### anthropic/claude-3.5-haiku

Claude 3.5 Haiku is Anthropic's fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed, enabling seamless AI experiences that mimic human interactions. Claude 3.5 Haiku can process images and return text outputs, and features a 200K context window.

- **Max output tokens:** 8.2K
- **Cached input cost:** $0.08/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-3.5-haiku

### anthropic/claude-3.7-sonnet

Claude 3.7 Sonnet is Anthropic's most intelligent model to date and the first Claude model to offer extended thinking: the ability to solve complex problems with careful, step-by-step reasoning. Anthropic is the first AI lab to introduce a single model where users can balance speed and quality by choosing between standard thinking for near-instant responses or extended thinking for advanced reasoning. Claude 3.7 Sonnet is state-of-the-art for coding, and delivers advancements in computer use, agentic capabilities, complex reasoning, and content generation. With frontier performance and more control over speed, Claude 3.7 Sonnet is the ideal choice for powering AI agents, especially customer-facing agents, and complex AI workflows.

- **Max output tokens:** 8.2K
- **Cached input cost:** $0.30/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-3.7-sonnet

### anthropic/claude-haiku-4.5

Claude Haiku 4.5 matches Sonnet 4's performance on coding, computer use, and agent tasks at substantially lower cost and faster speeds. It delivers near-frontier performance and Claude’s unique character at a price point that works for scaled sub-agent deployments, free tier products, and intelligence-sensitive applications with budget constraints.

- **Max output tokens:** 64K
- **Cached input cost:** $0.10/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-haiku-4.5

### anthropic/claude-opus-4

Claude Opus 4 is Anthropic's most powerful model yet and the best coding model in the world, leading on SWE-bench (72.5%) and Terminal-bench (43.2%). It delivers sustained performance on long-running tasks that require focused effort and thousands of steps, with the ability to work continuously for several hours—dramatically outperforming all Sonnet models and significantly expanding what AI agents can accomplish.

- **Max output tokens:** 32K
- **Cached input cost:** $1.50/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-opus-4

### anthropic/claude-opus-4.1

Claude Opus 4.1 is a drop-in replacement for Opus 4 that delivers superior performance and precision for real-world coding and agentic tasks. Opus 4.1 advances state-of-the-art coding performance to 74.5% on SWE-bench Verified, and handles complex, multi-step problems with more rigor and attention to detail.

- **Max output tokens:** 32K
- **Cached input cost:** $1.50/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-opus-4.1

### anthropic/claude-opus-4.5

Claude Opus 4.5 is Anthropic’s latest model in the Opus series, meant for demanding reasoning tasks and complex problem solving. This model has improvements in general intelligence and vision compared to previous iterations. In addition, it is suited for difficult coding tasks and agentic workflows, especially those with computer use and tool use, and can effectively handle context usage and external memory files.

- **Max output tokens:** 64K
- **Cached input cost:** $0.50/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-opus-4.5

### anthropic/claude-opus-4.6

Opus 4.6 is the world’s best model for coding and professional work, built to power agents that take on whole categories of real-world work. It excels across the entire SDLC, breaking through on hard problems, identifying complex bugs, and demonstrating deeper codebase understanding. It also delivers a step-change in knowledge work, with near-production-ready documents, presentations, and spreadsheets on the first pass.

- **Max output tokens:** 128K
- **Cached input cost:** $0.50/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-opus-4.6

### anthropic/claude-sonnet-4

Claude Sonnet 4 significantly improves on Sonnet 3.7's industry-leading capabilities, excelling in coding with a state-of-the-art 72.7% on SWE-bench. The model balances performance and efficiency for internal and external use cases, with enhanced steerability for greater control over implementations. While not matching Opus 4 in most domains, it delivers an optimal mix of capability and practicality.

- **Max output tokens:** 64K
- **Cached input cost:** $0.30/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-sonnet-4

### anthropic/claude-sonnet-4.5

Claude Sonnet 4.5 is the newest model in the Sonnet series, offering improvements and updates over Sonnet 4.

- **Max output tokens:** 64K
- **Cached input cost:** $0.30/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-sonnet-4.5

### anthropic/claude-sonnet-4.6

Claude Sonnet 4.6 is the most capable Sonnet-class model yet, with frontier performance across coding, agents, and professional work. It excels at iterative development, complex codebase navigation, end-to-end project management with memory, polished document creation, and confident computer use for web QA and workflow automation.

- **Max output tokens:** 128K
- **Cached input cost:** $0.30/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/claude-sonnet-4.6

### arcee-ai/trinity-large-preview

Trinity Large (Preview) is a 400B-parameter (13B active) sparse mixture-of-experts language model, engineered to scale model capacity while maintaining inference efficiency over long contexts, with strong performance in reasoning-heavy workloads including math, coding-related tasks, and multi-step agent workflows.

- **Max output tokens:** 131K
- **Details:** https://vercel.com/ai-gateway/models/trinity-large-preview

### arcee-ai/trinity-large-thinking

Trinity-Large-Thinking is a reasoning-optimized variant of Arcee AI's Trinity-Large family — a 398B-parameter sparse Mixture-of-Experts (MoE) model with approximately 13B active parameters per token. Built on Trinity-Large-Base and post-trained with extended chain-of-thought reasoning and agentic RL, Trinity-Large-Thinking delivers state-of-the-art performance on agentic benchmarks while maintaining strong general capabilities.

- **Max output tokens:** 80K
- **Details:** https://vercel.com/ai-gateway/models/trinity-large-thinking

### arcee-ai/trinity-mini

Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model, engineered for efficient inference over long contexts with robust function calling and multi-step agent workflows.

- **Max output tokens:** 131.1K
- **Details:** https://vercel.com/ai-gateway/models/trinity-mini

### bfl/flux-pro-1.0-fill

A state-of-the-art inpainting model, enabling editing and expansion of real and generated images given a text description and a binary mask.
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Image cost:** $0.05/image
- **Details:** https://vercel.com/ai-gateway/models/flux-pro-1.0-fill
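
The safety tolerance described above is an integer from 0 to 6 with a default of 2. A small client-side validation sketch; the `safety_tolerance` field name is an assumption for illustration, not a documented request parameter:

```python
# Client-side check for the BFL moderation setting described above:
# an integer from 0 (most strict) to 6 (most permissive), default 2.
# NOTE: the "safety_tolerance" key is an assumed field name.
def with_safety_tolerance(payload, tolerance=2):
    if not (isinstance(tolerance, int) and 0 <= tolerance <= 6):
        raise ValueError("safety tolerance must be an integer from 0 to 6")
    return {**payload, "safety_tolerance": tolerance}

req = with_safety_tolerance({"prompt": "a red bicycle"}, tolerance=4)
```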

### bfl/flux-kontext-max

FLUX.1 Kontext creates images from text prompts with unique capabilities for character consistency and advanced editing. It also edits images using simple text prompts. No complex workflows or fine-tuning needed.
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Image cost:** $0.08/image
- **Details:** https://vercel.com/ai-gateway/models/flux-kontext-max

### bfl/flux-kontext-pro

FLUX.1 Kontext creates images from text prompts with unique capabilities for character consistency and advanced editing. It also edits images using simple text prompts. No complex workflows or fine-tuning needed.
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Image cost:** $0.04/image
- **Details:** https://vercel.com/ai-gateway/models/flux-kontext-pro

### bfl/flux-2-flex

FLUX.2 is a completely new base model trained for visual intelligence, not just pixel generation, setting a new standard for both image generation and image editing. With FLUX.2 models you can expect the highest quality, higher resolutions (up to 4MP), and new capabilities like multi-ref images. FLUX.2 [flex] supports customizable image generation and editing with adjustable steps and guidance. It's better at typography and text rendering. It supports up to 10 reference images (up to 14 MP total input).
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Details:** https://vercel.com/ai-gateway/models/flux-2-flex

### bfl/flux-2-klein-4b

FLUX.2 [klein] is Black Forest Labs' fastest image model yet. It unifies image generation and editing in a single, compact model, delivering state-of-the-art quality with end-to-end inference in under a second and enabling interactive workflows, real-time previews, and latency-critical applications.

- **Details:** https://vercel.com/ai-gateway/models/flux-2-klein-4b

### bfl/flux-2-klein-9b

FLUX.2 [klein] is Black Forest Labs' fastest image model yet. It unifies image generation and editing in a single, compact model, delivering state-of-the-art quality with end-to-end inference in under a second and enabling interactive workflows, real-time previews, and latency-critical applications.

- **Details:** https://vercel.com/ai-gateway/models/flux-2-klein-9b

### bfl/flux-2-max

FLUX.2 [max] offers image generation and image editing with the highest quality available. It delivers state-of-the-art image generation and advanced image editing with exceptional realism, precision, and consistency. Built for professional use, FLUX.2 [max] produces production-ready outputs for marketing teams, creatives, filmmakers, and creators around the world.

- **Max output tokens:** 67.3K
- **Details:** https://vercel.com/ai-gateway/models/flux-2-max

### bfl/flux-2-pro

FLUX.2 is a completely new base model trained for visual intelligence, not just pixel generation, setting a new standard for both image generation and image editing. With FLUX.2 models you can expect the highest quality, higher resolutions (up to 4MP), and new capabilities like multi-ref images. FLUX.2 [pro] supports generation, editing, and multiple reference images (up to 9 MP total input).
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Max output tokens:** 67.3K
- **Details:** https://vercel.com/ai-gateway/models/flux-2-pro

### bfl/flux-pro-1.1

FLUX1.1 [pro] is the standard for text-to-image generation with fast, reliable and consistently stunning results.
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Image cost:** $0.04/image
- **Details:** https://vercel.com/ai-gateway/models/flux-pro-1.1

### bfl/flux-pro-1.1-ultra

FLUX1.1 [pro] Ultra delivers ultra-fast, ultra-high-resolution image creation, with more pixels in every picture. Generate images in varying aspect ratios from text at 4MP resolution, fast.
This provider lets you change the moderation level for inputs and outputs via the safety tolerance setting, which defaults to 2 on a scale from 0 (most strict) to 6 (most permissive).

- **Image cost:** $0.06/image
- **Details:** https://vercel.com/ai-gateway/models/flux-pro-1.1-ultra

### bytedance/seed-1.8

Bytedance Seed 1.8 features stronger multimodal understanding and agent capabilities. The model delivers superior performance across a wide range of complex real-world tasks, helping enterprises create greater value.

- **Max output tokens:** 64K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/seed-1.8

### bytedance/seed-1.6

ByteDance's new multimodal deep-thinking model, supporting both text and visual inputs with enhanced reasoning capabilities.

- **Max output tokens:** 32K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/seed-1.6

### bytedance/seedance-v1.0-lite-i2v

Generates videos from image/text descriptions, first and last frames, or reference images. Balances quality and speed. Strong semantic understanding, professional camera work. Supports 480p/720p/1080p, 3-12s.

- **Details:** https://vercel.com/ai-gateway/models/seedance-v1.0-lite-i2v

### bytedance/seedance-v1.0-lite-t2v

Generates videos from text descriptions. Balances quality and speed. Strong semantic understanding, professional camera work, diverse styles. Supports 480p/720p/1080p, 3-12s.

- **Details:** https://vercel.com/ai-gateway/models/seedance-v1.0-lite-t2v

### bytedance/seedance-v1.0-pro

A video generation model that supports multi-shot storytelling. It excels in semantic understanding and instruction following, producing smooth, detailed, and cinematic 1080P HD videos.

- **Details:** https://vercel.com/ai-gateway/models/seedance-v1.0-pro

### bytedance/seedance-v1.0-pro-fast

Seedance 1.0 Pro Fast delivers top performance at an unbeatable price, balancing quality, speed, and cost. Built on Seedance 1.0 Pro’s core strengths, it’s faster and more cost-efficient for creators.

- **Details:** https://vercel.com/ai-gateway/models/seedance-v1.0-pro-fast

### bytedance/seedance-v1.5-pro

ByteDance's Seedance 1.5 Pro is a professional video model using V2A native generation for integrated, synced audio-visual output, enhancing efficiency of professional video creation.

- **Details:** https://vercel.com/ai-gateway/models/seedance-v1.5-pro

### bytedance/seedream-4.0

Seedream 4.0 is a SOTA multimodal image creation model built on a leading architecture. It breaks through the boundaries of traditional text-to-image models by natively supporting text, single-image, and multi-image inputs. Users can freely combine text and images to achieve diverse creative modes within a single model, such as multi-image blending, image editing, and sequential batch image generation with subject consistency, making image creation more free and controllable.

- **Image cost:** $0.03/image
- **Details:** https://vercel.com/ai-gateway/models/seedream-4.0

### bytedance/seedream-4.5

Seedream 4.5 is the latest in-house image generation model developed by ByteDance. Compared with Seedream 4.0, it delivers comprehensive improvements—especially in editing consistency, including better preservation of subject details, lighting, and color tone. It also enhances portrait refinement and small-text rendering. The model’s multi-image composition capabilities have been significantly strengthened, and both reasoning performance and visual aesthetics continue to advance, enabling more accurate and artistically expressive image generation.

- **Image cost:** $0.04/image
- **Details:** https://vercel.com/ai-gateway/models/seedream-4.5

### bytedance/seedream-5.0-lite

ByteDance-Seedream-5.0-lite is the latest image generation model released by BytePlus. For the first time, it introduces web-connected retrieval, enabling the model to fuse real-time online information to significantly improve the timeliness and relevance of generated images. The model’s reasoning and comprehension capabilities are further upgraded, allowing it to accurately interpret complex prompts and visual inputs. In addition, ByteDance-Seedream-5.0-lite delivers notable improvements in global knowledge coverage, reference consistency, and professional-grade scene generation, making it well suited for enterprise-level visual creation workflows.

- **Image cost:** $0.04/image
- **Details:** https://vercel.com/ai-gateway/models/seedream-5.0-lite

### cohere/rerank-v3.5

A model that re-ranks English-language documents and semi-structured data (JSON).

- **Max output tokens:** 4.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/rerank-v3.5

### cohere/rerank-v4-fast

A lighter version of Rerank 4 Pro, this multilingual model re-ranks English and non-English documents and semi-structured data (JSON). It is better suited to low-latency, high-throughput use cases than its Pro variant.

- **Max output tokens:** 32K
- **Details:** https://vercel.com/ai-gateway/models/rerank-v4-fast

### cohere/rerank-v4-pro

A multilingual model that re-ranks English and non-English documents and semi-structured data (JSON). It is better suited to state-of-the-art quality and complex use cases than its Fast variant.

- **Max output tokens:** 32K
- **Details:** https://vercel.com/ai-gateway/models/rerank-v4-pro
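
A reranker like the models above returns a relevance score for each query-document pair, and callers typically keep only the top-n results. A minimal post-processing sketch with hypothetical scores:

```python
# Keep the n most relevant documents given reranker scores, as returned
# by a model such as cohere/rerank-v4-pro. Scores here are hypothetical.
def top_n(docs, scores, n=2):
    ranked = sorted(zip(docs, scores), key=lambda pair: -pair[1])
    return [doc for doc, _ in ranked[:n]]

best = top_n(["doc-a", "doc-b", "doc-c"], [0.12, 0.87, 0.45])  # → ["doc-b", "doc-c"]
```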

### cohere/command-a

Command A is Cohere's most performant model to date, excelling at tool use, agents, retrieval augmented generation (RAG), and multilingual use cases. Command A has a context length of 256K, only requires two GPUs to run, and has 150% higher throughput compared to Command R+ 08-2024.

- **Max output tokens:** 8K
- **Details:** https://vercel.com/ai-gateway/models/command-a

### cohere/embed-v4.0

A model that allows for text, images, or mixed content to be classified or turned into embeddings.

- **Details:** https://vercel.com/ai-gateway/models/embed-v4.0

### deepseek/deepseek-r1

The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.35/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/deepseek-r1

### deepseek/deepseek-v3

Fast general-purpose LLM with enhanced reasoning capabilities

- **Max output tokens:** 163.8K
- **Cached input cost:** $0.14/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/deepseek-v3

### deepseek/deepseek-v3.1

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built on the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. The dataset was expanded with additional long documents, and both training phases were substantially extended: the 32K extension phase was increased 10-fold to 630B tokens, and the 128K extension phase was extended 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

- **Max output tokens:** 163.8K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/deepseek-v3.1

### deepseek/deepseek-v3.1-terminus

DeepSeek-V3.1-Terminus delivers more stable and reliable outputs across benchmarks compared with the previous version and addresses user feedback (e.g., language consistency and agent upgrades).

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.14/1M
- **Details:** https://vercel.com/ai-gateway/models/deepseek-v3.1-terminus

### deepseek/deepseek-v3.2

DeepSeek-V3.2: Official successor to V3.2-Exp.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/deepseek-v3.2

### deepseek/deepseek-v3.2-thinking

Thinking mode of DeepSeek V3.2

- **Max output tokens:** 64K
- **Cached input cost:** $0.03/1M
- **Details:** https://vercel.com/ai-gateway/models/deepseek-v3.2-thinking

### google/gemini-2.0-flash

Gemini 2.0 Flash delivers next-gen features and improved capabilities, including superior speed, built-in tool use, multimodal generation, and a 1M token context window.

- **Max output tokens:** 8.2K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.0-flash

### google/gemini-2.0-flash-lite

Gemini 2.0 Flash-Lite is a Gemini 2.0 Flash variant optimized for cost efficiency and low latency, with a 1M token context window.

- **Max output tokens:** 8.2K
- **Cached input cost:** $0.02/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.0-flash-lite

### google/gemini-2.5-flash

Gemini 2.5 Flash is a thinking model that offers great, well-rounded capabilities. It is designed to offer a balance between price and performance with multimodal support and a 1M token context window.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.5-flash
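
Model ids throughout this list use the `provider/model` slug shown in each heading. As a minimal sketch, assuming the gateway's OpenAI-compatible chat completions API (base URL `https://ai-gateway.vercel.sh/v1` per the API docs linked above; the helper name and prompt are illustrative), a request body can be assembled like this:

```python
# Sketch: building a chat request body for the AI Gateway's
# OpenAI-compatible endpoint (assumed: https://ai-gateway.vercel.sh/v1).
# The slug in each heading of this list goes in the "model" field.
def build_chat_request(model_slug: str, prompt: str) -> dict:
    """Assemble a chat-completions JSON body for a gateway model slug."""
    return {
        "model": model_slug,  # e.g. "google/gemini-2.5-flash"
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("google/gemini-2.5-flash", "Summarize this document.")
```

The same body shape works for any chat model in this list; only the slug changes, which is what makes routing between providers through the gateway a one-line swap.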

### google/gemini-2.5-flash-lite

Gemini 2.5 Flash-Lite is a balanced, low-latency model with configurable thinking budgets and tool connectivity (e.g., Google Search grounding and code execution). It supports multimodal input and offers a 1M-token context window.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.01/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.5-flash-lite

### google/gemini-2.5-flash-lite-preview-09-2025

Gemini 2.5 Flash-Lite is a balanced, low-latency model with configurable thinking budgets and tool connectivity (e.g., Google Search grounding and code execution). It supports multimodal input and offers a 1M-token context window.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.01/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.5-flash-lite-preview-09-2025

### google/gemini-2.5-flash-preview-09-2025

Gemini 2.5 Flash is a thinking model that offers great, well-rounded capabilities. It is designed to offer a balance between price and performance with multimodal support and a 1M token context window.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.5-flash-preview-09-2025

### google/gemini-2.5-pro

Gemini 2.5 Pro is Google's most advanced reasoning Gemini model, capable of solving complex problems. It can comprehend vast datasets and challenging problems from different information sources, including text, audio, images, video, and even entire code repositories.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.5-pro

### google/gemini-3-flash

Google's most intelligent model built for speed, combining frontier intelligence with superior search and grounding.

- **Max output tokens:** 65K
- **Cached input cost:** $0.05/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-3-flash

### google/gemini-3-pro-preview

This model improves upon Gemini 2.5 Pro and is geared toward challenging tasks, especially those involving complex reasoning or agentic workflows. Highlighted improvements include coding, multi-step function calling, planning, reasoning, deep knowledge tasks, and instruction following.

- **Max output tokens:** 64K
- **Cached input cost:** $0.20/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-3-pro-preview

### google/gemini-3.1-flash-image-preview

Gemini 3.1 Flash Image (Nano Banana 2) is optimized for image understanding and generation and offers a balance of price and performance.

- **Max output tokens:** 32.8K
- **Cached input cost:** $0.05/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-3.1-flash-image-preview

### google/gemini-3.1-flash-lite-preview

Gemini 3.1 Flash Lite Preview outperforms 2.5 Flash Lite on overall quality and lands close to 2.5 Flash performance across key capability areas. It is a workhorse model for high-volume use cases, with improvements across audio input/ASR, RAG snippet ranking, translation, data extraction, and code completion.

- **Max output tokens:** 65K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-3.1-flash-lite-preview

### google/gemini-3.1-pro-preview

Improved SWE and agentic capabilities, improved token efficiency and thinking, and expanded thinking levels.

- **Max output tokens:** 64K
- **Cached input cost:** $0.20/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-3.1-pro-preview

### google/gemini-embedding-001

State-of-the-art embedding model with excellent performance across English, multilingual and code tasks.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-embedding-001
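
Embedding models such as this return fixed-length vectors that downstream tasks (semantic search, RAG, clustering) typically compare with cosine similarity. A self-contained sketch of that comparison step (the vectors are toy values standing in for real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" standing in for real model output
query = [0.1, 0.3, 0.5, 0.1]
docs = {
    "doc_a": [0.1, 0.29, 0.52, 0.09],  # near-duplicate of the query
    "doc_b": [0.9, 0.05, 0.02, 0.4],   # unrelated
}
best = max(docs, key=lambda k: cosine_similarity(query, docs[k]))
```

Real embedding dimensions are far larger, but the ranking step is the same: embed the query, embed the corpus once, and rank by similarity.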

### google/gemini-embedding-2

Google’s first fully multimodal embedding model, capable of mapping text, images, video, audio, and PDFs, and interleaved combinations thereof, into a single, unified vector space. Built on the Gemini architecture, it supports 100+ languages.

- **Details:** https://vercel.com/ai-gateway/models/gemini-embedding-2

### google/gemma-4-26b-a4b-it

Gemma is a family of open models built by Google DeepMind. Gemma 4 models are multimodal, handling text and image input (with audio supported on small models) and generating text output. This release includes open-weights models in both pre-trained and instruction-tuned variants. Gemma 4 features a context window of up to 256K tokens and maintains multilingual support in over 140 languages.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemma-4-26b-a4b-it

### google/gemma-4-31b-it

Gemma 4 31B is engineered to tackle the most demanding enterprise workloads and complex reasoning tasks. With an expansive 256K-token context window, the 31B model can effortlessly ingest entire codebases and massive sets of images in a single prompt.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemma-4-31b-it

### google/imagen-4.0-generate-001

Imagen 4: Google's flagship text-to-image model that serves as the go-to choice for a wide variety of high-quality image generation tasks, featuring significant improvements in text rendering over previous models. It now supports up to 2K resolution generation for creating detailed and crisp visuals, making it suitable for everything from marketing assets to artistic compositions.

- **Image cost:** $0.04/image
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/imagen-4.0-generate-001

### google/imagen-4.0-fast-generate-001

Imagen 4 Fast is Google’s speed-optimized variant of the Imagen 4 text-to-image model, designed for rapid, high-volume image generation. It’s ideal for workflows like quick drafts, mockups, and iterative creative exploration. Despite emphasizing speed, it still benefits from the broader Imagen 4 family’s improvements in clarity, text rendering, and stylistic flexibility, and supports high-resolution outputs up to 2K.

- **Image cost:** $0.02/image
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/imagen-4.0-fast-generate-001

### google/imagen-4.0-ultra-generate-001

Imagen 4 Ultra: Highest quality image generation model for detailed and photorealistic outputs.

- **Image cost:** $0.06/image
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/imagen-4.0-ultra-generate-001

### google/gemini-2.5-flash-image

Nano Banana (Gemini 2.5 Flash Image) is Google's image generation and editing model built on Gemini 2.5 Flash. Upgraded for rapid creative workflows, it can generate interleaved text and images and supports conversational, multi-turn image editing in natural language. It’s also locale-aware, enabling culturally and linguistically appropriate image generation for audiences worldwide.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-2.5-flash-image

### google/gemini-3-pro-image

Nano Banana Pro (Gemini 3 Pro Image) builds on Nano Banana's generation capabilities, targeting studio-quality, functional design to help you create and edit high-fidelity, production-ready visuals with precision and control. Improvements include enhanced world knowledge and reasoning, dynamic text and translation, and studio-level controls.

- **Max output tokens:** 32.8K
- **Cached input cost:** $0.20/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gemini-3-pro-image

### google/text-embedding-005

English-focused text embedding model optimized for code and English language tasks.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/text-embedding-005

### google/text-multilingual-embedding-002

Multilingual text embedding model optimized for cross-lingual tasks across many languages.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/text-multilingual-embedding-002

### google/veo-3.0-generate-001

Veo 3 is designed to handle a range of video generation tasks, from cinematic narratives to dynamic character animations. With Veo 3, you can create more immersive experiences by not only generating stunning visuals, but also audio like dialogue and sound effects.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/veo-3.0-generate-001

### google/veo-3.0-fast-generate-001

Veo 3 Fast is a quicker and more cost effective version of Veo 3, allowing developers to create videos with sound while maintaining high quality and optimizing for speed and business use cases. Veo 3 Fast offers both text-to-video and image-to-video modalities.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/veo-3.0-fast-generate-001

### google/veo-3.1-generate-001

Veo 3.1 is Google's state-of-the-art model for generating high-fidelity, 8-second 720p, 1080p, or 4K videos featuring stunning realism and natively generated audio.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/veo-3.1-generate-001

### google/veo-3.1-fast-generate-001

Veo 3.1 Fast is a specialized, high-speed variant of Google DeepMind’s Veo 3.1 text-to-video model, optimized for rapid generation of 8-second, high-fidelity videos. It is designed to create cinematic 720p or 1080p content with improved prompt adherence and native audio, making it ideal for quick, high-quality video clips, social media content, and ad creatives.

- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/veo-3.1-fast-generate-001

### inception/mercury-2

A diffusion-based reasoning LLM that generates text via parallel refinement (not token-by-token), delivering real-time latency with ~1k tokens/sec plus 128K context and built-in tool/JSON support.

- **Max output tokens:** 128K
- **Cached input cost:** $0.03/1M
- **Details:** https://vercel.com/ai-gateway/models/mercury-2

### inception/mercury-coder-small

Mercury Coder Small is ideal for code generation, debugging, and refactoring tasks with minimal latency.

- **Max output tokens:** 16.4K
- **Details:** https://vercel.com/ai-gateway/models/mercury-coder-small

### klingai/kling-v2.5-turbo-i2v

Kling 2.5 Turbo is a major update to the AI video generation model focused on significantly improving speed, video quality, temporal stability, and creative control for creators, making professional-grade AI-generated video faster, more coherent, and easier to direct from text prompts.

- **Details:** https://vercel.com/ai-gateway/models/kling-v2.5-turbo-i2v

### klingai/kling-v2.5-turbo-t2v

Kling 2.5 Turbo is a major update to the AI video generation model focused on significantly improving speed, video quality, temporal stability, and creative control for creators, making professional-grade AI-generated video faster, more coherent, and easier to direct from text prompts.

- **Details:** https://vercel.com/ai-gateway/models/kling-v2.5-turbo-t2v

### klingai/kling-v2.6-i2v

Kling 2.6 introduces a groundbreaking "Native Audio" capability, enabling the generation of complete videos in a single go, including natural voice, action sound effects, and environmental ambient sounds, providing an immersive "what you see is what you hear" experience.

- **Details:** https://vercel.com/ai-gateway/models/kling-v2.6-i2v

### klingai/kling-v2.6-motion-control

Kling 2.6 introduces a groundbreaking "Native Audio" capability, enabling the generation of complete videos in a single go, including natural voice, action sound effects, and environmental ambient sounds, providing an immersive "what you see is what you hear" experience.

- **Details:** https://vercel.com/ai-gateway/models/kling-v2.6-motion-control

### klingai/kling-v2.6-t2v

Kling 2.6 introduces a groundbreaking "Native Audio" capability, enabling the generation of complete videos in a single go, including natural voice, action sound effects, and environmental ambient sounds, providing an immersive "what you see is what you hear" experience.

- **Details:** https://vercel.com/ai-gateway/models/kling-v2.6-t2v

### klingai/kling-v3.0-i2v

Built upon an all-in-one product framework, the Kling 3.0 model series supports full multimodal input and output spanning text, images, audio, and video, bringing the understanding, generation, and editing of video together in one streamlined AI workflow. The models integrate multiple tasks, including text-to-video, image-to-video, reference-to-video, and in-video editing, into a single, native multimodal architecture, enabling the models to follow complex narrative logic, deliver precise shot control, and maintain strong prompt adherence.

- **Details:** https://vercel.com/ai-gateway/models/kling-v3.0-i2v

### klingai/kling-v3.0-t2v

Built upon an all-in-one product framework, the Kling 3.0 model series supports full multimodal input and output spanning text, images, audio, and video, bringing the understanding, generation, and editing of video together in one streamlined AI workflow. The models integrate multiple tasks, including text-to-video, image-to-video, reference-to-video, and in-video editing, into a single, native multimodal architecture, enabling the models to follow complex narrative logic, deliver precise shot control, and maintain strong prompt adherence.

- **Details:** https://vercel.com/ai-gateway/models/kling-v3.0-t2v

### kwaipilot/kat-coder-pro-v2

A high-performance edition designed for complex enterprise projects and SaaS integration.

- **Max output tokens:** 256K
- **Cached input cost:** $0.06/1M
- **Details:** https://vercel.com/ai-gateway/models/kat-coder-pro-v2

### kwaipilot/kat-coder-pro-v1

KAT-Coder-Pro V1 is KwaiKAT's most advanced agentic coding model in the KwaiKAT series. Designed specifically for agentic coding tasks, it excels in real-world software engineering scenarios, achieving a remarkable 73.4% solve rate on the SWE-Bench Verified benchmark. KAT-Coder-Pro V1 delivers top-tier coding performance and has been rigorously tested by thousands of in-house engineers. The model has been optimized for tool-use capability, multi-turn interaction, instruction following, generalization and comprehensive capabilities through a multi-stage training process, including mid-training, supervised fine-tuning (SFT), reinforcement fine-tuning (RFT), and scalable agentic RL.

- **Max output tokens:** 32K
- **Cached input cost:** $0.06/1M
- **Details:** https://vercel.com/ai-gateway/models/kat-coder-pro-v1

### meituan/longcat-flash-chat

LongCat-Flash-Chat is a high-throughput MoE chat model (128k context) optimized for agentic tasks.

- **Max output tokens:** 100K
- **Details:** https://vercel.com/ai-gateway/models/longcat-flash-chat

### meituan/longcat-flash-thinking-2601

A version built for deep and general agentic thinking.

- **Max output tokens:** 32.8K
- **Details:** https://vercel.com/ai-gateway/models/longcat-flash-thinking-2601

### meta/llama-3.1-70b

An update to Meta Llama 3 70B Instruct that includes an expanded 128K context length, multilinguality and improved reasoning capabilities.

- **Max output tokens:** 16.4K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.1-70b

### meta/llama-3.1-8b

Llama 3.1 8B brings powerful performance in a smaller, more efficient package. With improved multilingual support, tool use, and a 128K context length, it enables sophisticated use cases like interactive agents and compact coding assistants while remaining lightweight and accessible.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.10/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.1-8b

### meta/llama-3.2-11b

Instruction-tuned image reasoning generative model (text + images in / text out) optimized for visual recognition, image reasoning, captioning and answering general questions about the image.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.2-11b

### meta/llama-3.2-1b

Text-only model, supporting on-device use cases such as multilingual local knowledge retrieval, summarization, and rewriting.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.2-1b

### meta/llama-3.2-3b

Text-only model, fine-tuned for supporting on-device use cases such as multilingual local knowledge retrieval, summarization, and rewriting.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.2-3b

### meta/llama-3.2-90b

Instruction-tuned image reasoning generative model (text + images in / text out) optimized for visual recognition, image reasoning, captioning and answering general questions about the image.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.2-90b

### meta/llama-3.3-70b

Where performance meets efficiency. This model supports high-performance conversational AI designed for content creation, enterprise applications, and research, offering advanced language understanding capabilities, including text summarization, classification, sentiment analysis, and code generation.

- **Max output tokens:** 32.8K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-3.3-70b

### meta/llama-4-maverick

The Llama 4 models are natively multimodal, enabling text and multimodal experiences. They leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding. Llama 4 Maverick is a model with 17 billion active parameters and 128 experts. Served by DeepInfra.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-4-maverick

### meta/llama-4-scout

The Llama 4 models are natively multimodal, enabling text and multimodal experiences. They leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding. Llama 4 Scout is a model with 17 billion active parameters and 16 experts. Served by DeepInfra.

- **Max output tokens:** 8.2K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/llama-4-scout

### minimax/minimax-m2

MiniMax-M2 redefines efficiency for agents. It is a compact, fast, and cost-effective MoE model (230 billion total parameters with 10 billion active parameters) built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence.

- **Max output tokens:** 205K
- **Cached input cost:** $0.03/1M
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2
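
Several entries in this list are mixture-of-experts (MoE) models: the total parameter count governs memory footprint, while only the active parameters contribute to per-token compute. A back-of-envelope sketch using the common ~2 × active-parameters FLOPs-per-token rule of thumb (an approximation for the forward pass, not a vendor figure):

```python
def moe_flops_per_token(active_params):
    """Rough forward-pass FLOPs per token (~2 x active parameters)."""
    return 2 * active_params

total_params = 230e9   # MiniMax-M2: 230B total parameters
active_params = 10e9   # 10B active parameters per token
ratio = active_params / total_params  # fraction of weights used per token
flops = moe_flops_per_token(active_params)
```

Under this estimate, MiniMax-M2 touches under 5% of its weights per token, which is why MoE models can pair large total capacity with dense-small-model inference cost.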

### minimax/minimax-m2.1

MiniMax M2.1 is optimized for robustness in coding, tool use, instruction following, and long-horizon planning.

- **Max output tokens:** 200K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2.1

### minimax/minimax-m2.1-lightning

MiniMax-M2.1-lightning is a faster version of MiniMax-M2.1, offering the same performance but with significantly higher throughput (output speed ~100 TPS, MiniMax-M2 output speed ~60 TPS).

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.03/1M
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2.1-lightning

### minimax/minimax-m2.5

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. It is capable of handling the entire development process of various complex systems. It covers full-stack projects across multiple platforms including Web, Android, iOS, Windows, and Mac, encompassing server-side APIs, functional logic, and databases.

- **Max output tokens:** 196K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2.5

### minimax/minimax-m2.5-highspeed

M2.5 highspeed: Same performance, faster and more agile (output speed approximately 100 tps)

- **Max output tokens:** 131K
- **Cached input cost:** $0.03/1M
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2.5-highspeed

### minimax/minimax-m2.7

M2.7 delivers outstanding performance in real-world software engineering, including end-to-end full project delivery, log analysis and bug troubleshooting, code security, machine learning, and more.

- **Max output tokens:** 131K
- **Cached input cost:** $0.06/1M
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2.7

### minimax/minimax-m2.7-highspeed

M2.7 Highspeed: Same performance, faster and more agile (output speed approximately 100 tps)

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.06/1M
- **Details:** https://vercel.com/ai-gateway/models/minimax-m2.7-highspeed

### mistral/codestral-embed

Code embedding model that can embed code databases and repositories to power coding assistants.

- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/codestral-embed

### mistral/devstral-2

An enterprise-grade text model that excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.

- **Max output tokens:** 256K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/devstral-2

### mistral/devstral-small

Devstral is an agentic LLM for software engineering tasks, built through a collaboration between Mistral AI and All Hands AI. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.

- **Max output tokens:** 64K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/devstral-small

### mistral/devstral-small-2

Our open source model that excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.

- **Max output tokens:** 256K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/devstral-small-2

### mistral/magistral-medium

Complex thinking, backed by deep understanding, with transparent reasoning you can follow and verify. The model excels in maintaining high-fidelity reasoning across numerous languages, even when switching between languages mid-task.

- **Max output tokens:** 64K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/magistral-medium

### mistral/magistral-small

Complex thinking, backed by deep understanding, with transparent reasoning you can follow and verify. The model excels in maintaining high-fidelity reasoning across numerous languages, even when switching between languages mid-task.

- **Max output tokens:** 64K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/magistral-small

### mistral/ministral-14b

Ministral 3 14B is the largest model in the Ministral 3 family, offering state-of-the-art capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. Optimized for local deployment, it delivers high performance across diverse hardware.

- **Max output tokens:** 256K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/ministral-14b

### mistral/ministral-3b

A compact, efficient model for on-device tasks like smart assistants and local analytics, offering low-latency performance.

- **Max output tokens:** 4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/ministral-3b

### mistral/ministral-8b

A more powerful model with faster, memory-efficient inference, ideal for complex workflows and demanding edge applications.

- **Max output tokens:** 4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/ministral-8b

### mistral/codestral

Codestral is Mistral's cutting-edge language model for coding, released at the end of July 2025. It specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction, and test generation.

- **Max output tokens:** 4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/codestral

### mistral/mistral-embed

General-purpose text embedding model for semantic search, similarity, clustering, and RAG workflows.

- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/mistral-embed

### mistral/mistral-large-3

Mistral Large 3 2512 is Mistral’s most capable model to date. It has a sparse mixture-of-experts architecture with 41B active parameters (675B total).

- **Max output tokens:** 256K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/mistral-large-3

### mistral/mistral-medium

Mistral Medium 3 delivers frontier performance while being an order of magnitude less expensive. For instance, the model performs at or above 90% of Claude 3.7 Sonnet on benchmarks across the board at a significantly lower cost.

- **Max output tokens:** 64K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/mistral-medium

### mistral/mistral-nemo

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/mistral-nemo

### mistral/mistral-small

Mistral Small is the ideal choice for simple tasks that one can do in bulk - like Classification, Customer Support, or Text Generation. It offers excellent performance at an affordable price point.

- **Max output tokens:** 4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/mistral-small

### mistral/mixtral-8x22b-instruct

Mixtral 8x22B Instruct is an open-source mixture-of-experts model by Mistral, served by Fireworks.

- **Max output tokens:** 2.0K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/mixtral-8x22b-instruct

### mistral/pixtral-12b

A 12B model with image understanding capabilities in addition to text.

- **Max output tokens:** 4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/pixtral-12b

### mistral/pixtral-large

Pixtral Large is the second model in Mistral's multimodal family and demonstrates frontier-level image understanding. In particular, the model is able to understand documents, charts, and natural images, while maintaining the leading text-only understanding of Mistral Large 2.

- **Max output tokens:** 4K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/pixtral-large

### moonshotai/kimi-k2

Kimi K2 is a mixture-of-experts (MoE) model with a 128K context length and powerful code and agent capabilities. It has 1T total parameters with 32B activated parameters. In benchmark tests across major categories, including general knowledge reasoning, programming, mathematics, and agent capabilities, K2 outperforms other mainstream open-source models.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/kimi-k2

### moonshotai/kimi-k2-0905

Kimi K2 0905 is an updated version of Kimi K2, a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Kimi K2 0905 has improved coding abilities, enhanced agentic tool use, and a longer (262K) context window.

- **Max output tokens:** 128K
- **Cached input cost:** $0.30/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/kimi-k2-0905

### moonshotai/kimi-k2-thinking

Kimi K2 Thinking is an advanced open-source thinking model by Moonshot AI. It can execute up to 200 – 300 sequential tool calls without human interference, reasoning coherently across hundreds of steps to solve complex problems. Built as a thinking agent, it reasons step by step while using tools, achieving state-of-the-art performance on Humanity's Last Exam (HLE), BrowseComp, and other benchmarks, with major gains in reasoning, agentic search, coding, writing, and general capabilities.

- **Max output tokens:** 262.1K
- **Cached input cost:** $0.15/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/kimi-k2-thinking

### moonshotai/kimi-k2-thinking-turbo

High-speed version of kimi-k2-thinking, suitable for scenarios requiring both deep reasoning and extremely fast responses.

- **Max output tokens:** 262.1K
- **Cached input cost:** $0.15/1M
- **Details:** https://vercel.com/ai-gateway/models/kimi-k2-thinking-turbo

### moonshotai/kimi-k2-turbo

Kimi K2 Turbo is the high-speed version of kimi-k2, with the same model parameters as kimi-k2. Output speed is increased to 60 tokens per second (up to a maximum of 100 tokens per second), and the context length is 256K.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.15/1M
- **Details:** https://vercel.com/ai-gateway/models/kimi-k2-turbo

### moonshotai/kimi-k2.5

kimi-k2.5 is Kimi's most versatile model to date, featuring a native multimodal architecture that supports both visual and text input, thinking and non-thinking modes, and dialogue and agent tasks.

- **Max output tokens:** 262.1K
- **Cached input cost:** $0.10/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/kimi-k2.5

### morph/morph-v3-fast

Morph offers a specialized AI model that applies code changes suggested by frontier models (like Claude or GPT-4o) to your existing code files fast (4,500+ tokens/second). It acts as the final step in the AI coding workflow. Supports 16K input tokens and 16K output tokens.

- **Max output tokens:** 16.4K
- **Details:** https://vercel.com/ai-gateway/models/morph-v3-fast

### morph/morph-v3-large

Morph offers a specialized AI model that applies code changes suggested by frontier models (like Claude or GPT-4o) to your existing code files fast (2,500+ tokens/second). It acts as the final step in the AI coding workflow. Supports 16K input tokens and 16K output tokens.

- **Max output tokens:** 16.4K
- **Details:** https://vercel.com/ai-gateway/models/morph-v3-large

### nvidia/nemotron-3-nano-30b-a3b

NVIDIA Nemotron 3 Nano is an open reasoning model optimized for fast, cost-efficient inference. Built with a hybrid MoE and Mamba architecture and trained on NVIDIA-curated synthetic reasoning data, it delivers strong multi-step reasoning with stable latency and predictable performance for agentic and production workloads.

- **Max output tokens:** 262.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/nemotron-3-nano-30b-a3b

### nvidia/nemotron-3-super-120b-a12b

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in complex multi-agent applications. It delivers up to 7x higher throughput, providing fast, cost-efficient inference for agentic tasks. Additionally, a long context window gives the model long-term memory, preventing AI agents from losing focus on long, multi-step tasks and ensuring high-accuracy results. Fully open with weights, datasets, and recipes, Super allows easy customization and secure deployment anywhere.

- **Max output tokens:** 32K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/nemotron-3-super-120b-a12b

### nvidia/nemotron-nano-12b-v2-vl

Nemotron Nano 12B v2 VL is an auto-regressive vision-language model built on an optimized transformer architecture. It enables multi-image reasoning and video understanding, along with strong document intelligence, visual Q&A, and summarization capabilities.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/nemotron-nano-12b-v2-vl

### nvidia/nemotron-nano-9b-v2

NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning behavior can be controlled via the system prompt: if the user prefers a final answer without intermediate reasoning traces, the model can be configured to omit them.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/nemotron-nano-9b-v2
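
The system-prompt control described above can be expressed as a small request builder. The exact control strings (`/think`, `/no_think`) are assumptions based on the Nemotron family's convention; verify them against the current model card before use:

```typescript
// Sketch of system-prompt reasoning control for nemotron-nano-9b-v2.
// The "/think" / "/no_think" switches are assumed from the model family's
// convention, not from the Gateway docs.

interface ChatRequest {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
}

function nemotronRequest(prompt: string, reasoning: boolean): ChatRequest {
  return {
    model: "nvidia/nemotron-nano-9b-v2",
    messages: [
      // The system prompt decides whether a reasoning trace is emitted.
      { role: "system", content: reasoning ? "/think" : "/no_think" },
      { role: "user", content: prompt },
    ],
  };
}
```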

### openai/gpt-4o-mini-search-preview

GPT-4o mini Search Preview is a specialized model trained to understand and execute web search queries with the Chat Completions API. In addition to token fees, web search queries have a fee per tool call.

- **Max output tokens:** 16.4K
- **Details:** https://vercel.com/ai-gateway/models/gpt-4o-mini-search-preview
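
Since this model is driven through the standard Chat Completions API, a call through the Gateway looks like an ordinary chat request. The base URL below follows the Gateway's OpenAI-compatible API (verify against the Gateway docs), and remember that each web search tool call is billed on top of token fees:

```typescript
// Chat Completions request for the search-preview model, shaped for the
// Gateway's OpenAI-compatible endpoint (URL assumed from the Gateway docs).

const GATEWAY_URL = "https://ai-gateway.vercel.sh/v1/chat/completions";

function searchRequestBody(query: string) {
  return {
    model: "openai/gpt-4o-mini-search-preview",
    messages: [{ role: "user", content: query }],
  };
}

async function askWithSearch(query: string, apiKey: string): Promise<string> {
  const res = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(searchRequestBody(query)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```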

### openai/gpt-5-chat

GPT-5 Chat points to the GPT-5 snapshot currently used in ChatGPT.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5-chat

### openai/gpt-5.1-codex-max

GPT-5.1-Codex-Max is purpose-built for agentic coding.

- **Max output tokens:** 128K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.1-codex-max

### openai/gpt-5.1-codex-mini

GPT-5.1 Codex mini is a smaller, faster, and cheaper version of GPT-5.1 Codex.

- **Max output tokens:** 128K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.1-codex-mini

### openai/gpt-5.1-thinking

An upgraded version of GPT-5 that adapts its thinking time to the question, spending more time on complex problems and responding more quickly to simpler tasks.

- **Max output tokens:** 128K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.1-thinking

### openai/gpt-5.2

GPT-5.2 is OpenAI's best general-purpose model, part of the GPT-5 flagship model family. It's their most intelligent model yet for both general and agentic tasks.

- **Max output tokens:** 128K
- **Cached input cost:** $0.17/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.2

### openai/gpt-5.2-pro

Version of GPT-5.2 that produces smarter and more precise responses.

- **Max output tokens:** 128K
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.2-pro

### openai/gpt-5.2-chat

GPT-5.2 Chat points to gpt-5.2-chat-latest, the model powering ChatGPT. It is OpenAI's best general-purpose model, part of the GPT-5 flagship model family.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.17/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.2-chat

### openai/gpt-5.2-codex

GPT-5.2-Codex is a version of GPT-5.2 further optimized for agentic coding in Codex, including improvements on long-horizon work through context compaction, stronger performance on large code changes like refactors and migrations, improved performance in Windows environments, and significantly stronger cybersecurity capabilities.

- **Max output tokens:** 128K
- **Cached input cost:** $0.17/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.2-codex

### openai/gpt-5.3-codex

GPT-5.3-Codex advances both the frontier coding performance of GPT-5.2-Codex and the reasoning and professional knowledge capabilities of GPT-5.2, together in one model, which is also 25% faster. This enables it to take on long-running tasks that involve research, tool use, and complex execution.

- **Max output tokens:** 128K
- **Cached input cost:** $0.17/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.3-codex

### openai/gpt-5.4

GPT-5.4 is OpenAI's best general-purpose model, part of the GPT-5 flagship model family. It's their most intelligent model yet for both general and agentic tasks.

- **Max output tokens:** 128K
- **Cached input cost:** $0.25/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.4

### openai/gpt-5.4-mini

GPT-5.4 Mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.

- **Max output tokens:** 128K
- **Cached input cost:** $0.07/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.4-mini

### openai/gpt-5.4-nano

GPT-5.4 Nano is designed for tasks where speed and cost matter most like classification, data extraction, ranking, and sub-agents.

- **Max output tokens:** 128K
- **Cached input cost:** $0.02/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.4-nano

### openai/gpt-5.4-pro

GPT-5.4 Pro uses more compute to think harder and provide consistently better answers. It's designed to tackle tough problems.

- **Max output tokens:** 128K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.4-pro

### openai/gpt-image-1

GPT Image 1 is OpenAI's new state-of-the-art image generation model. It is a natively multimodal language model that accepts both text and image inputs, and produces image outputs.

- **Cached input cost:** $1.25/1M
- **Details:** https://vercel.com/ai-gateway/models/gpt-image-1

### openai/gpt-image-1-mini

A cost-efficient version of GPT Image 1. It is a natively multimodal language model that accepts both text and image inputs, and produces image outputs.

- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/gpt-image-1-mini

### openai/gpt-image-1.5

GPT Image 1.5 is OpenAI's latest image generation model, with better instruction following and adherence to prompts.

- **Cached input cost:** $1.25/1M
- **Details:** https://vercel.com/ai-gateway/models/gpt-image-1.5

### openai/gpt-3.5-turbo

OpenAI's most capable and cost effective model in the GPT-3.5 family optimized for chat purposes, but also works well for traditional completions tasks.

- **Max output tokens:** 4.1K
- **Details:** https://vercel.com/ai-gateway/models/gpt-3.5-turbo

### openai/gpt-3.5-turbo-instruct

Similar capabilities to GPT-3-era models. Compatible with the legacy Completions endpoint but not Chat Completions.

- **Max output tokens:** 4.1K
- **Details:** https://vercel.com/ai-gateway/models/gpt-3.5-turbo-instruct
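
Because the instruct model targets the legacy Completions shape (a `prompt` string) rather than Chat Completions (a `messages` array), client code has to pick the request body per model. A sketch, assuming the Gateway exposes both OpenAI-compatible routes:

```typescript
// Picks the legacy Completions body for the instruct model and the
// Chat Completions body for everything else. Routes are the standard
// OpenAI-compatible paths; confirm Gateway support for /v1/completions.

function completionBody(model: string, text: string) {
  const isLegacy = model === "openai/gpt-3.5-turbo-instruct";
  return isLegacy
    ? { model, prompt: text, max_tokens: 256 } // POST .../v1/completions
    : { model, messages: [{ role: "user", content: text }] }; // POST .../v1/chat/completions
}
```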

### openai/gpt-4-turbo

gpt-4-turbo from OpenAI has broad general knowledge and domain expertise allowing it to follow complex instructions in natural language and solve difficult problems accurately. It has a knowledge cutoff of April 2023 and a 128,000 token context window.

- **Max output tokens:** 4.1K
- **Details:** https://vercel.com/ai-gateway/models/gpt-4-turbo

### openai/gpt-4.1

GPT 4.1 is OpenAI's flagship model for complex tasks. It is well suited for problem solving across domains.

- **Max output tokens:** 32.8K
- **Cached input cost:** $0.50/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-4.1

### openai/gpt-4.1-mini

GPT 4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.

- **Max output tokens:** 32.8K
- **Cached input cost:** $0.10/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-4.1-mini

### openai/gpt-4.1-nano

GPT-4.1 nano is the fastest, most cost-effective GPT 4.1 model.

- **Max output tokens:** 32.8K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-4.1-nano

### openai/gpt-4o

GPT-4o from OpenAI has broad general knowledge and domain expertise allowing it to follow complex instructions in natural language and solve difficult problems accurately. It matches GPT-4 Turbo performance with a faster and cheaper API.

- **Max output tokens:** 16.4K
- **Cached input cost:** $1.25/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-4o

### openai/gpt-4o-mini

GPT-4o mini from OpenAI is their most advanced and cost-efficient small model. It is multi-modal (accepting text or image inputs and outputting text) and has higher intelligence than gpt-3.5-turbo but is just as fast.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.07/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-4o-mini

### openai/gpt-5

GPT-5 is OpenAI's flagship language model that excels at complex reasoning, broad real-world knowledge, code-intensive, and multi-step agentic tasks.

- **Max output tokens:** 128K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5

### openai/gpt-5-mini

GPT-5 mini is a cost optimized model that excels at reasoning/chat tasks. It offers an optimal balance between speed, cost, and capability.

- **Max output tokens:** 128K
- **Cached input cost:** $0.03/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5-mini

### openai/gpt-5-nano

GPT-5 nano is a high throughput model that excels at simple instruction or classification tasks.

- **Max output tokens:** 128K
- **Cached input cost:** $0.01/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5-nano

### openai/gpt-5-pro

GPT-5 pro uses more compute to think harder and provide consistently better answers. Since GPT-5 pro is designed to tackle tough problems, some requests may take several minutes to finish.

- **Max output tokens:** 272K
- **Details:** https://vercel.com/ai-gateway/models/gpt-5-pro

### openai/gpt-5-codex

GPT-5-Codex is a version of GPT-5 optimized for agentic coding tasks in Codex or similar environments.

- **Max output tokens:** 128K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5-codex

### openai/gpt-5.1-instant

GPT-5.1 Instant (or GPT-5.1 chat) is a warmer and more conversational version of GPT-5-chat, with improved instruction following and adaptive reasoning for deciding when to think before responding.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.1-instant

### openai/gpt-5.1-codex

GPT-5.1-Codex is a version of GPT-5.1 optimized for agentic coding tasks in Codex or similar environments.

- **Max output tokens:** 128K
- **Cached input cost:** $0.13/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.1-codex

### openai/gpt-5.3-chat

GPT-5.3 Chat points to gpt-5.3-chat-latest, the model powering ChatGPT. It is OpenAI's best general-purpose model, part of the GPT-5 flagship model family.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.17/1M
- **Details:** https://vercel.com/ai-gateway/models/gpt-5.3-chat

### openai/gpt-oss-120b

Extremely capable general-purpose LLM with strong, controllable reasoning capabilities.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.25/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-oss-120b

### openai/gpt-oss-20b

A compact, open-weight language model optimized for low-latency and resource-constrained environments, including local and edge deployments.

- **Max output tokens:** 128K
- **Cached input cost:** $0.04/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-oss-20b

### openai/gpt-oss-safeguard-20b

OpenAI's first open weight reasoning model specifically trained for safety classification tasks. Fine-tuned from GPT-OSS, this model helps classify text content based on customizable policies, enabling bring-your-own-policy Trust & Safety AI where your own taxonomy, definitions, and thresholds guide classification decisions.

- **Max output tokens:** 65.5K
- **Cached input cost:** $0.04/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/gpt-oss-safeguard-20b
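
The bring-your-own-policy pattern amounts to sending your policy as the system message and the content to classify as the user message. A sketch; the policy wording and label set below are hypothetical, not taken from OpenAI's documentation:

```typescript
// Bring-your-own-policy classification: the policy (taxonomy, definitions,
// thresholds) rides in the system prompt; the text to classify is the user
// message. Labels below are illustrative only.

function safeguardRequest(policy: string, content: string) {
  return {
    model: "openai/gpt-oss-safeguard-20b",
    messages: [
      { role: "system", content: policy },
      { role: "user", content },
    ],
  };
}

// Hypothetical two-label spam policy.
const examplePolicy = [
  "Classify the user message under exactly one label:",
  "ALLOW - ordinary content",
  "FLAG - content that violates the spam policy",
  "Respond with the label only.",
].join("\n");
```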

### openai/o1

o1 is OpenAI's flagship reasoning model, designed for complex problems that require deep thinking. It provides strong reasoning capabilities with improved accuracy for complex multi-step tasks.

- **Max output tokens:** 100K
- **Cached input cost:** $7.50/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/o1

### openai/o3

OpenAI's o3 is their most powerful reasoning model, setting new state-of-the-art benchmarks in coding, math, science, and visual perception. It excels at complex queries requiring multi-faceted analysis, with particular strength in analyzing images, charts, and graphics.

- **Max output tokens:** 100K
- **Cached input cost:** $0.50/1M
- **Details:** https://vercel.com/ai-gateway/models/o3

### openai/o3-pro

The o-series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o3-pro model uses more compute to think harder and provide consistently better answers.

- **Max output tokens:** 100K
- **Details:** https://vercel.com/ai-gateway/models/o3-pro

### openai/o3-deep-research

o3-deep-research is OpenAI's most advanced model for deep research, designed to tackle complex, multi-step research tasks. It can search and synthesize information from across the internet as well as from your own data, brought in through MCP connectors.

- **Max output tokens:** 100K
- **Cached input cost:** $2.50/1M
- **Details:** https://vercel.com/ai-gateway/models/o3-deep-research

### openai/o3-mini

o3-mini is OpenAI's most recent small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini.

- **Max output tokens:** 100K
- **Cached input cost:** $0.55/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/o3-mini

### openai/o4-mini

OpenAI's o4-mini delivers fast, cost-efficient reasoning with exceptional performance for its size, particularly excelling in math (best-performing on AIME benchmarks), coding, and visual tasks.

- **Max output tokens:** 100K
- **Cached input cost:** $0.28/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/o4-mini

### openai/text-embedding-3-large

OpenAI's most capable embedding model for both English and non-English tasks.

- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/text-embedding-3-large

### openai/text-embedding-3-small

OpenAI's improved, more performant version of their ada embedding model.

- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/text-embedding-3-small

### openai/text-embedding-ada-002

OpenAI's legacy text embedding model.

- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/text-embedding-ada-002
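
The three embedding models above share the OpenAI-style embeddings request shape, and the returned vectors are typically compared with cosine similarity. A sketch, assuming the Gateway's OpenAI-compatible `/embeddings` route:

```typescript
// Embedding request shaped for an OpenAI-compatible /embeddings route,
// plus cosine similarity for comparing the returned vectors.

function embeddingRequest(model: string, input: string[]) {
  return { model, input }; // e.g. model: "openai/text-embedding-3-small"
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

Identical vectors score 1, orthogonal vectors score 0, which is why cosine similarity is the usual ranking metric for retrieval over these embeddings.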

### perplexity/sonar

Perplexity's lightweight offering with search grounding, quicker and cheaper than Sonar Pro.

- **Max output tokens:** 8K
- **Details:** https://vercel.com/ai-gateway/models/sonar

### perplexity/sonar-pro

Perplexity's premier offering with search grounding, supporting advanced queries and follow-ups.

- **Max output tokens:** 8K
- **Details:** https://vercel.com/ai-gateway/models/sonar-pro

### perplexity/sonar-reasoning-pro

A premium reasoning-focused model that outputs Chain of Thought (CoT) in responses, providing comprehensive explanations with enhanced search capabilities and multiple search queries per request.

- **Max output tokens:** 8K
- **Details:** https://vercel.com/ai-gateway/models/sonar-reasoning-pro

### prime-intellect/intellect-3

INTELLECT-3 is a 100B+ parameter MoE model trained with large-scale reinforcement learning on Prime Intellect's end-to-end stack, achieving state-of-the-art performance for its size across math, code, and reasoning.

- **Max output tokens:** 131.1K
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/intellect-3

### prodia/flux-fast-schnell

Lightning-fast image generation.

- **Details:** https://vercel.com/ai-gateway/models/flux-fast-schnell

### recraft/recraft-v2

Recraft V2 is an image generation model released in March 2024 and the first model trained from scratch by Recraft. With 20 billion parameters, it was a breakthrough in human anatomical accuracy and the first to support brand consistency and brand color inputs. It also introduced vector image generation (SVG output), as well as minimalistic icon and illustration styles.

- **Image cost:** $0.02/1M
- **Details:** https://vercel.com/ai-gateway/models/recraft-v2

### recraft/recraft-v3

V3 introduced major advances in photorealism and text rendering. It was the first Recraft model to generate mid-size text accurately and, as of 2025, is the only model capable of placing text at specific positions in an image.

- **Image cost:** $0.04/1M
- **Details:** https://vercel.com/ai-gateway/models/recraft-v3

### recraft/recraft-v4

The model delivers strong photorealism, including realistic skin rendering and natural textures, while avoiding common synthetic artifacts. It produces more distinctive lighting, composition, diverse subjects, contemporary styling, and carefully considered scene elements. For illustration, it generates original characters and forms with sophisticated and unexpected color combinations.

- **Image cost:** $0.04/1M
- **Details:** https://vercel.com/ai-gateway/models/recraft-v4

### recraft/recraft-v4-pro

The model delivers strong photorealism, including realistic skin rendering and natural textures, while avoiding common synthetic artifacts. It produces more distinctive lighting, composition, diverse subjects, contemporary styling, and carefully considered scene elements. For illustration, it generates original characters and forms with sophisticated and unexpected color combinations.

- **Image cost:** $0.25/1M
- **Details:** https://vercel.com/ai-gateway/models/recraft-v4-pro

### voyage/rerank-2.5

A generalist reranker optimized for quality with instruction-following and multilingual support.

- **Max output tokens:** 32K
- **Details:** https://vercel.com/ai-gateway/models/rerank-2.5

### voyage/rerank-2.5-lite

A generalist reranker optimized for both latency and quality with instruction-following and multilingual support.

- **Max output tokens:** 32K
- **Details:** https://vercel.com/ai-gateway/models/rerank-2.5-lite
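
A reranker scores (query, document) pairs, and the caller reorders documents by those scores. The response shape below (a per-document `index` and `relevance_score`) mirrors Voyage's rerank API as commonly documented; verify the field names before relying on them:

```typescript
// Applies a reranker's scores to the original document list: sort results
// by relevance_score descending, then map indices back to documents.
// The result shape is assumed from Voyage's rerank API convention.

interface RerankResult {
  index: number;
  relevance_score: number;
}

function applyRerank(docs: string[], results: RerankResult[]): string[] {
  return [...results]
    .sort((a, b) => b.relevance_score - a.relevance_score)
    .map((r) => docs[r.index]);
}
```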

### voyage/voyage-3-large

Voyage AI's embedding model with the best general-purpose and multilingual retrieval quality.

- **Details:** https://vercel.com/ai-gateway/models/voyage-3-large

### voyage/voyage-3.5

Voyage AI's embedding model optimized for general-purpose and multilingual retrieval quality.

- **Details:** https://vercel.com/ai-gateway/models/voyage-3.5

### voyage/voyage-3.5-lite

Voyage AI's embedding model optimized for latency and cost.

- **Details:** https://vercel.com/ai-gateway/models/voyage-3.5-lite

### voyage/voyage-4

Optimized for general-purpose and multilingual retrieval quality. All embeddings created with the 4 series are compatible with each other.

- **Details:** https://vercel.com/ai-gateway/models/voyage-4

### voyage/voyage-4-large

The best general-purpose and multilingual retrieval quality. All embeddings created with the 4 series are compatible with each other.

- **Details:** https://vercel.com/ai-gateway/models/voyage-4-large

### voyage/voyage-4-lite

Optimized for latency and cost. All embeddings created with the 4 series are compatible with each other.

- **Details:** https://vercel.com/ai-gateway/models/voyage-4-lite

### voyage/voyage-code-2

Voyage AI's embedding model optimized for code retrieval (17% better than alternatives). This is the previous generation of code embeddings models.

- **Details:** https://vercel.com/ai-gateway/models/voyage-code-2

### voyage/voyage-code-3

Voyage AI's embedding model optimized for code retrieval.

- **Details:** https://vercel.com/ai-gateway/models/voyage-code-3

### voyage/voyage-finance-2

Voyage AI's embedding model optimized for finance retrieval and RAG.

- **Details:** https://vercel.com/ai-gateway/models/voyage-finance-2

### voyage/voyage-law-2

Voyage AI's embedding model optimized for legal retrieval and RAG.

- **Details:** https://vercel.com/ai-gateway/models/voyage-law-2

### xai/grok-3

xAI's flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, healthcare, law, and science.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.75/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-3

### xai/grok-3-fast

xAI's flagship model that excels at enterprise use cases like data extraction, coding, and text summarization. Possesses deep domain knowledge in finance, healthcare, law, and science. The fast model variant is served on faster infrastructure, offering response times that are significantly faster than the standard. The increased speed comes at a higher cost per output token.

- **Max output tokens:** 131.1K
- **Cached input cost:** $1.25/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-3-fast

### xai/grok-3-mini

xAI's lightweight model that thinks before responding. Great for simple or logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.07/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-3-mini

### xai/grok-3-mini-fast

xAI's lightweight model that thinks before responding. Great for simple or logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible. The fast model variant is served on faster infrastructure, offering response times that are significantly faster than the standard. The increased speed comes at a higher cost per output token.

- **Max output tokens:** 131.1K
- **Details:** https://vercel.com/ai-gateway/models/grok-3-mini-fast

### xai/grok-4

xAI's latest and greatest flagship model, offering unparalleled performance in natural language, math, and reasoning: the perfect jack of all trades.

- **Max output tokens:** 256K
- **Cached input cost:** $0.75/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4

### xai/grok-4-fast-non-reasoning

Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning.

- **Max output tokens:** 256K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4-fast-non-reasoning

### xai/grok-4-fast-reasoning

Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. It comes in two flavors: non-reasoning and reasoning.

- **Max output tokens:** 256K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4-fast-reasoning

### xai/grok-4.1-fast-non-reasoning

Grok 4.1 Fast is xAI's best tool-calling model with a 2M context window. It reasons and completes agentic tasks accurately and rapidly, excelling at complex real-world use cases such as customer support and finance. To optimize for speed, use this variant; otherwise, use the reasoning version.

- **Max output tokens:** 30K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.1-fast-non-reasoning

### xai/grok-4.1-fast-reasoning

Grok 4.1 Fast is xAI's best tool-calling model with a 2M context window. It reasons and completes agentic tasks accurately and rapidly, excelling at complex real-world use cases such as customer support and finance. To optimize for maximal intelligence, use this variant; otherwise, use the non-reasoning version.

- **Max output tokens:** 30K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.1-fast-reasoning

### xai/grok-4.20-non-reasoning-beta

Grok 4.20 Beta is the newest flagship model from xAI with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on the market with strict prompt adherence, delivering consistently precise and truthful responses.

- **Max output tokens:** 2M
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.20-non-reasoning-beta

### xai/grok-4.20-reasoning-beta

Grok 4.20 Beta is the newest flagship model from xAI with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on the market with strict prompt adherence, delivering consistently precise and truthful responses.

- **Max output tokens:** 2M
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.20-reasoning-beta

### xai/grok-4.20-multi-agent-beta

Multiple agents collaborate in parallel to perform deep research tasks.

- **Max output tokens:** 2M
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.20-multi-agent-beta

### xai/grok-4.20-multi-agent

Multiple agents collaborate in parallel to perform deep research tasks.

- **Max output tokens:** 2M
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.20-multi-agent

### xai/grok-4.20-non-reasoning

Grok 4.20 Beta is the newest flagship model from xAI with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on the market with strict prompt adherence, delivering consistently precise and truthful responses.

- **Max output tokens:** 2M
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.20-non-reasoning

### xai/grok-4.20-reasoning

Grok 4.20 Beta is the newest flagship model from xAI with industry-leading speed and agentic tool calling capabilities. It combines the lowest hallucination rate on the market with strict prompt adherence, delivering consistently precise and truthful responses.

- **Max output tokens:** 2M
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-4.20-reasoning

### xai/grok-code-fast-1

xAI's latest coding model that offers fast agentic coding with a 256K context window.

- **Max output tokens:** 256K
- **Cached input cost:** $0.02/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-code-fast-1

### xai/grok-imagine-video

State-of-the-art video generation across quality, cost, and latency. Grok Imagine is xAI's most powerful video-audio generative model yet. Bring an image to life, start from a simple text prompt, or even refine a complex cinematic sequence.

- **Details:** https://vercel.com/ai-gateway/models/grok-imagine-video

### xai/grok-imagine-image

Generate high-quality images from text prompts with xAI's imagine API.

- **Image cost:** $0.02/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-imagine-image

### xai/grok-imagine-image-pro

Generate high-quality images from text prompts with xAI's imagine API.

- **Image cost:** $0.07/1M
- **Details:** https://vercel.com/ai-gateway/models/grok-imagine-image-pro

### xiaomi/mimo-v2-flash

Xiaomi MiMo-V2-Flash is a proprietary MoE model developed by Xiaomi, designed for extreme inference efficiency with 309B total parameters (15B active). By incorporating an innovative hybrid attention architecture and multi-layer MTP inference acceleration, it ranks among the top two models globally across multiple agent benchmarks.

- **Max output tokens:** 32K
- **Cached input cost:** $0.02/1M
- **Details:** https://vercel.com/ai-gateway/models/mimo-v2-flash

### xiaomi/mimo-v2-pro

Xiaomi MiMo-V2-Pro is built for demanding real-world Agent workflows. It has over 1T total parameters, with 42B active parameters, uses an innovative hybrid attention architecture, and supports an ultra-long context window of up to 1M tokens.

- **Max output tokens:** 128K
- **Cached input cost:** $0.20/1M
- **Details:** https://vercel.com/ai-gateway/models/mimo-v2-pro

### zai/glm-4.5

GLM-4.5 and GLM-4.5-Air are Z.ai's latest flagship models, purpose-built as foundational models for agent-oriented applications. Both leverage a Mixture-of-Experts (MoE) architecture. GLM-4.5 has a total parameter count of 355B with 32B active parameters per forward pass, while GLM-4.5-Air adopts a more streamlined design with 106B total parameters and 12B active parameters.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.11/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-4.5

### zai/glm-4.5-air

GLM-4.5 and GLM-4.5-Air are our latest flagship models, purpose-built as foundational models for agent-oriented applications. Both leverage a Mixture-of-Experts (MoE) architecture. GLM-4.5 has a total parameter count of 355B with 32B active parameters per forward pass, while GLM-4.5-Air adopts a more streamlined design with 106B total parameters and 12B active parameters.

- **Max output tokens:** 96K
- **Cached input cost:** $0.03/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-4.5-air

### zai/glm-4.5v

Built on the GLM-4.5-Air base model, GLM-4.5V inherits proven techniques from GLM-4.1V-Thinking while achieving effective scaling through a powerful 106B-parameter MoE architecture.

- **Max output tokens:** 16.4K
- **Cached input cost:** $0.11/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-4.5v

### zai/glm-4.6

As the latest iteration in the GLM series, GLM-4.6 achieves comprehensive enhancements across multiple domains, including real-world coding, long-context processing, reasoning, searching, writing, and agentic applications.

- **Max output tokens:** 202.8K
- **Cached input cost:** $0.11/1M
- **Zero data retention:** available
- **Details:** https://vercel.com/ai-gateway/models/glm-4.6
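
Cached input pricing like the $0.11/1M listed above can be folded into a per-request cost estimate. A minimal sketch: the helper name and the $2.00/1M base input rate below are placeholders for illustration, not GLM-4.6's actual base price; only the cached rate comes from this entry.

```typescript
// Estimate the input-side cost of one request when part of the prompt is
// served from the provider's prompt cache. All rates are USD per 1M tokens.
function estimateInputCost(
  inputTokens: number,
  cacheHitRatio: number, // fraction of input tokens served from cache (0..1)
  baseRatePer1M: number, // hypothetical uncached input rate
  cachedRatePer1M: number, // e.g. $0.11/1M for zai/glm-4.6 per this entry
): number {
  const cachedTokens = inputTokens * cacheHitRatio;
  const freshTokens = inputTokens - cachedTokens;
  return (freshTokens * baseRatePer1M + cachedTokens * cachedRatePer1M) / 1_000_000;
}

// A 200K-token prompt with 80% cache hits, assuming a $2.00/1M base rate:
const cost = estimateInputCost(200_000, 0.8, 2.0, 0.11);
console.log(cost.toFixed(4)); // prints 0.0976
```

The higher the cache hit ratio on repeated prompt prefixes, the closer the effective rate approaches the cached price rather than the base price.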

### zai/glm-4.7

GLM-4.7 is Z.ai’s latest flagship model, with major upgrades focused on two key areas: stronger coding capabilities and more stable multi-step reasoning and execution.

- **Max output tokens:** 131.1K
- **Cached input cost:** $2.25/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/glm-4.7

### zai/glm-4.7-flash

GLM-4.7-Flash balances high performance with efficiency, making it the perfect lightweight deployment option. Beyond coding, it is also recommended for creative writing, translation, long-context tasks, and roleplay.

- **Max output tokens:** 131K
- **Cached input cost:** $0.01/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/glm-4.7-flash

### zai/glm-4.7-flashx

GLM-4.7-FlashX balances high performance with efficiency, making it the perfect lightweight deployment option.

- **Max output tokens:** 128K
- **Cached input cost:** $0.01/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-4.7-flashx

### zai/glm-5

GLM-5 is Z.ai’s new-generation flagship foundation model, designed for agentic engineering and built to deliver reliable productivity on complex system engineering and long-horizon agent tasks. In coding and agent capabilities, GLM-5 achieves state-of-the-art (SOTA) performance among open-source models, with usability in real programming scenarios approaching that of Claude Opus 4.5.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.16/1M
- **Zero data retention:** available
- **HIPAA compliant:** available
- **Details:** https://vercel.com/ai-gateway/models/glm-5

### zai/glm-5-turbo

GLM-5 Turbo is a foundation model deeply optimized for OpenClaw workloads. It has been tuned for the core requirements of OpenClaw tasks since the training phase, strengthening key capabilities such as tool invocation, instruction following, scheduled and persistent tasks, and long-chain execution.

- **Max output tokens:** 131.1K
- **Cached input cost:** $0.24/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-5-turbo

### zai/glm-5.1

GLM-5.1 delivers a major leap in coding capability, with particularly significant gains in handling long-horizon tasks. Unlike previous models built around minute-level interactions, GLM-5.1 can work independently and continuously on a single task for more than 8 hours—autonomously planning, executing, and improving itself throughout the process—ultimately delivering complete, engineering-grade results.

- **Max output tokens:** 64K
- **Cached input cost:** $0.26/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-5.1

### zai/glm-5v-turbo

GLM-5V-Turbo is Z.AI’s first multimodal coding foundation model, built for vision-based coding tasks. It can natively process multimodal inputs such as images, video, and text, while also excelling at long-horizon planning, complex coding, and action execution. Deeply optimized for agent workflows, it works seamlessly with agents such as Claude Code and OpenClaw to complete the full loop of “understand the environment → plan actions → execute tasks”.

- **Max output tokens:** 128K
- **Cached input cost:** $0.24/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-5v-turbo

### zai/glm-4.6v

The GLM-4.6V series is Z.ai’s latest iteration of its multimodal large language models. GLM-4.6V extends its context window to 128K tokens in training and achieves SOTA performance in visual understanding among models of similar parameter scale.

- **Max output tokens:** 24K
- **Cached input cost:** $0.05/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-4.6v

### zai/glm-4.6v-flash

For local deployment and low-latency applications. The GLM-4.6V series is Z.ai’s latest iteration of its multimodal large language models. GLM-4.6V extends its context window to 128K tokens in training and achieves SOTA performance in visual understanding among models of similar parameter scale.

- **Max output tokens:** 24K
- **Cached input cost:** $0.00/1M
- **Details:** https://vercel.com/ai-gateway/models/glm-4.6v-flash
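
Every model in this catalog is addressed by a two-segment `creator/model` slug (e.g. `zai/glm-4.6v-flash`, `xai/grok-imagine-image`). A small sketch of splitting and validating such IDs, assuming only the two-segment convention used throughout this list; the helper and type names are ours, not part of any Vercel API.

```typescript
// Split a catalog slug like "zai/glm-4.6" into its creator and model parts.
interface ModelId {
  creator: string;
  model: string;
}

function parseModelId(slug: string): ModelId {
  const i = slug.indexOf("/");
  // Reject slugs with a missing or empty creator or model segment.
  if (i <= 0 || i === slug.length - 1) {
    throw new Error(`not a creator/model slug: ${slug}`);
  }
  return { creator: slug.slice(0, i), model: slug.slice(i + 1) };
}

const id = parseModelId("zai/glm-4.6v-flash");
console.log(id.creator, id.model); // prints: zai glm-4.6v-flash
```

Using `indexOf` rather than `split("/")` keeps any further slashes inside the model segment intact, which is the safer assumption for forward compatibility.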
