You can now access DeepSeek's latest models, DeepSeek V3.2 and DeepSeek V3.2 Speciale, via Vercel's AI Gateway with no other provider accounts required.
DeepSeek V3.2 supports combined thinking and tool use, handling agent-style operations (tool calls) in both reasoning and non-reasoning modes. DeepSeek V3.2 Speciale is optimized for maximal reasoning performance and is suited to complex tasks, but it requires higher token usage and does not support tool use.
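As an illustration of combined reasoning and tool use, here is a minimal sketch with the AI SDK, assuming AI SDK 5's `tool()` helper, a Zod schema, and the `stepCountIs` stop condition; the `getWeather` tool and the prompt are illustrative, not part of the announcement, and the model ID is one of those listed below.

```ts
import { generateText, tool, stepCountIs } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: 'deepseek/deepseek-v3.2-thinking',
  tools: {
    // Illustrative tool; any tool definition works the same way.
    getWeather: tool({
      description: 'Get the current temperature for a city',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperatureC: 21 }),
    }),
  },
  // Allow a follow-up step so the model can use the tool result in its answer.
  stopWhen: stepCountIs(2),
  prompt: 'What is the weather in Amsterdam right now?',
});

console.log(result.text);
```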
To use the DeepSeek V3.2 models, set model to one of the following IDs in the AI SDK:
- Non-thinking: `deepseek/deepseek-v3.2`
- Thinking: `deepseek/deepseek-v3.2-thinking`
- Speciale: `deepseek/deepseek-v3.2-speciale`
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'deepseek/deepseek-v3.2-speciale',
  prompt: `Design a self-contained, step-by-step solution to a novel math–algorithm hybrid problem: prove correctness, derive complexity, and construct an optimal implementation for the general case.`,
});
```

AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Read the docs, view the AI Gateway model leaderboard, or use DeepSeek V3.2 models directly in our model playground.