DeepSeek V3.2
DeepSeek V3.2 Thinking is the extended reasoning variant of the standard DeepSeek-V3.2 model. Available on AI Gateway since December 1, 2025, it generates up to 163K tokens of chain-of-thought reasoning for complex analytical, scientific, and multi-step problem-solving tasks.
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'deepseek/deepseek-v3.2-thinking',
  prompt: 'Why is the sky blue?',
})
```

Frequently Asked Questions
Does DeepSeek V3.2 support tool calling?
No. The Thinking variant is a pure reasoning engine without tool-use support. For tool calls alongside reasoning, use the standard DeepSeek-V3.2 model.
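One way to handle this split in practice is to route requests to the standard variant whenever tools are involved. The sketch below assumes the common OpenAI-style function-tool schema and assumes `deepseek/deepseek-v3.2` is the standard model's ID; neither is confirmed by this page.

```typescript
// Sketch only: an OpenAI-style function tool definition. Whether the
// gateway accepts exactly this shape is an assumption, not documented here.
const weatherTool = {
  type: 'function' as const,
  function: {
    name: 'get_weather',
    description: 'Look up current weather for a city',
    parameters: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city'],
    },
  },
}

// Route to the standard variant when tool use is required, since the
// Thinking variant does not support tool calls.
function pickModel(needsTools: boolean): string {
  return needsTools
    ? 'deepseek/deepseek-v3.2' // standard variant ID (assumed)
    : 'deepseek/deepseek-v3.2-thinking'
}
```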
What is the maximum output token budget for DeepSeek V3.2?
Up to 163K tokens per response, compared to 8K for the standard V3.2 chat variant.
When would I use DeepSeek V3.2 over DeepSeek-R1?
Choose DeepSeek V3.2 Thinking when you want the newer V3.2 stack and a reasoning output budget of up to 163K tokens. DeepSeek-R1 is MIT-licensed; if license terms matter for your deployment, confirm the license of the model you pick.
Why does the output token budget matter for reasoning models?
Reasoning models generate a chain-of-thought trace before the final answer. Complex problems can require thousands of reasoning tokens. A budget of 163K tokens provides headroom for multi-step derivations that would exceed an 8K limit.
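The headroom difference can be made concrete with a rough back-of-the-envelope check. The chars-per-token ratio below is a common heuristic, not the model's real tokenizer, and the helper names are illustrative only.

```typescript
// Rough heuristic: ~4 characters per token (illustrative, not a tokenizer).
const CHARS_PER_TOKEN = 4

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN)
}

function fitsBudget(reasoningTokens: number, budget: number): boolean {
  return reasoningTokens <= budget
}

// A long multi-step derivation of ~100K characters of chain-of-thought:
const traceTokens = estimateTokens('x'.repeat(100_000)) // ~25K tokens

console.log(fitsBudget(traceTokens, 8_000))   // exceeds an 8K output cap
console.log(fitsBudget(traceTokens, 163_000)) // fits within the 163K budget
```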
How do I access DeepSeek V3.2 through AI Gateway?
Use the model ID `deepseek/deepseek-v3.2-thinking` with an AI Gateway API key or OIDC token. No separate DeepSeek platform account is required.
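For callers not using the AI SDK, a raw HTTP request is one alternative. This is a minimal sketch assuming the gateway exposes an OpenAI-compatible chat completions endpoint; the endpoint URL and the `AI_GATEWAY_API_KEY` env var name are assumptions here, not taken from this page.

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string }

// Build the OpenAI-style request payload for the Thinking model.
function buildRequestBody(prompt: string, maxTokens: number) {
  return {
    model: 'deepseek/deepseek-v3.2-thinking',
    messages: [{ role: 'user', content: prompt }] as ChatMessage[],
    max_tokens: maxTokens, // up to the 163K output budget
  }
}

// Send the request with a Bearer API key (endpoint URL is assumed).
async function ask(prompt: string): Promise<Response> {
  return fetch('https://ai-gateway.vercel.sh/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(buildRequestBody(prompt, 163_000)),
  })
}
```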