
Nemotron 3 Nano 30B A3B

Nemotron 3 Nano 30B A3B is a sparse hybrid Mamba-Transformer mixture-of-experts (MoE) model with 30B total parameters, of which only 3B are active per token. It supports a 262,144-token (256K) context window while delivering throughput closer to that of a 3B dense model than a 30B one.
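
As a rough illustration of that context budget, the sketch below estimates whether a prompt fits in the window using a characters-per-token heuristic; the ~4 characters/token ratio and the reserved output budget are assumptions, not properties of the model's tokenizer.

// Rough check that a prompt fits the 262,144-token context window.
// Assumes ~4 characters per token (a common heuristic, not the model's tokenizer).
const CONTEXT_WINDOW = 262_144
const CHARS_PER_TOKEN = 4

function fitsInContext(prompt: string, reservedForOutput = 4_096): boolean {
  const estimatedTokens = Math.ceil(prompt.length / CHARS_PER_TOKEN)
  return estimatedTokens + reservedForOutput <= CONTEXT_WINDOW
}

console.log(fitsInContext('Why is the sky blue?')) // true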

Reasoning
index.ts
import { streamText } from 'ai'

const result = streamText({
  model: 'nvidia/nemotron-3-nano-30b-a3b',
  prompt: 'Why is the sky blue?',
})

// Print the response as it streams in.
for await (const textPart of result.textStream) process.stdout.write(textPart)
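
For a non-streaming call, a minimal sketch using generateText from the same 'ai' package (the prompt and model string mirror the example above):

import { generateText } from 'ai'

// Await the complete response instead of streaming it.
const { text } = await generateText({
  model: 'nvidia/nemotron-3-nano-30b-a3b',
  prompt: 'Why is the sky blue?',
})

console.log(text)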

More models by NVIDIA

Context 256K · Latency 0.3s · Throughput 157 tps · Input $0.15/M tokens · Output $0.65/M tokens · Providers: Amazon Bedrock · Released 03/18/2026
Context 131K · Latency 0.2s · Throughput 139 tps · Input $0.06/M tokens · Output $0.23/M tokens · Providers: Amazon Bedrock, DeepInfra · Released 08/18/2025
Context 131K · Latency 0.2s · Input $0.20/M tokens · Output $0.60/M tokens · Providers: Amazon Bedrock, DeepInfra · Released 12/01/2024
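
To show how per-million-token pricing translates into request cost, here is a minimal sketch; the rates are taken from the first row above, while the token counts are made-up placeholders.

// Estimate the cost of one request at per-million-token rates.
// Rates ($0.15/M input, $0.65/M output) come from the first row above;
// the token counts are placeholder values.
const INPUT_RATE = 0.15 / 1_000_000   // USD per input token
const OUTPUT_RATE = 0.65 / 1_000_000  // USD per output token

function requestCost(inputTokens: number, outputTokens: number): number {
  return inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE
}

console.log(requestCost(2_000, 500).toFixed(6)) // "0.000625"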