
Nemotron 3 Nano 30B A3B

Nemotron 3 Nano 30B A3B is a sparse hybrid Mamba-Transformer mixture-of-experts (MoE) model with 30B total parameters, of which only 3B are active per token. It supports a 262.1K-token context window and delivers throughput closer to that of a 3B dense model than a 30B one.

Reasoning
index.ts

```ts
import { streamText } from 'ai'

const result = streamText({
  model: 'nvidia/nemotron-3-nano-30b-a3b',
  prompt: 'Why is the sky blue?',
})

// Print the response as it streams in
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}
```