# Nemotron 3 Nano 30B A3B
Nemotron 3 Nano 30B A3B is a sparse hybrid Mamba-Transformer mixture-of-experts (MoE) model with 30B total parameters, of which only about 3B are active per token. It supports a 262.1K-token context window, with throughput closer to that of a 3B dense model than a 30B one.
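The "3B active of 30B total" figure comes from MoE routing: a gating network scores the experts and only the top-k run for each token. A minimal illustrative sketch of top-k expert selection (not Nemotron's actual router; expert count and k are made up for the example):

```typescript
// Illustrative top-k MoE routing: only the k highest-scoring experts
// process a given token, so most parameters stay inactive per token.
function topKExperts(gateLogits: number[], k: number): number[] {
  return gateLogits
    .map((logit, expert) => ({ logit, expert }))
    .sort((a, b) => b.logit - a.logit)
    .slice(0, k)
    .map((e) => e.expert)
}

// 8 experts, route the token to the top 2 by gate score
const logits = [0.1, 2.3, -0.5, 1.7, 0.0, 0.9, -1.2, 0.4]
console.log(topKExperts(logits, 2)) // → [1, 3]
```

With k experts active out of E total, the active parameter count is roughly k/E of the expert parameters plus the shared (attention/Mamba) layers, which is how a 30B-total model runs at near-3B-dense cost.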
```ts
import { streamText } from 'ai'

const result = streamText({
  model: 'nvidia/nemotron-3-nano-30b-a3b',
  prompt: 'Why is the sky blue?',
})
```

## Providers
Requests can be routed across multiple providers. Copy a provider slug to set your preference; see the docs for more info. By using a provider, you agree to its terms, listed under Legal.