
Qwen3 Next 80B A3B Thinking

alibaba/qwen3-next-80b-a3b-thinking

Qwen3 Next 80B A3B Thinking is a hybrid Transformer-Mamba reasoning model that combines 80 billion total parameters (3B active per token) with a dedicated thinking mode, achieving strong results on AIME25 while supporting ultra-long contexts of 131.1K tokens.

index.ts

import { streamText } from 'ai'

const result = streamText({
  model: 'alibaba/qwen3-next-80b-a3b-thinking',
  prompt: 'Why is the sky blue?',
})

// Consume the stream; text (including the thinking trace) arrives incrementally.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}

What To Consider When Choosing a Provider

  • Zero Data Retention

    AI Gateway does not currently support Zero Data Retention for this model. See the documentation for models that support ZDR.

  • Authentication

    AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.

Because thinking-mode responses can exceed 32K output tokens for complex reasoning tasks, verify that your provider and application timeout settings accommodate extended generation times before deploying.
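One way to bound generation time is an abort signal on the request. The sketch below assumes the AI SDK's `abortSignal` option on `streamText`; the 10-minute ceiling is an arbitrary example, not a recommendation:

```typescript
// Sketch: cap a single generation so a long thinking trace cannot hang
// the caller indefinitely. Pass these options to streamText from the
// 'ai' package, e.g. streamText(requestOptions).
const TIMEOUT_MS = 10 * 60 * 1000 // arbitrary example ceiling

const requestOptions = {
  model: 'alibaba/qwen3-next-80b-a3b-thinking',
  prompt: 'Why is the sky blue?',
  // AbortSignal.timeout (Node 17.3+) aborts the request at the deadline.
  abortSignal: AbortSignal.timeout(TIMEOUT_MS),
}
```

On abort, the stream ends with an abort error, which your application should catch and surface as a timeout rather than a model failure.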

When to Use Qwen3 Next 80B A3B Thinking

Best For

  • Competitive mathematics and science:

    Rigorous reasoning problems where step-by-step derivation is required

  • Hard coding challenges:

    Competitive programming and algorithmic design that benefit from explicit problem decomposition before code generation

  • Cross-reference long-document analysis:

    Tasks that reason across 100K+ token inputs while maintaining structured thought

  • Tutoring and explanation systems:

    Applications where visible reasoning chains are pedagogically valuable

  • Auditable research workflows:

    Use cases where a transparent inference process allows human review of the model's logic

Consider Alternatives When

  • High-throughput instruction following:

    Use Qwen3-Next-80B-A3B-Instruct for short-to-medium tasks without reasoning overhead

  • Strict token budgets:

    Thinking traces add significant output volume and cost per request

  • Multimodal input required:

    This model is text-only; use a vision-language variant for images or video

  • Real-time latency requirements:

    Extended reasoning generation can't meet hard low-latency response targets

Conclusion

Qwen3 Next 80B A3B Thinking occupies a distinct space: an architecture built for long-context efficiency, dedicated exclusively to extended reasoning. Teams working on hard STEM problems, detailed code analysis, or any domain where a visible reasoning chain adds quality and auditability can adopt it without resorting to a fully dense trillion-parameter alternative.

FAQ

Why is there a dedicated Thinking variant instead of a hybrid mode?

This variant is specialized for complex reasoning. By committing entirely to thinking mode, it avoids the quality compromises that come from training a single model to switch between reasoning and direct-answer behaviors.

What thinking budget should I configure?

The recommended budget is 32,768 tokens for typical queries and up to 81,920 tokens for complex mathematics or coding problems. These are recommendations; actual trace length is determined by the model based on problem complexity.
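The recommended budgets above can be wired into request options. This sketch assumes the AI SDK v5 option name `maxOutputTokens`; the `budgetFor` helper is hypothetical, not part of any library:

```typescript
// Sketch: choose an output-token ceiling by task difficulty, following
// the recommended budgets (32,768 typical; 81,920 for hard math/coding).
const TYPICAL_BUDGET = 32_768
const HARD_BUDGET = 81_920

// Hypothetical helper: map a coarse difficulty label to a token budget.
function budgetFor(task: 'typical' | 'hard'): number {
  return task === 'hard' ? HARD_BUDGET : TYPICAL_BUDGET
}

const options = {
  model: 'alibaba/qwen3-next-80b-a3b-thinking',
  prompt: 'Prove that the sum of two even integers is even.',
  // Option name assumes AI SDK v5 (earlier versions used maxTokens).
  maxOutputTokens: budgetFor('hard'),
}
```

Since the budget caps thinking plus answer together, a ceiling that is too tight can truncate the trace before the final answer is emitted.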

How does it perform on AIME25?

The Thinking variant outperforms the Instruct variant's 69.5% on AIME25, and also surpasses Qwen3-30B-A3B-Thinking-2507 and several proprietary reasoning models in Qwen's published comparisons on this benchmark. See https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2840914_2&modelId=qwen3-next-80b-a3b-thinking for specific scores.

Does the hybrid architecture help with long reasoning traces?

Yes. The linear-attention Gated DeltaNet layers allow the model to handle sequences that grow long during reasoning (prompt plus extended thinking trace) at sub-quadratic cost compared to full attention. This keeps generation efficient even for hard problems that trigger long traces.

How long a context does it support?

The native context is 131.1K tokens, extensible to approximately one million tokens via YaRN RoPE scaling. This allows the model to reason over very long input documents alongside its own thinking trace.

How do I parse the thinking output?

The model emits its reasoning between <think> and </think> tags before the final answer. If the opening tag is missing, find the closing </think> token (see Qwen's reference parsers) and split there into thinking content and final response.
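The splitting logic above can be sketched as a small helper. This is an illustrative implementation, not Qwen's reference parser; `splitThinking` is a name chosen here:

```typescript
// Sketch: split a raw completion into thinking trace and final answer.
// Handles a full <think>...</think> block and the case where the opening
// tag is absent (the trace then starts at position 0).
function splitThinking(raw: string): { thinking: string; answer: string } {
  const close = raw.indexOf('</think>')
  if (close === -1) {
    // No closing tag: treat the whole output as the final answer.
    return { thinking: '', answer: raw.trim() }
  }
  const open = raw.indexOf('<think>')
  const start = open === -1 ? 0 : open + '<think>'.length
  return {
    thinking: raw.slice(start, close).trim(),
    answer: raw.slice(close + '</think>'.length).trim(),
  }
}
```

For example, `splitThinking('Reasoning here</think>Blue.')` yields `{ thinking: 'Reasoning here', answer: 'Blue.' }` even though the opening tag is missing.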

How does it compare to Qwen3-Max-Thinking?

Both models support extended reasoning, but they represent different architectural tradeoffs. Qwen3 Next 80B A3B Thinking uses a sparse hybrid architecture optimized for throughput on long sequences; Qwen3-Max-Thinking uses a trillion-parameter model with autonomous tool invocation. The right choice depends on whether autonomous search/code-execution or architecture-driven efficiency is more valuable for your workload.