
Trinity Large Thinking

arcee-ai/trinity-large-thinking

Trinity Large Thinking is a reasoning-focused variant in Arcee AI's Trinity Large family: a 398B-parameter sparse mixture-of-experts model with about 13B active parameters per token, built on Trinity Large Base and emphasizing extended chain-of-thought reasoning.

Reasoning · Tool Use · Implicit Caching
index.ts
import { streamText } from 'ai'

const result = streamText({
  model: 'arcee-ai/trinity-large-thinking',
  prompt: 'Why is the sky blue?',
})

// Consume the stream; reasoning models can emit long completions.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}

What To Consider When Choosing a Provider

  • Zero Data Retention

    AI Gateway does not currently support Zero Data Retention for this model. See the documentation for models that support ZDR.

  • Authentication

    AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.

Reasoning traces add output tokens. Budget for longer completions, stream responses, and weigh $0.25 per million input tokens and $0.90 per million output tokens against your cost model.
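As a rough sketch of that budgeting, using the listed rates (`estimateCostUSD` is an illustrative helper, not part of any SDK):

```typescript
// Listed rates for arcee-ai/trinity-large-thinking via AI Gateway
const INPUT_RATE_PER_M = 0.25 // USD per million input tokens
const OUTPUT_RATE_PER_M = 0.9 // USD per million output tokens

// Illustrative helper: estimate a request's cost from its token counts
export function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_RATE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_RATE_PER_M
  )
}

// Reasoning traces inflate output tokens: the same prompt costs more
// when the completion carries a long chain of thought.
const shortAnswer = estimateCostUSD(2_000, 500)
const tracedAnswer = estimateCostUSD(2_000, 4_000)
```

The token counts above are made-up illustrations; plug in the usage numbers your own requests report.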

When to Use Trinity Large Thinking

Best For

  • Auditable enterprise workflows:

    Step-by-step reasoning you can log and review

  • Analytical error reduction:

    Tasks where visible intermediate steps lower error rates

  • Traceable code review:

    Debugging or refactoring where the model's steps run alongside your own review

  • Inspectable decision flows:

    Multi-step pipelines where each stage must be reviewable

Consider Alternatives When

  • Short single-turn replies:

    Trinity Large Preview answers faster with fewer tokens

  • Minimal output budget:

    Reasoning traces add tokens that may not fit your cost model

  • Cost-dominant workloads:

    Trinity Mini meets a lower price point when its quality bar is enough

Conclusion

Trinity Large Thinking adds trace-oriented, post-trained reasoning on top of Arcee AI's Trinity Large Base stack in AI Gateway. Choose it when auditable steps matter; choose Trinity Large Preview when you do not need that overhead.

FAQ

How does Trinity Large Thinking differ from Trinity Large Preview?

Thinking emits extended chain-of-thought reasoning; Preview does not emphasize trace output. Thinking runs as a 398B sparse MoE with about 13B active parameters per token. Preview is a 400B-parameter (13B active) MoE aimed at long-context reasoning workloads. Choose Thinking when you need explicit reasoning traces; choose Preview when you do not.

Does reasoning increase token usage?

Yes. Intermediate steps count in the output, so expect higher token use than a short answer from the base preview model. Factor that into cost and latency planning.

When should I choose Trinity Large Thinking over Trinity Mini?

When you need large-stack reasoning traces more than Mini's cost profile. Trinity Mini uses 26B total parameters with 3B active and fits high-volume, budget-sensitive inference. Trinity Large Thinking fits heavier reasoning and audit-style review, not minimal token use.

Do I need a separate Arcee AI account or API key?

No. Use your AI Gateway API key or an OIDC token. You don't need a separate provider account.

Can I call Trinity Large Thinking through the AI SDK?

Yes. Set model to arcee-ai/trinity-large-thinking in the AI SDK's streamText or generateText call. AI Gateway also exposes OpenAI Chat Completions, OpenAI Responses, Anthropic Messages, and OpenResponses-compatible interfaces.
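For the OpenAI Chat Completions path, a minimal sketch (the base URL and the `AI_GATEWAY_API_KEY` variable name are assumptions; check your gateway configuration for the actual values):

```typescript
// Build an OpenAI Chat Completions payload routed to this model.
export function buildChatRequest(prompt: string) {
  return {
    model: 'arcee-ai/trinity-large-thinking',
    messages: [{ role: 'user' as const, content: prompt }],
  }
}

// Sketch of the call itself; base URL and env var name are assumptions.
export async function callGateway(prompt: string): Promise<string> {
  const res = await fetch('https://ai-gateway.vercel.sh/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(buildChatRequest(prompt)),
  })
  const data = await res.json()
  return data.choices[0].message.content
}
```

Because the payload is plain Chat Completions JSON, existing OpenAI client libraries pointed at the gateway base URL should also work unchanged.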

Does Trinity Large Thinking help with audit or compliance logging?

It can help. The model can surface intermediate steps you log next to the final answer. You still own retention, access control, and policy for those logs.

Can I monitor token usage and cost?

Yes. Token usage, latency, and cost show in your AI Gateway dashboard for each request without extra instrumentation.