Mixtral MoE 8x22B Instruct
Mixtral MoE 8x22B Instruct is a sparse mixture-of-experts model with 141B total parameters, of which roughly 39B are active per forward pass. It offers a 65.5K-token context window, native function calling, and is released under the Apache 2.0 license.
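Below is a minimal streaming example using the AI SDK's streamText; it assumes the 'mistral/mixtral-8x22b-instruct' model ID resolves through the provider or gateway configured in your project.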
import { streamText } from 'ai';

const result = streamText({
  model: 'mistral/mixtral-8x22b-instruct',
  prompt: 'Why is the sky blue?',
});
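The call returns a streaming result object. A minimal sketch of consuming its text stream, assuming a Node.js environment with top-level await:

for await (const textPart of result.textStream) {
  // Write each chunk of the model's response to stdout as it arrives.
  process.stdout.write(textPart);
}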