Model Fallbacks
Model failover lets you specify backup models that are tried in order when the primary model fails or is unavailable.
Use the `models` array in `providerOptions.gateway` to specify fallback models:
```ts
import { streamText } from 'ai';

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const result = streamText({
    model: 'openai/gpt-5.2', // Primary model
    prompt,
    providerOptions: {
      gateway: {
        models: ['anthropic/claude-sonnet-4.5', 'google/gemini-3-flash'], // Fallback models
      },
    },
  });

  return result.toUIMessageStreamResponse();
}
```

In this example:
- The gateway first attempts the primary model (`openai/gpt-5.2`)
- If that fails, it tries `anthropic/claude-sonnet-4.5`
- If that also fails, it tries `google/gemini-3-flash`
- The response comes from the first model that succeeds
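Conceptually, the primary model and the `models` array form a single ordered attempt list. A minimal sketch of that ordering (illustrative helper names, not the gateway's internals):

```ts
// Build the ordered list of models the gateway will attempt.
// This mirrors the documented behavior; it is not the gateway's actual code.
function attemptOrder(primary: string, fallbacks: string[]): string[] {
  return [primary, ...fallbacks];
}

const order = attemptOrder('openai/gpt-5.2', [
  'anthropic/claude-sonnet-4.5',
  'google/gemini-3-flash',
]);
// order[0] is tried first; later entries run only if earlier ones fail.
console.log(order);
```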
You can use `models` together with `order` to control both model failover and provider preference:
```ts
import { streamText } from 'ai';

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const result = streamText({
    model: 'openai/gpt-5.2',
    prompt,
    providerOptions: {
      gateway: {
        models: ['openai/gpt-5-nano', 'anthropic/claude-sonnet-4.5'],
        order: ['azure', 'openai'], // Provider preference for each model
      },
    },
  });

  return result.toUIMessageStreamResponse();
}
```

This configuration:
- Tries `openai/gpt-5.2` via Azure, then OpenAI
- If both fail, tries `openai/gpt-5-nano` via Azure first, then OpenAI
- If those fail, tries `anthropic/claude-sonnet-4.5` via available providers
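Combined, the two settings expand into a flat sequence of model/provider attempts. A simplified sketch of that expansion (illustrative only; it assumes every provider in `order` can serve every model, which is not true in practice, e.g. Azure does not host Claude):

```ts
// Expand models x provider preference into the ordered sequence of attempts.
// Simplified: the real gateway skips providers that don't host a given model.
function expandAttempts(
  models: string[],
  order: string[],
): Array<[string, string]> {
  const attempts: Array<[string, string]> = [];
  for (const model of models) {
    for (const provider of order) {
      attempts.push([model, provider]);
    }
  }
  return attempts;
}

const attempts = expandAttempts(
  ['openai/gpt-5.2', 'openai/gpt-5-nano'],
  ['azure', 'openai'],
);
// Attempts run in order: gpt-5.2 via azure, gpt-5.2 via openai,
// then gpt-5-nano via azure, then gpt-5-nano via openai.
console.log(attempts);
```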
When processing a request with model fallbacks:
- The gateway routes the request to the primary model (the `model` parameter)
- For each model, provider routing rules apply (using `order` or `only` if specified)
- If all providers for a model fail, the gateway tries the next model in the `models` array
- The response comes from the first successful model/provider combination
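Put together, the routing behaves like the following loop. This is a self-contained simulation with stubbed providers, not the gateway implementation, and it models only `order` (not `only`):

```ts
type Attempt = { model: string; provider: string };

// Stubbed "call" that succeeds only for healthy model/provider pairs.
// In the real gateway this would be an actual inference request.
function tryCall(attempt: Attempt, healthy: Set<string>): string | null {
  const key = `${attempt.provider}:${attempt.model}`;
  return healthy.has(key) ? `response from ${key}` : null;
}

// Try each model in order; within a model, try providers in preference order.
function route(
  models: string[],
  providerOrder: string[],
  healthy: Set<string>,
): string {
  for (const model of models) {
    for (const provider of providerOrder) {
      const result = tryCall({ model, provider }, healthy);
      if (result !== null) return result; // first success wins
    }
  }
  throw new Error('All model/provider combinations failed');
}

// Primary is down everywhere; the fallback succeeds on its second provider.
const healthy = new Set(['openai:openai/gpt-5-nano']);
console.log(
  route(['openai/gpt-5.2', 'openai/gpt-5-nano'], ['azure', 'openai'], healthy),
);
// → response from openai:openai/gpt-5-nano
```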
Failover happens automatically. To see which model and provider served your request, check the provider metadata.