Streaming responses from LLMs

Learn how to use the AI SDK to stream LLM responses.

DX Team
3 min read
Last updated November 19, 2025

AI providers can be slow when producing responses, but many make their responses available in chunks as they're processed. Streaming enables you to show users those chunks of data as they arrive rather than waiting for the full response, improving the perceived speed of AI-powered apps.
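To see the idea in isolation, here is a minimal, provider-free sketch using web-standard streams (available globally in Node.js 18+). The `makeChunkedStream` and `readChunks` helpers are illustrative names, not part of any SDK:

```typescript
// A toy ReadableStream that emits chunks one at a time, standing in
// for an AI provider's chunked response.
function makeChunkedStream(chunks: string[]): ReadableStream<string> {
  return new ReadableStream<string>({
    async start(controller) {
      for (const chunk of chunks) {
        controller.enqueue(chunk);
        await new Promise((resolve) => setTimeout(resolve, 10)); // simulate network latency
      }
      controller.close();
    },
  });
}

// Read the stream chunk by chunk, the way a UI would in order to
// render partial output as soon as it arrives.
async function readChunks(stream: ReadableStream<string>): Promise<string[]> {
  const reader = stream.getReader();
  const received: string[] = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    received.push(value); // each chunk could be shown to the user immediately
  }
  return received;
}

const parts = await readChunks(makeChunkedStream(['This ', 'is a ', 'test.']));
console.log(parts.join('')); // "This is a test."
```

Each chunk is available as soon as it is enqueued, so the consumer never waits for the full payload before showing something to the user.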

You can use Vercel's AI SDK to stream responses from LLMs and AI APIs. It reduces the boilerplate necessary for streaming responses from AI providers and allows you to change AI providers with a few lines of code, rather than rewriting your entire application.

This example demonstrates a function that sends a message to one of OpenAI's GPT models and streams the response:

Before you begin, ensure you're using Node.js 18 or later.

  1. Install the ai and @ai-sdk/openai packages:
terminal
pnpm install ai @ai-sdk/openai
  2. Add your OpenAI API key to a .env.local file under the name OPENAI_API_KEY. See the AI SDK docs for more information on how to do this.
  3. Add the following code to your app:
app/api/chat-example/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// This method must be named GET
export async function GET() {
  // Make a request to OpenAI's API based on
  // a placeholder prompt
  const response = streamText({
    model: openai('gpt-4o-mini'),
    messages: [{ role: 'user', content: 'Say this is a test.' }],
  });
  // Respond with the stream
  return response.toTextStreamResponse({
    headers: {
      'Content-Type': 'text/event-stream',
    },
  });
}
The handler is identical if you write it in JavaScript (app/api/chat-example/route.js). If the rest of your app uses the pages directory, note that streaming functions must still be defined in an app directory. The same code can also run as a standalone function at api/chat-example.ts or api/chat-example.js.
  4. Build your app and visit localhost:3000/api/chat-example. You should see the text "This is a test." in the browser.
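To consume the endpoint from a browser or script, read the response body incrementally instead of awaiting the full text. A minimal sketch, in which `streamToText` is an illustrative helper of ours (not an AI SDK export) and a locally constructed Response stands in for the real fetch('/api/chat-example') call so the snippet runs anywhere:

```typescript
// Decode a streamed text response chunk by chunk. In your app you would
// get `response` from fetch('/api/chat-example').
async function streamToText(
  response: Response,
  onChunk: (text: string) => void,
): Promise<string> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let full = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true }); // handles multi-byte chars split across chunks
    full += text;
    onChunk(text); // e.g. append to the DOM as it arrives
  }
  return full;
}

// Stand-in for the network: a Response wrapping a chunked byte stream.
const encoder = new TextEncoder();
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    for (const piece of ['This is ', 'a test.']) {
      controller.enqueue(encoder.encode(piece));
    }
    controller.close();
  },
});

const received: string[] = [];
const text = await streamToText(new Response(body), (t) => received.push(t));
console.log(text); // "This is a test."
```

Passing { stream: true } to TextDecoder.decode matters here: it buffers any multi-byte character that is split across two chunks rather than emitting a replacement character.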
