Streaming
Learn how to stream responses from Vercel Functions.

AI providers can be slow to produce a complete response, but many make their output available in chunks as it's generated. Streaming lets you show users those chunks as they arrive rather than waiting for the full response, improving the perceived speed of AI-powered apps.
Vercel recommends using the AI SDK to stream responses from LLMs and AI APIs. It reduces the boilerplate needed to stream responses from AI providers and lets you switch providers with a few lines of code instead of rewriting your entire application.
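For example, switching providers typically only means changing the provider package and the model reference passed to `streamText`. A minimal sketch, assuming you have installed the `@ai-sdk/anthropic` provider package and set its `ANTHROPIC_API_KEY` environment variable:

```ts
import { streamText } from 'ai';
// Only the provider import and the model reference change;
// the rest of the streaming call stays the same.
import { anthropic } from '@ai-sdk/anthropic';

const result = streamText({
  model: anthropic('claude-3-5-haiku-latest'),
  messages: [{ role: 'user', content: 'What is the capital of Australia?' }],
});
```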
The following example shows how to send a message to one of OpenAI's models and stream the response. Before you begin:
- You should understand how to set up a Vercel Function. See the Functions quickstart for more information.
- You should also have a fundamental understanding of how streaming works on Vercel. To learn more, see What is streaming?.
- You should be using Node.js 18 or later and the latest version of the Vercel CLI.
- You should copy your OpenAI API key into the `.env.local` file with the name `OPENAI_API_KEY` (see the sketch after this list). See the AI SDK docs for more information on how to do this.
- Install the `ai` and `@ai-sdk/openai` packages: `pnpm i ai @ai-sdk/openai`
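A minimal `.env.local` at the project root might look like the following, with a placeholder value in place of your real key:

```bash
# .env.local (keep this file out of source control)
OPENAI_API_KEY=sk-...
```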
```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// This method must be named GET
export async function GET() {
  // Make a request to OpenAI's API based on
  // a placeholder prompt
  const response = streamText({
    model: openai('gpt-4o-mini'),
    messages: [{ role: 'user', content: 'What is the capital of Australia?' }],
  });

  // Respond with the stream
  return response.toTextStreamResponse({
    headers: {
      'Content-Type': 'text/event-stream',
    },
  });
}
```
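On the client, you can read the streamed text chunk by chunk as it arrives. A minimal sketch, assuming the route above is served at `/api/chat` (adjust the path to match your project):

```ts
// Reads the streamed response incrementally using the Fetch API.
// The /api/chat path is an assumption; use your route's actual path.
async function readStream() {
  const res = await fetch('/api/chat');
  if (!res.body) throw new Error('No response body');

  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk is available as soon as the model produces it
    console.log(decoder.decode(value, { stream: true }));
  }
}
```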
If your workload requires longer durations, you should consider enabling fluid compute, which has higher default max durations and limits across plans.
Maximum durations can be configured for Node.js functions to enable streaming responses for longer periods. See max durations for more information.
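With the Next.js App Router, one way to do this is a route segment config export. A minimal sketch, assuming a 60-second duration is within your plan's limits and that the handler lives at a path like `app/api/chat/route.ts` (the path is an assumption):

```ts
// Allow this route's function to run, and stream, for up to 60 seconds
export const maxDuration = 60;

export async function GET() {
  // Replace with the streaming handler shown above
  return new Response('ok');
}
```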
You can also stream responses from Vercel Functions that use the Python runtime.
When your function streams, it can take advantage of extended runtime logs, which show the real-time output of your function in addition to larger and more frequent log entries. Because of this potential increase in frequency and the change in format, your Log Drains may be affected; make sure your ingestion can handle both the new format and the higher frequency.