In this tutorial, you will build and deploy an AI chat agent using Next.js and the AI SDK that:
- Engages in natural conversations with users about the weather
- Automatically calls a weather API tool when users ask about weather conditions
- Streams responses in real-time for a smooth user experience
- Integrates with a backend weather service built with Express, FastAPI, or Nitro
Before you begin, you will need:
- Node.js and pnpm installed locally
- A Vercel account and project with AI Gateway access
- AI Gateway authentication with an OIDC token configured with your Vercel project or an AI Gateway API key
- One of the backend weather APIs running (Express, FastAPI, or Nitro)
- Basic understanding of Next.js and React
Initialize a new Next.js project with the App Router:

```bash
pnpm create next-app@latest nextjs-agent
```
When prompted, select the following options:
- TypeScript: Yes
- ESLint: Yes
- Tailwind CSS: Yes
- App Router: Yes
- Use `src/` directory: No
- Import alias: No
Navigate to your project directory with `cd nextjs-agent`.
Install the AI SDK and required packages:

```bash
pnpm i ai @ai-sdk/react zod react-markdown
```
These packages provide:
- `ai`: Core AI SDK with agent and tool calling capabilities (version 5 required)
- `@ai-sdk/react`: React hooks for streaming chat interfaces (version 2 required)
- `zod`: Schema validation for tool inputs
- `react-markdown`: Render formatted responses in the chat UI
Option 1: Use your Vercel project's OIDC token
Link your code to a Vercel project and pull the environment variables:

```bash
vercel link
vercel env pull
```
Option 2: Create an AI Gateway API key
Go to your Vercel team's AI Gateway API keys dashboard and create an API key. Create a `.env.local` file in your project root with your AI Gateway key:

```bash
AI_GATEWAY_API_KEY=your_ai_gateway_key_here
```
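As an optional convenience (not part of the tutorial's required setup), you can fail fast when the key is missing by checking the environment at startup. The helper name `requireEnv` is hypothetical; pass whichever variable name you put in `.env.local`:

```typescript
// Hypothetical startup check: throws early if a required environment
// variable (e.g. your AI Gateway key) has not been configured.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```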
Create `lib/agent.ts` and add the agent configuration with a weather tool:
```typescript
import {
  Experimental_Agent as Agent,
  Experimental_InferAgentUIMessage as InferAgentUIMessage,
  stepCountIs,
  tool,
} from 'ai';
import { z } from 'zod';

export const weatherAgent = new Agent({
  model: 'openai/gpt-5',
  system:
    'You are a helpful weather assistant. Use the getWeather tool to fetch current weather information for cities.',
  tools: {
    getWeather: tool({
      description: 'Get the current weather for a city',
      inputSchema: z.object({
        city: z.string().describe('The city name to get weather for'),
      }),
      execute: async ({ city }) => {
        try {
          const response = await fetch(
            `http://localhost:3001/api/weather/${encodeURIComponent(city)}`
          );

          if (!response.ok) {
            throw new Error(`Failed to fetch weather: ${response.statusText}`);
          }

          const data = await response.json();
          return data;
        } catch (error) {
          return {
            error: `Unable to fetch weather data for ${city}. Make sure the weather API is running on port 3001.`,
          };
        }
      },
    }),
  },
  stopWhen: stepCountIs(10),
});

export type WeatherAgentUIMessage = InferAgentUIMessage<typeof weatherAgent>;
```
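If you want to unit-test the tool's request logic without running the agent, the URL construction can be factored into a small helper. This is an optional sketch, not part of the AI SDK, and `buildWeatherUrl` is a hypothetical name:

```typescript
// Hypothetical helper mirroring the URL the getWeather tool requests;
// extracting it makes the city-name encoding easy to unit-test.
function buildWeatherUrl(
  city: string,
  baseUrl = 'http://localhost:3001'
): string {
  return `${baseUrl}/api/weather/${encodeURIComponent(city)}`;
}
```

Cities with spaces or accented characters are percent-encoded, so they reach the backend as a single, valid path segment.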
This agent is configured as follows:
- Uses `GPT-5` as the underlying model
- Defines a `getWeather` tool that calls your backend weather API
- Uses `zod` schema validation for type-safe tool inputs
- Includes error handling for API failures
- Limits the agent to 10 reasoning steps to prevent infinite loops
Create `app/api/chat/route.ts` to handle agent requests:
```typescript
import { weatherAgent } from '@/lib/agent';

export async function POST(request: Request) {
  const body = await request.json();

  // Chat interface using agent.respond()
  return weatherAgent.respond({
    messages: body.messages,
  });
}
```
The `respond()` method handles the complete agent workflow:
- Processes conversation history
- Determines when to call tools
- Streams responses back to the client
- Manages multi-turn conversations
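For reference, `body.messages` is an array of UI messages. The sketch below shows a simplified shape; the real `UIMessage` type from `ai` carries additional fields and part types, so treat this as an illustration only:

```typescript
// Simplified sketch of the chat request body; the actual UIMessage type
// from the AI SDK has more fields (metadata, tool parts, and so on).
type ChatRequestBody = {
  messages: Array<{
    id: string;
    role: 'user' | 'assistant' | 'system';
    parts: Array<{ type: 'text'; text: string }>;
  }>;
};

const exampleBody: ChatRequestBody = {
  messages: [
    {
      id: '1',
      role: 'user',
      parts: [{ type: 'text', text: "What's the weather in London?" }],
    },
  ],
};
```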
Update `app/page.tsx` to create an interactive chat interface:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';
import ReactMarkdown from 'react-markdown';

export default function Page() {
  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/chat',
    }),
  });
  const [input, setInput] = useState('');

  return (
    <div
      style={{
        display: 'flex',
        flexDirection: 'column',
        height: '100vh',
        backgroundColor: '#ffffff',
        color: '#000000',
        fontFamily: 'system-ui, sans-serif',
      }}
    >
      <div
        style={{
          padding: '16px',
          borderBottom: '1px solid #e5e5e5',
          display: 'flex',
          alignItems: 'center',
          gap: '12px',
        }}
      >
        <h1 style={{ margin: 0, fontSize: '18px', fontWeight: '600' }}>
          Weather Agent
        </h1>
      </div>

      <div
        style={{
          flex: 1,
          overflowY: 'auto',
          padding: '16px',
          display: 'flex',
          flexDirection: 'column',
          gap: '12px',
        }}
      >
        {messages.map(message => (
          <div
            key={message.id}
            style={{
              display: 'flex',
              justifyContent:
                message.role === 'user' ? 'flex-end' : 'flex-start',
            }}
          >
            <div
              style={{
                maxWidth: '80%',
                padding: '12px 16px',
                borderRadius: '16px',
                backgroundColor:
                  message.role === 'user' ? '#f0f0f0' : 'transparent',
              }}
            >
              {message.parts.map((part, index) =>
                part.type === 'text' ? (
                  <div key={index}>
                    <ReactMarkdown
                      components={{
                        p: ({ children }: any) => (
                          <p style={{ margin: '0 0 8px 0' }}>{children}</p>
                        ),
                        ul: ({ children }: any) => (
                          <ul
                            style={{ margin: '0 0 8px 0', paddingLeft: '20px' }}
                          >
                            {children}
                          </ul>
                        ),
                        li: ({ children }: any) => (
                          <li style={{ marginBottom: '4px' }}>{children}</li>
                        ),
                        strong: ({ children }: any) => (
                          <strong style={{ fontWeight: '600' }}>
                            {children}
                          </strong>
                        ),
                      }}
                    >
                      {part.text}
                    </ReactMarkdown>
                  </div>
                ) : null,
              )}
            </div>
          </div>
        ))}
        {status === 'streaming' && (
          <div
            style={{
              display: 'flex',
              justifyContent: 'flex-start',
            }}
          >
            <div
              style={{
                padding: '12px 16px',
                borderRadius: '16px',
              }}
            >
              <div
                style={{
                  display: 'flex',
                  gap: '4px',
                  alignItems: 'center',
                }}
              >
                <div
                  style={{
                    width: '6px',
                    height: '6px',
                    borderRadius: '50%',
                    backgroundColor: '#999',
                    animation: 'pulse 1.4s ease-in-out infinite',
                  }}
                />
                <div
                  style={{
                    width: '6px',
                    height: '6px',
                    borderRadius: '50%',
                    backgroundColor: '#999',
                    animation: 'pulse 1.4s ease-in-out 0.2s infinite',
                  }}
                />
                <div
                  style={{
                    width: '6px',
                    height: '6px',
                    borderRadius: '50%',
                    backgroundColor: '#999',
                    animation: 'pulse 1.4s ease-in-out 0.4s infinite',
                  }}
                />
              </div>
            </div>
          </div>
        )}
      </div>

      <style>{`
        @keyframes pulse {
          0%, 80%, 100% { opacity: 0.3; transform: scale(0.8); }
          40% { opacity: 1; transform: scale(1); }
        }
      `}</style>

      <form
        onSubmit={e => {
          e.preventDefault();
          if (input.trim()) {
            sendMessage({ text: input });
            setInput('');
          }
        }}
        style={{
          padding: '16px',
          borderTop: '1px solid #e5e5e5',
          display: 'flex',
          gap: '8px',
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          disabled={status !== 'ready'}
          placeholder="Send a message..."
          style={{
            flex: 1,
            padding: '12px 16px',
            borderRadius: '24px',
            border: '1px solid #e5e5e5',
            backgroundColor: '#ffffff',
            color: '#000000',
            fontSize: '14px',
            outline: 'none',
          }}
        />
        <button
          type="submit"
          disabled={status !== 'ready'}
          style={{
            width: '40px',
            height: '40px',
            borderRadius: '50%',
            border: 'none',
            backgroundColor: status !== 'ready' ? '#e5e5e5' : '#000000',
            color: '#ffffff',
            cursor: status !== 'ready' ? 'not-allowed' : 'pointer',
            fontSize: '18px',
            display: 'flex',
            alignItems: 'center',
            justifyContent: 'center',
          }}
        >
          ↑
        </button>
      </form>
    </div>
  );
}
```
This chat UI provides:
- Real-time streaming with loading indicators
- Markdown rendering for formatted responses
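The page renders only `text` parts. If you also want to surface tool activity while the agent works, a renderer can branch on the part type. The sketch below assumes the AI SDK 5 convention of naming tool parts `tool-<toolName>`; treat the exact type strings and the `partLabel` helper as assumptions, not confirmed API:

```typescript
// Hypothetical part-to-label mapper: text parts render their content,
// tool parts render a short status line, anything else is skipped.
type UIPartSketch = { type: string; text?: string };

function partLabel(part: UIPartSketch): string {
  if (part.type === 'text' && part.text) return part.text;
  if (part.type.startsWith('tool-')) {
    return `[calling ${part.type.slice('tool-'.length)}…]`;
  }
  return '';
}
```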
Before testing, you need a weather API backend running. Use one of the following guides to set up a weather API using the backend of your choice:
- How to Build a Weather API with Express and Vercel
- How to Build a Weather API with FastAPI and Vercel
- How to Build a Weather API with Nitro and Vercel
Return to your Next.js project and start the development server:

```bash
cd ../nextjs-agent
pnpm dev
```
Start your weather API backend from a new terminal using `vercel dev` and make sure it runs on http://localhost:3001.
Open http://localhost:3000 in your browser. Try these example conversations:
- "What's the weather in London?"
- "Tell me about the weather in San Francisco"
- "How's the temperature in Tokyo today?"
- "Is it hot in Dubai right now?"
The agent will:
- Understand your weather request
- Extract the city name
- Call the `getWeather` tool automatically
- Format and present the weather data in a conversational way
- If you chose the AI Gateway API key for authentication, add it to your Vercel project's environment variables dashboard. Otherwise, the OIDC token is already configured.
- Push the changes to your remote repository or run the `vercel` CLI command. Vercel will create a new preview deployment for you to test.
- Merge to the `main` branch or run `vercel --prod` to deploy to production.
Visit your production deployment link to chat with your AI weather agent.
The AI SDK's agent system provides intelligent tool calling that:
- Automatically determines when to use tools based on user messages and the tools you define
- Validates tool inputs with type-safe `zod` schemas
- Supports multi-step reasoning so the agent can call multiple tools in sequence
Review How to build AI Agents with Vercel and the AI SDK to understand the fundamentals of building agents.
Consider adding the following:
- Retry logic for failed API calls
- Fallback responses when tools fail
- Detailed error logging for debugging
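The retry idea above could be sketched as a small wrapper around the tool's fetch call. This is an illustrative helper with exponential backoff, not an AI SDK feature, and `withRetry` is a hypothetical name:

```typescript
// Illustrative retry wrapper: retries an async operation with
// exponential backoff before surfacing the last error.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseMs, 2*baseMs, 4*baseMs, ... between attempts.
      await new Promise(resolve => setTimeout(resolve, baseMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Inside the tool's `execute`, you would wrap the `fetch` call, e.g. `await withRetry(() => fetch(url))`.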
Protect your API endpoint by rate limiting calls to the LLM and to your tool endpoints, for example with the Vercel Firewall rate limiting SDK.
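To illustrate the idea, here is a minimal fixed-window limiter. It keeps counts in memory, so it only works on a single long-lived instance; for production on Vercel, prefer a shared store such as the Firewall rate limiting SDK mentioned above:

```typescript
// Illustrative fixed-window rate limiter keyed by caller (e.g. IP or
// user ID). Returns true while the caller is under the limit for the
// current window, false once the limit is exceeded.
function createRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; windowStart: number }>();
  return (key: string): boolean => {
    const now = Date.now();
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count++;
    return entry.count <= limit;
  };
}
```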
In this tutorial, you've built an AI chat agent that intelligently calls weather APIs based on natural language conversations.
You learned to:
- Configure the AI SDK with agent capabilities
- Define type-safe tools with `zod` schemas
- Build a streaming chat UI
- Integrate with backend APIs for real-time data
- Handle tool calling and error scenarios
Extend your knowledge by:
- Adding more tools (currency conversion, news, stock prices)
- Implementing conversation history persistence
- Adding authentication and user sessions
- Building a mobile app with React Native and the same agent