The maximum uncompressed deployment bundle size for Vercel Functions using the Python runtime has increased from 250 MB to 500 MB.
GPT-5.3-Codex is now available on AI Gateway. GPT-5.3-Codex brings together the coding strengths of GPT-5.2-Codex and the reasoning depth of GPT-5.2 in a single model that's 25% faster and more token-efficient.
Built for long-running agentic work, the model handles research, tool use, and multi-step execution across the full software lifecycle, from debugging and deployment to product documents and data analysis. Additionally, you can steer it mid-task without losing context. For web development, it better understands underspecified prompts and defaults to more functional, production-ready output.
To use this model, set the model ID to openai/gpt-5.3-codex in the AI SDK.
import { streamText } from 'ai';

const result = streamText({
  model: 'openai/gpt-5.3-codex',
  prompt: `Research our current API architecture, identify performance
    bottlenecks, refactor the slow endpoints, add monitoring,
    and deploy the changes to staging.`,
});
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
The Slack Agent Skill is now available, enabling developers to build and deploy Slack agents in a single session with their coding agent of choice.
The skill handles the complexity of OAuth configuration, webhook handlers, event subscriptions, and deployment so you can focus on what your agent should do rather than on infrastructure setup.
The wizard walks through five stages:
Project setup: Choose your LLM provider and initialize from the Slack Agent Template
Slack app creation: Generate a customized app manifest and create the app in Slack's console
Environment configuration: Set up signing secrets, bot tokens, and API keys with validation
Local testing: Run locally with ngrok and verify the integration
Production deployment: Deploy to Vercel with environment variables configured automatically
Install the skill and run the wizard by invoking it in your coding agent (for example, /slack-agent new in Claude Code).
npx skills add vercel-labs/slack-agent-skill
Try the skill to make your custom agent or use the Slack Agent Template to deploy right away and customize later.
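Under the hood, the signing secret configured in stage three is what lets your webhook handler confirm that incoming requests really come from Slack. Slack's documented request-signing scheme can be verified roughly like this (a minimal sketch using Node's crypto; the code the skill generates may differ):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Verify a Slack webhook request per Slack's signing scheme:
// signature = 'v0=' + HMAC-SHA256(signingSecret, `v0:${timestamp}:${rawBody}`)
function verifySlackRequest(
  signingSecret: string,
  timestamp: string,
  rawBody: string,
  signature: string,
): boolean {
  const base = `v0:${timestamp}:${rawBody}`;
  const expected =
    'v0=' + createHmac('sha256', signingSecret).update(base).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Length check guards timingSafeEqual, which throws on unequal lengths.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

In production you would also reject requests whose timestamp is too old, to prevent replay attacks.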
Building chatbots across multiple platforms traditionally requires maintaining separate codebases and handling individual platform APIs.
Today, we're open sourcing the new Chat SDK in public beta. It's a unified TypeScript library that lets teams write bot logic once and deploy it to Slack, Microsoft Teams, Google Chat, Discord, GitHub, and Linear.
The event-driven architecture includes type-safe handlers for mentions, messages, reactions, button clicks, and slash commands. Teams can build user interfaces using JSX cards and modals that render natively on each platform.
The SDK handles distributed state management using pluggable adapters for Redis, ioredis, and in-memory storage.
Chat SDK post() functions accept an AI SDK text stream, enabling real-time streaming of AI responses and other incremental content to chat platforms.
import { ToolLoopAgent } from "ai";

const agent = new ToolLoopAgent({
  model: "anthropic/claude-4.6-sonnet",
  instructions: "You are a helpful assistant.",
});

bot.onNewMention(async (thread, message) => {
  const result = await agent.stream({ prompt: message.text });
  await thread.post(result.textStream);
});
The framework starts with the core chat package and scales through modular platform adapters. Guides are available for building a Slack bot with Next.js and Redis, a Discord support bot with Nuxt, a GitHub bot with Hono, and automated code review bots.
Vercel Sandbox can now automatically inject HTTP headers into outbound requests from sandboxed code. This keeps API keys and tokens safely outside the sandbox VM boundary, so apps running inside the sandbox can call authenticated services without ever accessing the credentials.
Header injection is configured as part of the network policy using transform. When the sandbox makes an HTTPS request to a matching domain, the firewall adds or replaces the specified headers before forwarding the request.
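As a rough illustration of the shape such a policy takes, a transform rule pairs a domain match with the headers to inject. The field names below are assumptions made for this sketch, not the exact Sandbox SDK schema; see the documentation for the real types:

```typescript
// Illustrative only: a transform rule that injects an Authorization header
// on outbound HTTPS requests to ai-gateway.vercel.sh. Field names are
// assumptions for this sketch, not the actual Sandbox SDK schema.
const networkPolicy = {
  egress: 'allow-all',
  transform: [
    {
      match: { host: 'ai-gateway.vercel.sh' },
      setHeaders: {
        Authorization: `Bearer ${process.env.AI_GATEWAY_API_KEY ?? ''}`,
      },
    },
  ],
};
```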
// Code inside the sandbox calls AI Gateway without knowing the API key
const result = await sandbox.runCommand('curl', [
  '-s',
  'https://ai-gateway.vercel.sh/v1/models',
]);
This is designed for AI agent workflows where prompt injection is a real threat. Even if an agent is compromised, there's nothing to exfiltrate, as the credentials only exist in a layer outside the VM.
Injection rules work with all egress network policy configurations, including open internet access, so you can allow general traffic while still injecting credentials for specific services.
Like all network policy settings, injection rules can be updated on a running sandbox without restarting it. This enables multi-phase workflows: inject credentials during setup, then remove them before running untrusted code.
HTTPS only: Injection applies to outbound HTTPS requests to matching domains.
Full replacement: Injected headers overwrite any existing headers with the same name set by sandbox code, preventing the sandbox from substituting its own credentials.
Domain matching: Supports exact domains and wildcards (e.g., *.github.com). Injection only triggers when the outbound request matches.
Works with all policies: Combine injection rules with allow-all, or domain-specific allow lists.
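The domain-matching behavior described above can be sketched as a simple matcher (illustrative only, not the firewall's actual implementation):

```typescript
// Match a hostname against an exact domain or a wildcard like *.github.com.
// In this sketch, a wildcard matches subdomains (api.github.com) but not
// the bare apex (github.com).
function matchesDomain(pattern: string, host: string): boolean {
  if (pattern.startsWith('*.')) {
    return host.endsWith(pattern.slice(1)); // requires the '.github.com' suffix
  }
  return host === pattern;
}
```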
Available to all Pro and Enterprise customers. Learn more in the documentation.
Support for the legacy now.json config file will be officially removed on March 31, 2026. Migrate existing now.json files by renaming them to vercel.json; no other content changes are required.
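Since the migration is a pure rename, it can be scripted, for example with Node's fs module (a convenience sketch, not an official migration tool):

```typescript
import { existsSync, renameSync } from 'node:fs';

// Rename a legacy now.json to vercel.json in the given directory.
// The contents carry over unchanged; returns true if a rename happened.
function migrateNowJson(dir = '.'): boolean {
  const from = `${dir}/now.json`;
  const to = `${dir}/vercel.json`;
  if (existsSync(from) && !existsSync(to)) {
    renameSync(from, to);
    return true;
  }
  return false;
}
```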
For more advanced use cases, try vercel.ts for programmatic project configuration.
Learn more about configuring projects with vercel.json in the documentation.
Generate high-quality videos with natural motion and audio using xAI's Grok Imagine Video, now in AI Gateway. Try it out now via the v0 Grok Creative Studio, via AI SDK 6, or by selecting the model in the AI Gateway playground.
Grok Imagine is known for realistic motion and strong instruction following:
Fast Generation: Generates clips in seconds rather than minutes
Instruction Following: Understands complex prompts and follow-up instructions to tweak scenes
Video Editing: Transform existing videos by changing style, swapping objects, or altering scenes
Audio & Dialogue: Native audio generation with natural, expressive voices and accurate lip-sync
Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.
v0 Grok Creative Studio: The v0 team built a template, powered by AI Gateway, for creating and showcasing Grok video and image generations.
AI SDK 6: Generate videos programmatically with AI SDK 6's generateVideo.
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'xai/grok-imagine-video',
  prompt: 'A golden retriever catching a frisbee mid-air at the beach',
});
Gateway Playground: Experiment with video models in the configurable AI Gateway playground that's embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access it, click any video generation model in the model list.
Generate stylized videos and transform existing footage with Alibaba's Wan models, now available through AI Gateway. Try them out now via AI SDK 6 or by selecting the models in the AI Gateway playground.
Wan produces artistic videos with smooth motion and can use existing content to keep videos consistent:
Character Reference (R2V): Extract character appearance and voice from reference videos/images to generate new scenes
Flash Variants: Faster generation times for quick iterations
Flexible Resolutions: Support for 480p, 720p, and 1080p output
Video generation is in beta and currently available for Pro and Enterprise plans and paid AI Gateway users.
AI SDK 6: Generate videos programmatically with AI SDK 6's generateVideo.
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'alibaba/wan-v2.6-t2v',
  prompt: 'Watercolor painting of a koi pond coming to life.',
});
Gateway Playground: Experiment with video models in the configurable AI Gateway playground that's embedded in each model page. Compare providers, tweak prompts, and download results without writing code. To access it, click any video generation model in the model list.
Generate a stylized video from a text description.
You can use detailed prompts and specify styles with the Wan models to achieve the desired output. This example uses alibaba/wan-v2.6-t2v:
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'alibaba/wan-v2.6-t2v',
  prompt: `Animated rainy Tokyo street at night, anime style,
    neon signs reflecting on wet pavement, people with umbrellas
    walking past, red and blue lights glowing through the rain.`,
});
Generate new scenes using characters extracted from reference images or videos.
In this example, two reference images of dogs are used to generate the final video.
Using alibaba/wan-v2.6-r2v-flash, you can instruct the model to use the referenced people or characters within the prompt. For multi-reference-to-video, Wan suggests referring to them as character1, character2, and so on in the prompt to get the best results.
import { experimental_generateVideo as generateVideo } from 'ai';

const { videos } = await generateVideo({
  model: 'alibaba/wan-v2.6-r2v-flash',
  // The reference images are supplied alongside the prompt; see the model's
  // documentation for the exact parameter.
  prompt: `character1 and character2 are playing together on the beach in San Francisco
    with the Golden Gate Bridge in the background, sunny day, waves crashing`,
});