• Vercel Flags are now optimized for agents

    The Vercel CLI now supports programmatic flag management, giving teams a direct way to create and manage feature flags from the terminal without opening the dashboard.

    vercel flags create my-flag

    Add the Flags SDK skill

    Building on this foundation, the Flags SDK skill lets AI agents generate and manage flags through natural language prompts.

    The skill leverages the CLI under the hood, enabling agents to implement server-side evaluation that prevents layout shifts and maintains confidentiality. Using the SDK's adapter pattern, agents can connect multiple providers and evaluate user segments without rewriting core flag logic.
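For context, a server-evaluated flag defined with the Flags SDK looks roughly like this. This is a minimal sketch based on the SDK's documented `flag` helper for Next.js; the key name and `decide()` logic are placeholders, not a flag the CLI or skill generates:

```typescript
// Illustrative server-side flag definition; `promo-banner` and the
// decide() logic are hypothetical examples.
import { flag } from "flags/next";

export const promoBanner = flag<boolean>({
  key: "promo-banner",
  // decide() runs on the server, so the evaluation never ships to the
  // client bundle and cannot cause layout shift or leak flag logic.
  decide() {
    return false;
  },
});
```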

    npx skills add vercel/flags

    Once added, try this prompt to have your agent create your first flag.


    Add a feature flag for setting up a new promotion banner

    Start generating flags with the Flags SDK skill.

  • Subscribe to webhook events for Vercel Flags

    You can now subscribe to webhook events for deeper visibility into feature flag operations on Vercel.

    New event categories include:

    • Flag management: Track when teams create, modify, or delete flags across your project.

    • Segment management: Receive alerts when segments are created, updated, or deleted.

    These events help teams build monitoring directly into their workflows. You can track the complete lifecycle of your flags, monitor changes across projects, and integrate feature flag data with your external systems.
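When consuming these events, you will typically verify each request before acting on it. A minimal sketch, assuming the payload is signed with an HMAC of the raw request body computed from your webhook secret and delivered in the `x-vercel-signature` header; check the webhook docs for the exact scheme in use:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of webhook verification. Assumption: the signature header
// carries a hex HMAC (sha1 per Vercel's webhook docs) of the raw body.
function isValidSignature(
  rawBody: string,
  signature: string,
  secret: string,
): boolean {
  const expected = createHmac("sha1", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // Constant-time compare; lengths must match before timingSafeEqual.
  return a.length === b.length && timingSafeEqual(a, b);
}
```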

    Read the documentation to start tracking feature flag events.

  • Chat SDK adds WhatsApp adapter support

    Chat SDK now supports WhatsApp. The new adapter extends the SDK's single-codebase approach, which already covers Slack, Discord, GitHub, Teams, and Telegram.

    Teams can build bots that support messages, reactions, auto-chunking, and read receipts. The adapter handles multi-media downloads (e.g., images, voice messages, stickers) and supports location sharing with Google Maps URLs.
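Auto-chunking, for example, splits replies that exceed WhatsApp's message length limit into multiple posts. A rough sketch of the idea (the adapter's actual implementation may differ; 4096 characters is WhatsApp's documented text body limit):

```typescript
// Illustrative chunking: break long text into pieces under the limit,
// preferring newline breaks, then space breaks, then hard cuts.
function chunkMessage(text: string, limit = 4096): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > limit) {
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = rest.lastIndexOf(" ", limit);
    if (cut <= 0) cut = limit;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\s+/, "");
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```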

    Try the WhatsApp adapter today:

    import { Chat } from "chat";
    import { createWhatsAppAdapter } from "@chat-adapter/whatsapp";

    const bot = new Chat({
      userName: "mybot",
      adapters: {
        whatsapp: createWhatsAppAdapter(),
      },
    });

    bot.onNewMention(async (thread, message) => {
      await thread.post(`You said: ${message.text}`);
    });

    The adapter does not support message history, editing, or deletion. Cards render as interactive reply buttons with up to three options, and fall back to formatted text. Additionally, WhatsApp enforces a 24-hour messaging window, so bots can only respond within that period.

    Read the documentation to get started or browse the adapters directory.

    Special thanks to @ghellach, whose community contribution in PR #102 laid the groundwork for this adapter.

  • Improved data collection for Web Analytics and Speed Insights with resilient intake

    Web Analytics and Speed Insights version 2 introduces resilient intake to improve data collection reliability. By dynamically discovering endpoints instead of relying on a single predictable path, the new packages ensure you capture more complete traffic and performance data.

    To utilize resilient intake, update your packages and deploy your changes. No other configuration is required, and existing implementations will continue working as before. It's available to all teams at no additional cost.

    Install the latest versions

    npm install @vercel/analytics@latest

    npm install @vercel/speed-insights@latest

    These packages include a license change from Apache-2.0 to MIT to align with other open source packages. Nuxt applications can leverage Nuxt modules for a one-line installation of Speed Insights and Web Analytics.

    Update your packages to capture more data, or explore the Web Analytics documentation and Speed Insights documentation.

    Damien Simonin Feugas

  • Vercel Sandbox now supports 1 vCPU + 2 GB RAM configurations

    Vercel Sandbox now supports creating Sandboxes with only 1 vCPU and 2 GB of RAM. This is ideal for single-threaded or light workloads which don't benefit from additional system resources. When unspecified, the default is still 2 vCPUs and 4 GB of RAM.

    Get started by setting the resources.vcpus option in the SDK:

    import { Sandbox } from "@vercel/sandbox";

    const sandbox = await Sandbox.create({
      resources: { vcpus: 1 },
    });

    Or using the --vcpus option in the CLI:

    sandbox create --connect --vcpus 1

    Learn more about Sandbox in the docs.

  • Chat SDK now has an adapter directory

    Chat SDK now has an adapter directory, so you can search platform and state adapters from Vercel and the community.

    These include:

    • Official adapters: maintained by the core Chat SDK team and published under @chat-adapter/*

    • Vendor-official adapters: built and maintained by the platform companies themselves, like Resend and Beeper. These live in the vendor's GitHub org and are documented in the vendor's docs.

    • Community adapters: built by third-party developers and published independently, following the same model as AI SDK community providers.

    We encourage teams to build and submit adapters to be included in this new directory, like Resend's adapter that connects email to Chat SDK:

    import { Chat } from "chat";
    import { MemoryStateAdapter } from "@chat-adapter/state-memory";
    import { createResendAdapter } from "@resend/chat-sdk-adapter";

    const resend = createResendAdapter({
      fromAddress: "bot@yourdomain.com",
    });

    const chat = new Chat({
      userName: "email-bot",
      adapters: { resend },
      state: new MemoryStateAdapter(),
    });

    // New inbound email (new thread)
    chat.onNewMention(async (thread, message) => {
      await thread.subscribe();
      await thread.post(`Got your email: ${message.text}`);
    });

    An agent workflow that triages support emails or sends follow-ups uses the same handlers and card primitives as a Slack bot.

    Browse the adapter directory or read the contributing guide to learn how to build, test, document, and publish your own adapter.

  • AI Gateway supports OpenAI's Responses API

    OpenAI's Responses API is now available through AI Gateway. The Responses API is a modern alternative to the Chat Completions API. Point your OpenAI SDK to AI Gateway's base URL and use the creator/model names to route requests. TypeScript and Python are both supported. All of the functionality in the Responses API was already accessible through AI Gateway via the AI SDK and Chat Completions API, but you can now use the Responses API directly.

    What you can do

    • Text generation and streaming: Send prompts, get responses, stream tokens as they arrive

    • Tool calling: Define functions the model can invoke, then feed results back

    • Structured output: Constrain responses to a JSON schema

    • Reasoning: Control how much effort the model spends thinking with configurable effort levels

    Getting started

    Install the OpenAI SDK and point it at AI Gateway.

    npm install openai

    import OpenAI from 'openai';

    const client = new OpenAI({
      apiKey: process.env.AI_GATEWAY_API_KEY,
      baseURL: 'https://ai-gateway.vercel.sh/v1',
    });

    Basic example: text generation

    Send a prompt and get a response from any supported model.

    const response = await client.responses.create({
      model: 'openai/gpt-5.4',
      input: 'What is the best restaurant in San Francisco?',
    });

    Structured output with reasoning

    Combine reasoning levels with a JSON schema to get structured responses.

    const response = await client.responses.create({
      model: 'anthropic/claude-sonnet-4.6',
      input: 'Build a Next.js app with auth and a dashboard page.',
      reasoning: { effort: 'high' },
      text: {
        format: {
          type: 'json_schema',
          name: 'app_plan',
          strict: true,
          schema: {
            type: 'object',
            properties: {
              files: { type: 'array', items: { type: 'string' } },
              summary: { type: 'string' },
            },
            required: ['files', 'summary'],
            additionalProperties: false,
          },
        },
      },
    });

    To learn more about the Responses API, read the documentation.

  • Chat SDK adds table rendering and streaming markdown

    Chat SDK now renders tables natively across all platform adapters and converts markdown to each platform's native format during streaming.

    The Table() component is a new card element in Chat SDK that gives you a clean, composable API for rendering tables across every platform adapter. Pass in headers and rows, and Chat SDK handles the rest.

    import { Table } from "chat";

    await thread.post(
      Table({
        headers: ["Model", "Latency", "Cost"],
        rows: [
          ["claude-4.6-sonnet", "1.2s", "$0.003"],
          ["gpt-4.1", "0.9s", "$0.005"],
        ],
      })
    );

    The adapter layer converts the table to the best format each platform supports.

    Slack renders Block Kit table blocks, Teams and Discord use GFM markdown tables, Google Chat uses monospace text widgets, and Telegram converts tables to code blocks. GitHub and Linear already supported tables through their markdown pipelines and continue to work as before. Plain markdown tables (without Table()) are also converted through the same pipeline.
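The markdown fallback is straightforward to picture: render the headers and rows as a GFM table, the format the Teams and Discord adapters use. An illustrative sketch, not the SDK's actual implementation:

```typescript
// Render headers and rows as a GitHub Flavored Markdown table.
function toGfmTable(headers: string[], rows: string[][]): string {
  const line = (cells: string[]) => `| ${cells.join(" | ")} |`;
  return [
    line(headers),
    line(headers.map(() => "---")), // separator row required by GFM
    ...rows.map(line),
  ].join("\n");
}
```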

    Streaming markdown has also improved across the board. Slack's native streaming path now renders bold, italic, lists, and other formatting in real time as the response arrives, rather than resolving when the message is complete. All other platforms use the fallback streaming path, so streamed text now passes through each adapter's markdown-to-native conversion pipeline at each intermediate edit. Previously, these adapters received raw markdown strings, so users saw literal **bold** syntax until the final message.
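As an illustration of that conversion step: Slack's mrkdwn uses single asterisks for bold and underscores for italic, so a converter must rewrite markdown emphasis rather than pass it through. A minimal sketch covering just those two cases (the SDK's real pipeline handles far more syntax):

```typescript
// Minimal markdown-to-mrkdwn sketch: Slack renders *bold* and _italic_.
// Convert single-asterisk italics first so the bold rewrite below does
// not re-match its own output.
function toMrkdwn(md: string): string {
  return md
    .replace(/(?<!\*)\*([^*]+)\*(?!\*)/g, "_$1_") // *italic* -> _italic_
    .replace(/\*\*([^*]+)\*\*/g, "*$1*"); // **bold** -> *bold*
}
```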

    Adapters without platform-specific rendering now include improved defaults, so new formatting capabilities work across all platforms without requiring adapter-by-adapter updates.

    Update to the latest Chat SDK to get started, and view the documentation.