• Updated GitHub App Permissions

    The Vercel GitHub App now requests two additional repository permissions on install: Actions (read) and Workflows (read & write).

    These permissions enable the Vercel Agent to read workflow run logs to help diagnose CI failures and configure CI workflow files on your behalf. This also allows v0 to more effectively create complete, production-ready repositories with properly configured CI/CD pipelines. To take advantage of these features, you'll need to accept the updated permissions in your GitHub organization or account settings.

    For full details on all permissions requested by the Vercel GitHub App, check out the documentation.

  • Introducing the Vercel plugin for coding agents

    Claude Code and Cursor can now better understand Vercel projects using the new Vercel plugin and a full platform knowledge graph.

    The plugin observes real-time activity, including file edits and terminal commands, to dynamically inject Vercel knowledge into the agent's context. Key capabilities include:

    • Platform knowledge: Access 47+ skills covering the Vercel platform, including Next.js, AI SDK, Turborepo, Vercel Functions, and Routing Middleware, powered by a relational knowledge graph

    • Specialized tooling: Use three specialist agents (AI Architect, Deployment Expert, Performance Optimizer) and five slash commands (/bootstrap, /deploy, /env, /status, /marketplace)

    • Context management: An injection engine and project profiler rank, deduplicate, and budget-control loaded context

    • Code validation: PostToolUse validation catches deprecated patterns, sunset packages, and stale APIs in real time

    Instead of standard retrieval, the plugin compiles pattern matchers at build time and runs a priority-ranked injection pipeline across seven lifecycle hooks. Skills fire when glob patterns, bash regexes, import statements, or prompt signals match, and are then deduplicated across the session to ensure accurate agent responses.
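The selection logic described above can be pictured as a small pure function. This is a hypothetical sketch of priority-ranked injection with session-level deduplication and a context budget; the names and structure are illustrative, not the plugin's actual implementation:

```typescript
// Illustrative model of a skill: pattern matchers compiled ahead of time,
// a priority used for ranking, and the content to inject.
type Skill = {
  id: string;
  priority: number;   // higher wins when the context budget is tight
  triggers: RegExp[]; // compiled matchers for globs, commands, imports, prompts
  content: string;
};

function selectSkills(
  skills: Skill[],
  signal: string,         // e.g. a file path, terminal command, or prompt
  injected: Set<string>,  // skill ids already injected this session
  budget: number,         // max skills to inject on this turn
): Skill[] {
  return skills
    .filter((s) => s.triggers.some((t) => t.test(signal))) // fire on matching signals
    .filter((s) => !injected.has(s.id))                    // dedupe across the session
    .sort((a, b) => b.priority - a.priority)               // rank by priority
    .slice(0, budget);                                     // enforce the budget
}
```

In this model, editing `next.config.ts` would fire a Next.js skill once, and subsequent matches in the same session would be suppressed by the dedup set.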

    The plugin currently supports Claude Code and Cursor, with OpenAI Codex support coming soon.

    Install the plugin via npx:

    npx plugins add vercel/vercel-plugin

    Directly in Claude Code via the official marketplace:

    /plugin install vercel

    Or directly in Cursor:

    /add-plugin vercel

    Explore the source code in the Vercel plugin repository.

  • Updates to Terms of Service

    Agents are reshaping the tools developers use, the applications they build, and the infrastructure that runs them. We’ve updated our Terms of Service and Privacy Policy to reflect how Vercel uses data to support agentic features, improve our platform, and contribute to the AI ecosystem.

    What is changing?

    Agentic infrastructure capabilities

    We are developing features that allow Vercel to do more to keep your apps running efficiently, including:

    • Proactively investigating and mitigating incidents

    • Analyzing web app performance data and suggesting improvements

    • Identifying where your spend is going and creating PRs to optimize usage

    Vercel may also use data to improve our tools for fighting fraud and abuse on the Vercel platform.

    Optional AI model training

    You may choose whether to allow Vercel to:

    • Use your code and Vercel agent chats to improve Vercel models

    • Share your code and Vercel agent chats with AI model providers for training purposes only

    Defaults by plan for optional AI model training:

    • Hobby (including Trial Pro): Opted in for AI model training by default, with self-serve opt-out in Team and Project Settings

    • Pro (paid): Opted out of AI model training by default, with self-serve opt-in in Team and Project Settings

    • Enterprise: Opted out of any AI model training

    Sharing this data helps improve the performance of agentic tools for everyone. Participating in this model training program is fully optional, with easy opt-out in Team Settings → Data Preferences. If you choose to opt out by March 31, 2026 at 11:59:59 PST, Vercel will not use your data to train AI or share it with third parties. If you choose to opt out after that date, your data will not be used or shared from that point forward.

    If you are opted in, the training datasets would include:

    • Code and Vercel agent chats

    • Build and deployment telemetry data and build errors

    • Aggregate traffic stats

    Personal information, account details, environment variables, API keys, and other sensitive content are all anonymized and redacted before use or sharing.

    Other changes to our Terms of Service include updated dispute resolution processes, billing practices, and provisions to reflect compliance with the latest data protection laws. While arbitration has always been our method of resolving disputes with international and Enterprise customers, it now also applies to U.S.-based customers. The opt-out process described in Section 21 of our Terms of Service is unchanged.

    Frequently Asked Questions

  • Use GPT-5.4 Mini and Nano on AI Gateway

    GPT-5.4 Mini and GPT-5.4 Nano from OpenAI are now available on Vercel AI Gateway. Both models deliver state-of-the-art performance for their size class in coding and computer use, and are built for sub-agent workflows where multiple smaller models coordinate on parts of a larger task.

    The models also support the verbosity and reasoning level parameters, giving you control over response detail and how much the model reasons before answering.

    GPT-5.4 Mini

    GPT-5.4 Mini handles code generation, tool orchestration, and multi-step browser interactions more reliably than previous mini-tier models. It's a strong default for agentic tasks that need to balance capability and cost. To use this model, set model to openai/gpt-5.4-mini in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.4-mini',
      prompt: `Scaffold a new Next.js API route that connects to our
    Postgres database, validates the incoming webhook payload,
    and writes the event to the audit_logs table.`,
    });

    GPT-5.4 Nano

    GPT-5.4 Nano performs close to GPT-5.4 Mini in evaluations at a lower price point. The model is well-suited for high-volume use cases like sub-agent workflows where cost scales with the number of parallel calls. To use this model, set model to openai/gpt-5.4-nano in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.4-nano',
      prompt: `Check each file in the PR diff for unused imports,
    flag any that can be removed, and return the results
    as a JSON array with file path and line number.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Streamdown 2.5 is here


    Streamdown is a React component library that makes rendering streaming markdown content easy and beautiful. Built for AI-powered applications, it handles the unique challenges that arise when markdown is tokenized and streamed in real time.

    v2.5 adds inline KaTeX support and staggered streaming animations, along with fixes for code blocks, CSV exports, and Tailwind v3 compatibility.

    Streaming parser improvements

    The new inlineKatex option auto-completes $formula to $formula$ during streaming. It defaults to false to avoid ambiguity with currency symbols. Block KaTeX completion is also fixed when streaming produces a partial closing $.

    Separately, single ~ between word characters (e.g. 20~25°C) is now escaped to prevent false strikethrough rendering, controlled via a new singleTilde option that is enabled by default.
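The single-tilde rule can be illustrated with a small helper. This is a sketch of the behavior described above, not Streamdown's actual source: a lone ~ between word characters is escaped, while ~~strikethrough~~ delimiters are left alone.

```typescript
// Escape a lone ~ that sits between word characters (e.g. "20~25°C"),
// which markdown would otherwise treat as opening a strikethrough span.
// Double ~~ is untouched because a tilde is not a word character.
function escapeSingleTilde(text: string): string {
  return text.replace(/(\w)~(?=\w)/g, (_, prev: string) => `${prev}\\~`);
}
```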

    Staggered streaming animations

    Streaming word and character animations now cascade sequentially rather than animating all at once. The timing is configurable via a new stagger option (default 40ms). Set stagger: 0 to restore the previous behavior.
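Conceptually, the cascade assigns each streamed token a start delay that grows with its position. This is an illustrative sketch of that timing model (not Streamdown's internals), where stagger: 0 collapses all delays to zero and restores the all-at-once behavior:

```typescript
// Each token's animation begins `stagger` milliseconds after the
// previous one, producing a sequential cascade instead of a single
// simultaneous reveal.
function staggerDelays(tokens: string[], stagger = 40): number[] {
  return tokens.map((_, i) => i * stagger);
}
```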

    Code blocks

    Custom renderers now receive the raw metastring from the code fence via a new optional meta prop, and the lineNumbers prop lets you disable line numbers.

    Long lines now scroll horizontally instead of being clipped, completed blocks no longer re-render when new streaming content arrives, and unknown or truncated language identifiers fall back to plain text highlighting instead of throwing an error.

    Bug fixes

    save() now prepends a UTF-8 BOM for text/csv content, so Excel on Windows correctly detects encoding. Tailwind v4-only *:last: / *:first: syntax is replaced with arbitrary variant equivalents, fixing caret rendering in Tailwind CSS v3.
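The BOM fix matters because Excel on Windows sniffs the first bytes of a .csv file and falls back to a legacy codepage when no byte-order mark is present. A minimal sketch of the idea (an illustrative helper, not Streamdown's save() implementation):

```typescript
// U+FEFF at the start of a file is the UTF-8 byte-order mark once encoded.
const UTF8_BOM = '\uFEFF';

// Prepend the BOM so spreadsheet apps detect UTF-8, avoiding a double
// prefix if the content already carries one.
function withBom(csv: string): string {
  return csv.startsWith(UTF8_BOM) ? csv : UTF8_BOM + csv;
}
```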

    Read the documentation to get started.

  • LiteLLM server now supported on Vercel

    You can now deploy LiteLLM server on Vercel, giving developers an OpenAI-compatible gateway that connects to any supported provider, including Vercel AI Gateway.

    app.py
    from litellm.proxy import proxy_server
    app = proxy_server.app

    Basic LiteLLM Gateway app

    To route a single model through Vercel AI Gateway, use the below configuration in litellm_config.yaml:

    litellm_config.yaml
    model_list:
      - model_name: gpt-5.4-gateway
        litellm_params:
          model: vercel_ai_gateway/openai/gpt-5.4
          api_key: os.environ/VERCEL_AI_GATEWAY_API_KEY

    Routing a model through Vercel AI Gateway in LiteLLM

    Deploy LiteLLM on Vercel or learn more in our documentation.

  • next-forge 6 is now available

    next-forge is a production-grade Turborepo template for Next.js apps, designed to be a comprehensive, opinionated starting point for new projects.

    This major release comes with a number of DX improvements, an agent skill, and new guides for quickstart, Docker, and migration paths.

    next-forge skill

    You can now install a next-forge skill into your preferred agent, giving it structured knowledge of next-forge architecture, packages, and common tasks.

    npx skills add vercel/next-forge

    Bun by default

    The default package manager is now Bun. The CLI init script detects your current package manager before prompting, and pnpm, npm, and yarn are still supported through the init flow.

    Graceful degradation

    Every optional integration now silently degrades when its environment variables are missing, rather than throwing an error. Stripe, PostHog, BaseHub, and feature flags all return safe defaults. The only required environment variable to boot the project is DATABASE_URL.
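The graceful-degradation pattern can be sketched as a factory that returns a safe no-op client when configuration is absent. This is a hypothetical illustration, not next-forge's actual code; the POSTHOG_KEY variable name and the AnalyticsClient shape are assumptions for the example:

```typescript
type AnalyticsClient = {
  enabled: boolean;
  capture: (event: string) => void;
};

// When the integration's env var is missing, return a no-op client with
// safe defaults instead of throwing, so the project still boots.
function createAnalytics(env: Record<string, string | undefined>): AnalyticsClient {
  const key = env.POSTHOG_KEY; // illustrative variable name
  if (!key) {
    return { enabled: false, capture: () => {} }; // events silently dropped
  }
  return {
    enabled: true,
    capture: (_event) => {
      /* forward to the real provider here */
    },
  };
}
```

The same shape applies to the other optional integrations: each factory checks its own variables and substitutes an inert default.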

    New guides

    The quickstart guide gets you to a running dev server in a few minutes with just Clerk and a Postgres database.

    There is also a new Docker deployment guide, along with migration guides for Appwrite (auth, database, storage), Convex (database), and Novu (notifications).

    Read the documentation to get started.

  • Vercel now supports Domain Connect as a DNS Provider


    Vercel now supports Domain Connect as a DNS provider, enabling external services to configure Vercel domains.

    Teams that use Vercel as their DNS host can set up their domain in one click without manually copying DNS records. This provides faster setup, fewer copy-and-paste mistakes, and less provider-specific documentation.

    We are launching this capability with Resend. When configuring email for a custom domain in Resend, teams can automatically provision the necessary DNS records directly on their associated Vercel domain.

    This update streamlines domain management for domains purchased through or transferred to Vercel.

    To request support for a specific Domain Connect template, contact our team.