• Limit on-demand concurrent builds to one build per branch

    On-Demand Concurrent Builds let builds skip the queue and run immediately, instead of waiting for other deployments to finish.

    You can now configure this feature to run one active build per branch. When enabled, deployments to the same branch are queued. After the active build finishes, only the most recent queued deployment starts building. Older queued deployments are skipped. Deployments on different branches can still build concurrently.

    Enable this in your project settings or learn more in the documentation.

  • Bookmark domains on Vercel Domains

    You can now bookmark domains on Vercel Domains for purchasing at a later date.

    To save a domain, either:

    • Click on a search result, and select "Save for later"

    • Select the bookmark icon on a domain in your cart

    You can then view your saved domains and add them to your cart from the "Saved" tab.

    Try it now at vercel.com/domains

  • Introducing bash-tool for filesystem-based context retrieval

    We open-sourced bash-tool, the Bash execution engine behind our text-to-SQL agent, which we recently re-architected to reduce token usage and improve the accuracy and overall performance of the agent's responses.

    bash-tool gives your agent a way to find the right context by running bash-like commands over files, then returning only the results of those tool calls to the model.

    Context windows fill up quickly when you include large amounts of text in a prompt. Agents tend to do well with Unix-style workflows like find, grep, jq, and pipes, so with bash-tool you can keep large context local, in a filesystem, and let the agent use those commands to retrieve smaller slices of context on demand.

    bash-tool provides bash, readFile, and writeFile tools for AI SDK agents and works with both in-memory and sandboxed environments. It:

    • runs on top of just-bash, which interprets bash scripts directly in TypeScript without a shell process or arbitrary binary execution

    • lets you preload the filesystem with your files at startup, so your agent can search them when needed without pasting everything into the prompt

    • supports running in-memory or in a custom isolated VM

    agent.ts
    import { ToolLoopAgent } from "ai";
    import { createBashTool } from "bash-tool";

    // Preload an in-memory filesystem the agent can search with bash commands
    const { tools } = await createBashTool({
      files: { "src/index.ts": "export const hello = 'world';" },
    });

    // Model ID is an example; any AI SDK v6 model works here
    const agent = new ToolLoopAgent({ model: "anthropic/claude-sonnet-4.5", tools });

    Using bash-tool with an in-memory filesystem
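    From there, invoke the agent as usual. A minimal sketch, assuming the AI SDK v6 agent interface:

    // The agent runs bash-tool commands (grep, cat, etc.) over the preloaded files
    const result = await agent.generate({
      prompt: "What does src/index.ts export?",
    });
    console.log(result.text);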

    If you need a real shell, a real filesystem, or custom binaries, you can run the same tool against a Sandbox-compatible API for full VM isolation.

    agent.ts
    import { ToolLoopAgent } from "ai";
    import { createBashTool } from "bash-tool";
    import { Sandbox } from "@vercel/sandbox";

    // Point the same tools at an isolated VM with a real shell and filesystem
    const sandbox = await Sandbox.create();
    const { tools } = await createBashTool({ sandbox });

    // Model ID is an example; any AI SDK v6 model works here
    const agent = new ToolLoopAgent({ model: "anthropic/claude-sonnet-4.5", tools });

    Using bash-tool with a Vercel sandbox

    Try bash-tool in your agent

    Install the package along with AI SDK v6, and start building your filesystem agent.

    Get started

  • Secure Compute is now self-serve

    Teams can now create, update, and delete Secure Compute networks directly from the Vercel dashboard, the API, and Terraform.

    Secure Compute networks provide private connectivity between your Vercel Functions and backend infrastructure, and let you control the regional placement, addressing, egress, and failover of your projects.

    Now you can:

    • Manage networks self-serve, with no contract amendment or manual provisioning required.

    • Configure existing Secure Compute capabilities directly, including Region and Availability Zone selection, active/passive failover, private CIDR selection, and NAT/egress behavior.

    • Automate and integrate with full network lifecycle support through the Dashboard, public API, and Terraform, so teams can manage networks interactively or declaratively.

    • And coming soon: self-serve Site-to-Site VPN connections via the Dashboard, API, and Terraform; Secure Compute for Pro customers; and PrivateLink connectivity.

    This is available today for Enterprise teams.

    Check out the documentation to get started.

  • Vercel Agent code reviews now follow your code guidelines

    Vercel Agent now applies your repository’s coding guidelines during code reviews.

    Add an AGENTS.md file to your repository, or use existing formats like CLAUDE.md, .cursorrules, or .github/copilot-instructions.md.

    Agent automatically detects and applies these guidelines to provide context-specific feedback for your codebase.
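
    For example, a minimal AGENTS.md might look like this (the guidelines themselves are illustrative):

    AGENTS.md
    # Coding guidelines
    - Use TypeScript strict mode for all new files.
    - Prefer named exports over default exports.
    - Co-locate tests next to the modules they cover.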

    No configuration required. Learn more about code guidelines.

  • AI Gateway support for Claude Code

    You can now use Claude Code through Vercel AI Gateway via its Anthropic-compatible API endpoint.

    Route Claude Code requests through AI Gateway to centralize usage and spend, view traces in observability, and benefit from failover between providers for your model of choice.

    Log out if you're already logged in, then set these environment variables to configure Claude Code to use AI Gateway:

    claude /logout
    export ANTHROPIC_BASE_URL="https://ai-gateway.vercel.sh"
    export ANTHROPIC_AUTH_TOKEN="your-ai-gateway-api-key"
    export ANTHROPIC_API_KEY=""

    Setting ANTHROPIC_API_KEY to an empty string is required. Claude Code checks this variable first, and if it's set to a non-empty value, it will use that instead of ANTHROPIC_AUTH_TOKEN.

    Start Claude Code. Requests will route through AI Gateway:

    claude

    See the Claude Code documentation for details.
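
    The same Anthropic-compatible endpoint works outside Claude Code. Here is a minimal sketch using the official Anthropic TypeScript SDK; the model ID below is an assumption, so check the Gateway model list for current IDs:

    gateway.ts
    import Anthropic from "@anthropic-ai/sdk";

    // Point the SDK at AI Gateway's Anthropic-compatible endpoint
    const client = new Anthropic({
      baseURL: "https://ai-gateway.vercel.sh",
      apiKey: process.env.AI_GATEWAY_API_KEY, // your AI Gateway API key
    });

    const message = await client.messages.create({
      model: "anthropic/claude-sonnet-4.5", // assumed Gateway model ID
      max_tokens: 1024,
      messages: [{ role: "user", content: "Hello from AI Gateway" }],
    });

    console.log(message.content);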

  • MiniMax M2.1 now live on Vercel AI Gateway

    You can now access MiniMax's latest model, M2.1, through Vercel's AI Gateway, with no other provider accounts required.

    MiniMax M2.1 is faster than its predecessor, M2, with clear improvements in coding use cases and in complicated multi-step tasks with tool calls. M2.1 writes higher-quality code, follows instructions better on difficult tasks, and has a cleaner reasoning process. The model has breadth in addition to depth, with improved performance across multiple programming languages (Go, C++, JS, C#, TS, etc.) and across tasks such as refactoring, feature additions, bug fixes, and code review.

    To start building with MiniMax M2.1 via AI SDK, set the model to minimax/minimax-m2.1:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'minimax/minimax-m2.1',
      prompt: `Initialize a React + TypeScript project of a sunrise.
        Generate assets with an image tool, compute sun position
        with a time tool, animate it, run tests, and produce a build.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
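
    As one example, per-request failover can be steered by pinning a provider order. A minimal sketch, assuming the Gateway's order provider option (the provider slugs here are illustrative):

    import { streamText } from 'ai';

    const result = streamText({
      model: 'minimax/minimax-m2.1',
      // Try providers in this order, failing over to the next on error
      providerOptions: {
        gateway: { order: ['minimax', 'fireworks'] },
      },
      prompt: 'Summarize these release notes.',
    });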

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.


  • GLM-4.7 available on Vercel AI Gateway

    You can now access Z.ai's latest model, GLM-4.7, through Vercel's AI Gateway, with no other provider accounts required.

    GLM-4.7 comes with major improvements in coding, tool usage, and multi-step reasoning, especially on complex agentic tasks. The model also has a more natural tone for a better conversational experience and can produce a more refined aesthetic for front-end work.

    To start building with GLM-4.7 via AI SDK, set the model to zai/glm-4.7:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'zai/glm-4.7',
      prompt: `Create an interactive weather timeline app, fetch forecasts
        via weather tool, normalize data, render animated charts,
        cache results, and produce a production build.`,
    });
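
    streamText returns immediately; to consume the response as it arrives, read from the result's text stream:

    for await (const text of result.textStream) {
      process.stdout.write(text);
    }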

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.
