
    Featured articles

  • Feb 24

    Security boundaries in agentic architectures

    Most agents today run generated code with full access to your secrets. As more agents adopt coding agent patterns, where they read filesystems, run shell commands, and generate code, they're becoming multi-component systems whose components each need a different level of trust. Most teams run all of these components in a single security context, because that's how the default tooling works, but we recommend thinking about these security boundaries differently. Below we walk through:

    • The actors in agentic systems
    • Where security boundaries should go between them
    • An architecture for running agent and generated code in separate contexts

    All agents are starting to look like coding agents

    More agents are adopting the coding agent architecture. These agents read and write to a filesystem. They run bash, Python, or similar programs to explore their environment. And increasingly, agents generate code to solve particular problems.

    Even agents that aren't marketed as "coding agents" use code generation as their most flexible tool. A customer support agent that generates and runs SQL to look up account data is using the same pattern, just pointed at a database instead of a filesystem. An agent that can write and execute a script can solve a broader class of problems than one limited to a fixed set of tool calls.

    What goes wrong without boundaries

    Consider an agent debugging a production issue. The agent reads a log file containing a crafted prompt injection. The injection tells the agent to write a script that sends the contents of ~/.ssh and ~/.aws/credentials to an external server. The agent generates the script, executes it, and the credentials are gone.

    This is the core risk of the coding agent pattern. Prompt injection gives attackers influence over the agent, and code execution turns that influence into arbitrary actions on your infrastructure. The agent can be tricked into exfiltrating data from its own context, generating malicious software, or both.
    That malicious software can steal credentials, delete data, or compromise any service reachable from the machine the agent runs on. The attack works because the agent, the code the agent generates, and the infrastructure all share the same level of access. To draw boundaries in the right places, you need to understand what these components are and what level of trust each one deserves.

    Four actors in an agentic system

    An agentic system has four distinct actors, each with a different trust level.

    Agent: The agent is the LLM-driven runtime defined by its context, tools, and model. It runs inside an agent harness: the orchestration software, tools, and connections to external services that you build and deploy through a standard SDLC. You can trust the harness the same way you'd trust any backend service, but the agent itself is subject to prompt injection and unpredictable behavior. Information should be revealed on a need-to-know basis; for example, an agent doesn't need to see database credentials to use a tool that executes SQL.

    Agent secrets: Agent secrets are the credentials the system needs to function, including API tokens, database credentials, and SSH keys. The harness manages these responsibly, but they become dangerous when other components can access them directly. The entire architecture discussion below comes down to which components have a path to these secrets.

    Generated code execution: The programs the agent creates and executes are the wildcard. Generated code can do anything the language runtime allows, which makes it the hardest actor to reason about. These programs may need credentials to talk to outside services, but giving generated code direct access to secrets means any prompt injection or model error can lead to credential theft.

    Filesystem: The filesystem and broader environment are whatever the system runs on, whether a laptop, a VM, or a Kubernetes cluster.
    The environment can trust the harness, but it cannot trust the agent to have full access or run arbitrary programs without a security boundary.

    These four actors exist in every agentic system. The question is whether you draw security boundaries between them or let them all run in the same trust domain. A few design principles follow from these trust levels:

    • The harness should never expose its own credentials to the agent directly.
    • The agent should access capabilities through scoped tool invocations, and those tools should be as narrow as possible. An agent performing support duties for a specific customer should receive a tool scoped to that customer's data, not a tool that accepts a customer ID parameter, since that parameter is subject to prompt injection.
    • Generated programs that need their own credentials are a separate concern, which the architectures below address.

    With these actors and principles in mind, here are the architectures we see in practice, ordered from least to most secure.

    Zero boundaries: today's default

    Coding agents like Claude Code and Cursor ship with sandboxes, but these are often off by default. In practice, many developers run agents with no security boundaries. In this architecture, there are no boundaries between any of the four actors. The agent, the agent's secrets, the filesystem, and generated code execution all share a single security context.

    On a developer's laptop, that means the agent can read .env files and SSH keys. On a server, it means access to environment variables, database credentials, and API tokens. Generated code can steal any of these, delete data, and reach any service the environment can reach. The harness may prompt the user for confirmation before certain actions, but there is no enforced boundary once a tool runs.
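    The scoped-tool principle from the design principles above can be sketched as follows. This is a hypothetical illustration, not Vercel's implementation: the tool names, the `Tool` shape, and the in-memory account store are all invented for the example.

```typescript
// Hypothetical tool shape for the sketch.
type Tool = { name: string; run: (args: Record<string, string>) => string };

// Pretend data store, for illustration only.
const accounts: Record<string, string> = {
  cus_123: "plan: pro",
  cus_999: "plan: enterprise",
};

// Narrow: the harness bakes the customer ID into a closure at session setup.
// No argument the agent supplies can redirect the lookup to another customer.
function makeScopedAccountTool(customerId: string): Tool {
  return {
    name: "lookup_account",
    run: () => accounts[customerId] ?? "not found",
  };
}

// Broad (avoid): the customer ID is an agent-controlled parameter, so a
// prompt injection can swap it for any other customer's ID.
const broadTool: Tool = {
  name: "lookup_account",
  run: (args) => accounts[args.customerId] ?? "not found",
};

const scoped = makeScopedAccountTool("cus_123");
console.log(scoped.run({ customerId: "cus_999" })); // still returns cus_123's data
console.log(broadTool.run({ customerId: "cus_999" })); // injection can reach any account
```

    The difference is where the identifier is bound: in the scoped version it is fixed by trusted harness code before the agent runs; in the broad version it flows through agent-controlled input.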
    Secret injection without sandboxing

    A secret injection proxy sits outside the main security boundary and intercepts outbound network traffic, injecting credentials only as requests travel to their intended endpoint. The harness configures the proxy with the credentials and the domain rules, but the generated code never sees the raw secret values.

    The proxy prevents exfiltration: secrets can't be copied out of the execution context and reused elsewhere. But the proxy doesn't prevent misuse during active runtime. Generated software can still make unexpected API calls using the injected credentials while the system is running.

    Secret injection is a backward-compatible path from a zero-boundaries architecture. You can add the proxy without restructuring how components run. The tradeoff is that the agent and generated code still share the same security context for everything except the secrets themselves.

    Why sandboxing everything together isn't enough

    A natural instinct is to wrap the agent harness and the generated code in a shared VM or sandbox. A shared sandbox isolates both from the broader environment, and that's genuinely useful: generated programs can't infiltrate the wider infrastructure. But in a shared sandbox, the agent and generated program still share the same security context. The generated code can still steal the harness's credentials or, if a secret injection proxy is in place, misuse credentials through the proxy. The sandbox protects the environment from the agent, but doesn't protect the agent from its own generated code.

    Separating agent compute from sandbox compute

    The missing piece is running the agent harness and the programs the agent generates on independent compute, in separate VMs or sandboxes with distinct security contexts. The harness and its secrets live in one context. The filesystem and generated code execution live in another, with no access to the agent's secrets.
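    The header-rewriting step at the heart of a secret injection proxy can be sketched in a few lines. This is an illustrative model, not Vercel's proxy: the rule shape and function name are hypothetical, and a real proxy operates on live network traffic rather than plain objects.

```typescript
// Hypothetical injection rule: for this host, force this header to this value.
type InjectionRule = { host: string; header: string; value: string };

// Rewrite outbound request headers: inject the credential only for allowed
// hosts, and overwrite any same-named header the sandboxed code set itself.
function injectHeaders(
  url: string,
  headers: Record<string, string>,
  rules: InjectionRule[],
): Record<string, string> {
  const host = new URL(url).host;
  const out = { ...headers };
  for (const rule of rules) {
    if (rule.host === host) {
      // Overwriting (not merging) prevents credential substitution attacks.
      out[rule.header] = rule.value;
    }
  }
  return out;
}

const rules: InjectionRule[] = [
  { host: "api.example.com", header: "Authorization", value: "Bearer real-token" },
];

// Generated code tried to set its own Authorization header; the proxy replaces it.
injectHeaders("https://api.example.com/v1/data", { Authorization: "Bearer attacker" }, rules);
// → { Authorization: "Bearer real-token" }

// Requests to hosts with no rule never receive the credential.
injectHeaders("https://evil.example.net/", {}, rules);
// → {}
```

    Because the real value only appears after the request leaves the execution context, code inside that context has nothing to exfiltrate.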
    Both Claude Code and Cursor offer sandboxed execution modes today, but adoption in desktop environments has been low because sandboxing can cause compatibility issues. In the cloud, this separation is more practical. You can give the generated code a VM tailored to the type of software the agent needs to run, which can actually improve compatibility.

    In practice, this separation is a straightforward change. Agents perform tool invocations through an abstraction layer, and that abstraction makes it natural to route code execution to a separate environment without rewriting the agent itself.

    These two workloads have very different compute profiles, which means separating them lets you optimize each one independently. The agent harness spends most of its time waiting on LLM API responses. On Vercel, Fluid compute is a natural fit for this workload because billing pauses during I/O and only counts active CPU time, which keeps costs proportional to actual work rather than billing idle time.

    Generated code has the opposite profile. Agent-created programs are short-lived, unpredictable, and untrusted. Each execution needs a clean, isolated environment so that one program can't access secrets or state left behind by another. Sandbox products like Vercel Sandbox provide this through ephemeral Linux VMs that spin up per execution and are destroyed afterward.

    The VM boundary is what enforces the security context separation. Generated code inside the sandbox has no network path to the harness's secrets and no access to the host environment. The sandbox works in both directions: it shields the agent's secrets from generated code, and shields the broader environment from whatever the generated code does.

    Application sandbox with secret injection

    The strongest architecture combines the application sandbox with secret injection.
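    The abstraction-layer point above can be sketched as an executor interface: the harness writes its "execute code" tool once against the interface, so swapping in-process execution for remote sandbox compute doesn't touch agent logic. All names here are hypothetical; the real Vercel Sandbox SDK has its own API.

```typescript
// Hypothetical executor interface the harness depends on.
interface CodeExecutor {
  run(source: string): Promise<string>;
}

// Zero-boundaries executor: runs in the harness's own process and security
// context, which is what default tooling does today.
class InProcessExecutor implements CodeExecutor {
  async run(source: string): Promise<string> {
    return `ran locally, sharing the harness's context: ${source}`;
  }
}

// Sandbox executor: ships the code to an ephemeral VM with a distinct
// security context. The transport is a stub here; a real implementation
// would call a sandbox provisioning API.
class SandboxExecutor implements CodeExecutor {
  constructor(private dispatch: (source: string) => Promise<string>) {}
  async run(source: string): Promise<string> {
    // Only the source travels; credentials stay with the harness.
    return this.dispatch(source);
  }
}

// The harness's code-execution tool, written once against the interface.
async function executeCodeTool(executor: CodeExecutor, source: string): Promise<string> {
  return executor.run(source);
}
```

    Routing execution to separate compute then becomes a one-line change at harness startup: construct a `SandboxExecutor` instead of an `InProcessExecutor`.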
    The combination gives you two properties that neither achieves alone:

    • Full isolation between the agent harness and generated programs, each running in its own security context.
    • No direct access to credentials for the generated code, which can use secrets through the injection proxy while running but can't read or exfiltrate them. Injected headers overwrite any headers the sandbox code sets with the same name, preventing credential substitution attacks.

    For production agentic systems, we recommend combining both. The agent harness runs as trusted software on standard compute. Generated code runs in an isolated sandbox. Secrets are injected at the network level, never exposed where generated code could access them directly.

    This separation of agent compute from sandbox compute will become the standard architecture for agentic systems. Most teams haven't made this shift yet because the default tooling doesn't enforce it. The teams that draw these boundaries now will have a meaningful security advantage as agents take on more sensitive workloads.

    Safe secret injection is now available on Vercel Sandbox. Read more in the documentation.

    Malte and Harpreet
  • Nov 24

    Security through design: Creating the improved Firewall experience

    At Vercel, we believe security should be intuitive, not intimidating. The best security tool is the one that's actually used. It should be clear, useful, and never in the way. But that's not always the norm. Security tooling can often feel like a tradeoff against shipping velocity. When UX is an afterthought, teams leave tools off or in "logging mode" forever, even when risks are high.

    That's why we've redesigned the Vercel Firewall experience from the ground up. The new UI helps you see more, do more, and feel confident in your app's resilience to attacks.

    Designing for every Vercel user

    The redesign started with listening. Users told us:

    • I want to easily see active DDoS events
    • I need more information on what the Firewall blocked
    • I need a faster way to investigate traffic alerts or spikes

    Developers, SREs, and security teams all use the Firewall for maintenance and troubleshooting. They configure rules, monitor traffic, and respond to unusual activity. The new Firewall UI is designed for everyone using Vercel. It surfaces clear, actionable information, simplifies navigation, and helps teams resolve issues quickly when it matters most.

    A better way to see and secure your traffic

    The new design brings together visibility, context, and control in one view:

    • A redesigned overview page provides a unified, high-signal view of Firewall activity
    • New sidebar navigation offers one click to Overview, Traffic, Rules, and Audit Log
    • Key activity and alert feeds surface unusual patterns and potential threats
    • Improved inspection tools make it faster to move from alert to insight

    A new overview for all security events

    The Overview page is your high-level control center for the Firewall. It gives you a clear, birds-eye view of your site's security posture. The traffic chart remains at the top, and we now surface the most important information based on recent activity.
    Four tables surface key Firewall activity so you can see the current state and act quickly when needed:

    • Alerts shows recently mitigated DDoS attacks
    • Rules displays top rule activity by volume
    • Events lists mitigations taken by the Firewall
    • Denied IPs shows blocked connections by client IP

    Comprehensive traffic intelligence

    The new Traffic page focuses entirely on understanding activity across your site. You can now drill down into the detection signals you care about most, and filter those signals by specific mitigation actions on the Traffic tab. These updates make it easier to spot patterns or anomalies before they become problems. We now surface dedicated feeds for:

    • Top IPs
    • Top JA4 digests
    • Top AS names
    • Top User Agents
    • Top Request Paths
    • Rules with most activity

    Dedicated rules and activity

    Firewall Rules now have a dedicated tab in the sidebar. You can see and manage all of your WAF custom rules in this view, including Bot Protection, Managed Rulesets, IP Blocking, and more. We've also moved the Audit Log to a dedicated tab for full visibility into Firewall changes.

    Faster event inspection

    Clicking an alert or event now opens a detailed view directly in the page. You can dive deeper into Firewall activity and investigate suspicious traffic or DDoS attacks without context switching, helping you diagnose issues faster and take action immediately.

    Security designed for you

    Security is usability. When tools are clear and well-designed, teams act faster and stay safer, without sacrificing shipping velocity. We'd love your feedback. Explore the new Firewall experience today in your Vercel Dashboard and share your thoughts in the Vercel Community.

    Sage, Liz, and 3 others
  • Mar 10

    How we run Vercel's CDN in front of Discourse

    Vercel's CDN can front any application, not just those deployed natively on the platform, and it's simple to set up. This allows you to add firewall protection, DDoS mitigation, and observability to platforms like Discourse or WordPress without migrating them completely.

    The Vercel Community is an example of this architecture. It is a Discourse application hosted elsewhere, but we proxy it ourselves via Vercel's CDN, which both protects the app and gives us access to useful features in Vercel's website stack:

    • Web Analytics gives us anonymized, cookie-free demographic and referrer data, so we can see where users are coming from and what they're looking for.
    • Firewall gives us DDoS protection and has automatically prevented several attacks in the last year.
    • Bot Management lets us block malicious scrapers while allowing trusted crawlers to index the forum and letting community posts show up in ChatGPT searches.

    Some parts of the community platform, like Vercel Community live sessions, run directly on Vercel with Next.js. We use Vercel Microfrontends to mount a Next.js app on the same domain as the Discourse app, for three reasons:

    • To create new pages that would be impractical to implement as CMS plugins.
    • To overwrite existing Discourse pages that we can't fully customize.
    • To keep users authenticated through Sign in with Vercel.

    When the new pages are ready to launch, we add the path to our microfrontends configuration and users are rerouted seamlessly on the next deploy.

    Vercel as a CDN

    To set up Vercel as a CDN proxy like this, you need two domains:

    • Inner host: the origin server where the site is actually hosted. This might look like your-site.discourse.com.
    • Outer host: the Vercel project domain that users interact with, such as community.vercel.com.

    Ensure that all links on the site and its canonical URLs use the outer domain. Once those are in place, create a new project on Vercel that deploys to the outer host.
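    Once the project exists, a catch-all rewrite can proxy every request to the inner host. A minimal sketch, using the hypothetical inner host from above; check the Vercel rewrites documentation for the exact options your project needs:

```json
{
  "rewrites": [
    {
      "source": "/:path*",
      "destination": "https://your-site.discourse.com/:path*"
    }
  ]
}
```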
    You can then use vercel.ts (formerly vercel.json) to rewrite traffic to the inner domain.

    Running multiple apps on a single domain with microfrontends

    To extend the community forum beyond the limits of Discourse, we configured the outer host domain using a vertical microfrontend approach. Vercel's microfrontends allow you to mount different Vercel projects at different route paths. We added a microfrontends.json file that directs traffic for specific routes to separate Vercel projects. Additional pages can be added incrementally, route by route. We also added the .well-known/workflow route to use the Workflow Development Kit for event creation and video processing.

    While you could accomplish some of this by using negative matching in the proxy regex to avoid proxying certain routes, splitting the projects provides better isolation. This approach allows for independent environment variables and organization permissions, locking down the project that talks to the third-party host.

    A modern CDN without a massive migration

    At this point, you have Vercel's CDN standing between your users and your origin server. All traffic flows through Vercel's global network, giving you enterprise-grade security without touching your existing application. You get even more flexibility when you combine this with microfrontends. You now have a path to modernize your application incrementally. Instead of a "big bang" refactor, you can create a Next.js application and turn on specific routes one by one, while your core application continues to run on Discourse, WordPress, or whatever platform it is built on.

    This architecture unlocks a pragmatic path forward: secure your existing investment with Vercel's CDN today, then layer modern features on top tomorrow, all without the risk of a full platform migration. Learn more by reading the Vercel microfrontends documentation or see it in action at community.vercel.com/live.
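    Conceptually, the microfrontends configuration maps route paths to projects. The sketch below shows only the shape of that idea with hypothetical project names; it is not the exact schema, which is defined in the Vercel microfrontends documentation:

```json
{
  "applications": {
    "community-proxy": {},
    "community-live": {
      "routing": [{ "paths": ["/live/:path*"] }]
    }
  }
}
```

    Here a default project proxies the forum while a second project claims one route prefix; adding another page is a matter of adding another path entry and deploying.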

    Jacob Paris

    Latest news.

  • Security
    Mar 10

    How we run Vercel's CDN in front of Discourse

    Vercel's CDN can front any application, not just those deployed natively on the platform, and it's simple to set up. This allows you to add firewall protection, DDoS mitigation, and observability to platforms like Discourse or WordPress without migrating them completely. The Vercel Community is an example of this architecture. It is a Discourse application hosted elsewhere, but we proxy it ourselves via Vercel's CDN, which both protects the app and gives us access to useful features in Vercel's website stack: Web Analytics gives us anonymized, cookie-free demographic and referrer data, so we can see where users are coming from and what they're looking for. Firewall gives us DDoS protection and has automatically prevented several attacks in the last year. Bot Management lets us block malicious scrapers while allowing trusted crawlers to index the forum and allow community posts to show up in ChatGPT searches. Some parts of the community platform, like Vercel Community live sessions, run directly on Vercel with Next.js. We use Vercel Microfrontends to mount a Next.js app on the same domain as the Discourse app, for three reasons: To create new pages that would be impractical to implement as CMS plugins. To overwrite existing Discourse pages that we can't fully customize. To keep users authenticated through Sign in with Vercel When the new pages are ready to launch, we add the path to our microfrontends configuration and users are rerouted seamlessly on the next deploy. Vercel as a CDN To set up Vercel as a CDN proxy like this, you need two domains: Inner host: The origin server where the site is actually hosted. This might look like your-site.discourse.com Outer host: The Vercel project domain that users interact with, such as community.vercel.com Ensure that all links on the site and its canonical URLs use the outer domain. Once those are in place, create a new project on Vercel that deploys to the outer host. 
You can then use vercel.ts (formerly vercel.json) to rewrite traffic to the inner domain. Running multiple apps on a single domain with microfrontends To extend the community forum beyond the limits of Discourse, we configured with the outer host domain using a vertical microfrontend approach. Vercel's microfrontends allow you to mount different Vercel projects to different route paths. We added a microfrontends.json file that directs traffic for specific routes to separate Vercel projects. Additional pages can be added incrementally, route by route. We also added the .well-known/workflow route to use Workflow Development Kit for event creation and video processing. While you could accomplish some of this by using negative matching in the proxy regex to avoid proxying certain routes, splitting the projects provides better isolation. This approach allows for independent environment variables and organization permissions, locking down the project that talks to the third-party host. A modern CDN without a massive migration At this point, you have Vercel's CDN standing between your users and your origin server. All traffic flows through Vercel's global network, giving you enterprise-grade security without touching your existing application. You get even more flexibility when you combine this with microfrontends. You now have a path to modernize your application incrementally. Instead of a "big bang" refactor, you can create a Next.js application and turn on specific routes one by one, while your core application continues to run on Discourse, WordPress, or whatever platform it is built on. This architecture unlocks a pragmatic path forward: secure your existing investment with Vercel's CDN today, then layer modern features on top tomorrow, all without the risk of a full platform migration. Learn more by reading the Vercel microfrontends documentation or see it in action at community.vercel.com/live.

    Jacob Paris
  • Security
    Feb 24

    Security boundaries in agentic architectures

    Most agents today run generated code with full access to your secrets. As more agents adopt coding agent patterns, where they read filesystems, run shell commands, and generate code, they're becoming multi-component systems that each need a different level of trust. While most teams run all of these components in a single security context, because that's how the default tooling works, we recommend thinking about these security boundaries differently. Below we walk through: The actors in agentic systems Where security boundaries should go between them An architecture for running agent and generated code in separate contexts All agents are starting to look like coding agents More agents are adopting the coding agent architecture. These agents read and write to a filesystem. They run bash, Python, or similar programs to explore their environment. And increasingly, agents generate code to solve particular problems. Even agents that aren't marketed as "coding agents" use code generation as their most flexible tool. A customer support agent that generates and runs SQL to look up account data is using the same pattern, just pointed at a database instead of a filesystem. An agent that can write and execute a script can solve a broader class of problems than one limited to a fixed set of tool calls. What goes wrong without boundaries Consider an agent debugging a production issue. The agent reads a log file containing a crafted prompt injection. The injection tells the agent to write a script that sends the contents of ~/.ssh and ~/.aws/credentials to an external server. The agent generates the script, executes it, and the credentials are gone. This is the core risk of the coding agent pattern. Prompt injection gives attackers influence over the agent, and code execution turns that influence into arbitrary actions on your infrastructure. The agent can be tricked into exfiltrating data from the agent's own context, generating malicious software, or both. 
That malicious software can steal credentials, delete data, or compromise any service reachable from the machine the agent runs on. The attack works because the agent, the code the agent generates, and the infrastructure all share the same level of access. To draw boundaries in the right places, you need to understand what these components are and what level of trust each one deserves. Four actors in an agentic system An agentic system has four distinct actors, each with a different trust level. Agent The agent is the LLM-driven runtime defined by its context, tools, and model. The agent runs inside an agent harness, which is the orchestration software, tools, and connections to external services that you build and deploy through a standard SDLC. You can trust the harness the same way you'd trust any backend service, but the agent itself is subject to prompt injection and unpredictable behavior. Information should be revealed on a need-to-know basis, i.e. an agent doesn't need to see database credentials to use a tool that executes SQL. Agent secrets Agent secrets are the credentials the system needs to function, including API tokens, database credentials, and SSH keys. The harness manages these responsibly, but they become dangerous when other components can access them directly. The entire architecture discussion below comes down to which components have a path to these secrets. Generated code execution The programs the agent creates and executes are the wildcard. Generated code can do anything the language runtime allows, which makes it the hardest actor to reason about. These programs may need credentials to talk to outside services, but giving generated code direct access to secrets means any prompt injection or model error can lead to credential theft. Filesystem The filesystem and broader environment are whatever the system runs on, whether a laptop, a VM, or a Kubernetes cluster. 
The environment can trust the harness, but it cannot trust the agent to have full access or run arbitrary programs without a security boundary. These four actors exist in every agentic system. The question is whether you draw security boundaries between them or let them all run in the same trust domain. A few design principles follow from these trust levels: The harness should never expose its own credentials to the agent directly The agent should access capabilities through scoped tool invocations, and those tools should be as narrow as possible. An agent performing support duties for a specific customer should receive a tool scoped to that customer's data, not a tool that accepts a customer ID parameter, since that parameter is subject to prompt injection. Generated programs that need their own credentials are a separate concern, which the architectures below address With these actors and principles in mind, here are the architectures we see in practice, ordered from least to most secure. Zero boundaries: today's default Coding agents like Claude Code and Cursor ship with sandboxes, but these are often off by default. In practice, many developers run agents with no security boundaries. In this architecture, there are no boundaries between any of the four actors. The agent, the agent's secrets, the filesystem, and generated code execution all share a single security context. On a developer's laptop, that means the agent can read .env files and SSH keys. On a server, it means access to environment variables, database credentials, and API tokens. Generated code can steal any of these, delete data, and reach any service the environment can reach. The harness may prompt the user for confirmation before certain actions, but there is no enforced boundary once a tool runs. 
Secret injection without sandboxing A secret injection proxy sits outside the main security boundary and intercepts outbound network traffic, injecting credentials only as requests travel to their intended endpoint. The harness configures the proxy with the credentials and the domain rules, but the generated code never sees the raw secret values. The proxy prevents exfiltration. Secrets can't be copied out of the execution context and reused elsewhere. But the proxy doesn't prevent misuse during active runtime. Generated software can still make unexpected API calls using the injected credentials while the system is running. Secret injection is a backward-compatible path from a zero-boundaries architecture. You can add the proxy without restructuring how components run. The tradeoff is that the agent and generated code still share the same security context for everything except the secrets themselves. Why sandboxing everything together isn't enough A natural instinct is to wrap the agent harness and the generated code in a shared VM or sandbox. A shared sandbox isolates both from the broader environment, and that's genuinely useful. Generated programs can't infiltrate the wider infrastructure. But in a shared sandbox, the agent and generated program still share the same security context. The generated code can still steal the harness's credentials or, if a secret injection proxy is in place, misuse credentials through the proxy. The sandbox protects the environment from the agent, but doesn't protect the agent from the agent's own generated code. Separating agent compute from sandbox compute The missing piece is running the agent harness and the programs the agent generates on independent compute, in separate VMs or sandboxes with distinct security contexts. The harness and the harness's secrets live in one context. The filesystem and generated code execution live in another, with no access to the agent's secrets. 
Both Claude Code and Cursor offer sandboxed execution modes today, but adoption in desktop environments has been low because sandboxing can cause compatibility issues. In the cloud, this separation is more practical. You can give the generated code a VM tailored to the type of software the agent needs to run, which can actually improve compatibility.

In practice, this separation is a straightforward change. Agents perform tool invocations through an abstraction layer, and that abstraction makes it natural to route code execution to a separate environment without rewriting the agent itself.

These two workloads have very different compute profiles, which means separating them lets you optimize each one independently. The agent harness spends most of its time waiting on LLM API responses. On Vercel, Fluid compute is a natural fit for this workload because billing pauses during I/O and only counts active CPU time, which keeps costs proportional to actual work rather than idle time.

Generated code has the opposite profile. Agent-created programs are short-lived, unpredictable, and untrusted. Each execution needs a clean, isolated environment so that one program can't access secrets or state left behind by another. Sandbox products like Vercel Sandbox provide this through ephemeral Linux VMs that spin up per execution and are destroyed afterward.

The VM boundary is what enforces the security context separation. Generated code inside the sandbox has no network path to the harness's secrets and no access to the host environment. The sandbox works in both directions: it shields the agent's secrets from generated code, and shields the broader environment from whatever the generated code does.

Application sandbox with secret injection

The strongest architecture combines the application sandbox with secret injection.
The combination gives you two properties that neither achieves alone:

  • Full isolation between the agent harness and generated programs, each running in its own security context
  • No direct access to credentials for the generated code, which can use secrets through the injection proxy while running but can't read or exfiltrate them. Injected headers overwrite any headers the sandbox code sets with the same name, preventing credential substitution attacks.

For production agentic systems, we recommend combining both. The agent harness runs as trusted software on standard compute. Generated code runs in an isolated sandbox. Secrets are injected at the network level, never exposed where generated code could read them directly.

This separation of agent compute from sandbox compute will become the standard architecture for agentic systems. Most teams haven't made this shift yet because the default tooling doesn't enforce it. The teams that draw these boundaries now will have a meaningful security advantage as agents take on more sensitive workloads.

Safe secret injection is now available on Vercel Sandbox. Read more in the documentation.
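The overwrite behavior is the load-bearing detail of secret injection. Here is a minimal sketch of the injection step, assuming a proxy that sees every outbound request from the sandbox; the host name, header, and token value are illustrative, not a real proxy's configuration:

```typescript
// Sketch of secret injection at the network boundary. Hosts, headers,
// and token values are placeholders for this demo.

interface OutboundRequest {
  url: string;
  headers: Record<string, string>;
}

// Injection rules configured by the harness: one credential per host.
const RULES: Record<string, string> = {
  "api.example.com": "Bearer sk_demo_not_a_real_secret",
};

function injectSecret(req: OutboundRequest): OutboundRequest {
  const host = new URL(req.url).hostname;
  const secret = RULES[host];
  if (!secret) return req; // no rule for this host: pass through untouched

  // Overwrite, never merge: even if the sandboxed code pre-set its own
  // Authorization header, the proxy's value wins, which blocks
  // credential substitution.
  return { ...req, headers: { ...req.headers, Authorization: secret } };
}

const out = injectSecret({
  url: "https://api.example.com/v1/orders",
  headers: { Authorization: "Bearer attacker-controlled" },
});
console.log(out.headers.Authorization); // the proxy's credential, not the sandbox's
```

The sandboxed code only ever observes that its requests to permitted hosts succeed; the raw credential never enters its security context.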

    Malte and Harpreet
  • Security
    Feb 3

    The Vercel OSS Bug Bounty program is now available

Security is foundational to everything we build at Vercel. Our open source projects power millions of applications across the web, from small side projects to demanding production workloads at Fortune 500 companies. That responsibility drives us to keep investing in security for the platform and the broader ecosystem.

Today, we're opening the Vercel Open Source Software (OSS) bug bounty program to the public on HackerOne. We're inviting security researchers everywhere to find vulnerabilities, challenge assumptions, and help us reduce risk for everyone building with these tools.

Since August 2025, we've run a private bug bounty for our open source software with a small group of researchers. That program produced multiple high-severity reports across our Tier 1 projects and helped us refine our processes for triage, fixes, coordinated disclosure, and CVE publication. Now we're ready to expand.

Building on our foundation of security investment

Last fall, we opened a bug bounty program focused on the Web Application Firewall and the React2Shell vulnerability class. Rather than wait for bypasses to surface in the wild, we took a proactive approach: pay security researchers to find them first. That program paid out over $1M across dozens of researchers who helped us find and fix vulnerabilities before attackers could.

The lesson was clear. Good incentives and clear communication turn researchers into partners, not adversaries. Opening our private OSS bug bounty program to the public is the natural next step. Security vulnerabilities in these projects don't just affect Vercel; they affect everyone who builds with these tools. Finding and fixing them protects millions of end users.

Which projects are covered

All Vercel open source projects are in scope. The projects listed below represent the core of the Vercel open source ecosystem. These are the frameworks, libraries, and tools that millions of developers rely on daily.
Core projects included in the HackerOne program:

  • Next.js: React framework for production web applications
  • Nuxt: Vue.js framework for modern web development
  • SWR: React Hooks library for data fetching
  • Svelte: Framework for building user interfaces
  • Turborepo: High-performance build system for monorepos
  • AI SDK: TypeScript toolkit for AI applications
  • vercel (CLI): Command-line interface for the Vercel platform
  • workflow: Durable workflow execution engine
  • flags: Feature flags SDK
  • ms: Tiny millisecond conversion utility
  • nitrojs: Universal server engine
  • async-sema: Semaphore for async operations
  • skills: The open agent skills tool (npx skills)

These are the projects where vulnerabilities have the highest potential impact, and where we prioritize incident response, vulnerability management, and CVE publication.

How to participate

If you’re a security researcher ready to start hunting, visit HackerOne to find everything you need: scope details, reward ranges, and submission guidelines. When you find a vulnerability, submit it through HackerOne with clear reproduction steps. Our security team reviews every submission and works directly with researchers through the disclosure process. We're committed to fast response times and transparent communication.

We appreciate the researchers who take the time to dig into our code and report issues responsibly. Your work helps keep these projects safer for everyone. Join our bug bounty program or learn more about security at Vercel.

    Andy Riancho
  • Security
    Dec 19

    Our $1 million hacker challenge for React2Shell

    In the weeks following React2Shell's disclosure, our firewall blocked over 6 million exploit attempts targeting deployments running vulnerable versions of Next.js, with 2.3 million in a single 24-hour period at peak. This was possible thanks to Seawall, the deep request inspection layer of the Vercel Web Application Firewall (WAF). We worked with 116 security researchers to find every WAF bypass they could, paying out over $1 million and shipping 20 unique updates to our WAF in 48 hours as new techniques were reported. The bypass techniques they discovered are now permanent additions to our firewall, protecting every deployment on the platform. But WAF rules are only the first line of defense. We are now disclosing for the first time an additional defense-in-depth against RCE on the Vercel platform that operates directly on the compute layer. Data from this defense-in-depth allows us to state with high confidence that the WAF was extraordinarily effective against exploitation of React2Shell. This post is about what we built to protect our customers and what it means for security on Vercel going forward.

    Malte Ubl
  • Security
    Nov 24

    Security through design: Creating the improved Firewall experience

At Vercel, we believe security should be intuitive, not intimidating. The best security tool is the one that's actually used. It should be clear, useful, and never in the way. But that's not always the norm. Security tooling can often feel like a tradeoff against shipping velocity. When UX is an afterthought, teams leave tools off or in "logging mode" forever, even when risks are high.

That's why we've redesigned the Vercel Firewall experience from the ground up. The new UI helps you see more, do more, and feel confident in your app's resilience to attacks.

Designing for every Vercel user

The redesign started with listening. Users told us:

  • I want to easily see active DDoS events
  • I need more information on what the Firewall blocked
  • I need a faster way to investigate traffic alerts or spikes

Developers, SREs, and security teams all use the Firewall for maintenance and troubleshooting. They configure rules, monitor traffic, and respond to unusual activity. The new Firewall UI is designed for everyone using Vercel. It surfaces clear, actionable information, simplifies navigation, and helps teams resolve issues quickly when it matters most.

A better way to see and secure your traffic

The new design brings together visibility, context, and control in one view:

  • A redesigned overview page provides a unified, high-signal view of Firewall activity
  • New sidebar navigation offers one-click access to Overview, Traffic, Rules, and Audit Log
  • Key activity and alert feeds surface unusual patterns and potential threats
  • Improved inspection tools make it faster to move from alert to insight

A new overview for all security events

The Overview page is your high-level control center for the Firewall. It gives you a clear, birds-eye view of your site’s security posture. The traffic chart remains at the top, and we now surface the most important information based on recent activity.
Four tables surface key Firewall activity so you can see the current state and act quickly when needed:

  • Alerts shows recently mitigated DDoS attacks
  • Rules displays top rule activity by volume
  • Events lists mitigations taken by the Firewall
  • Denied IPs shows blocked connections by client IP

Comprehensive traffic intelligence

The new Traffic page focuses entirely on understanding activity across your site. You can now drill down into the detection signals you care about most, and filter those signals by specific mitigation actions on the Traffic tab. These updates make it easier to spot patterns or anomalies before they become problems. We now surface dedicated feeds for:

  • Top IPs
  • Top JA4 digests
  • Top AS names
  • Top User Agents
  • Top Request Paths
  • Rules with most activity

Dedicated rules and activity

Firewall Rules now have a dedicated tab in the sidebar. You can see and manage all of your WAF custom rules in this view, including Bot Protection, Managed Rulesets, IP Blocking, and more. We’ve also moved the Audit Log to a dedicated tab for full visibility into Firewall changes.

Faster event inspection

Clicking an alert or event now opens a detailed view directly in the page. You can dive deeper into Firewall activity and investigate suspicious traffic or DDoS attacks without context switching, helping you diagnose issues faster and take action immediately.

Security designed for you

Security is usability. When tools are clear and well-designed, teams act faster and stay safer, without sacrificing shipping velocity. We'd love your feedback. Explore the new Firewall experience today in your Vercel Dashboard and share your thoughts in the Vercel Community.

    Sage, Liz, and 3 others
  • Security
    Aug 13

    The three types of AI bot traffic and how to handle them

    AI bot traffic is growing across the web. We track this in real-time, and the data reveals three types of AI-driven crawlers that often work independently but together create a discovery flywheel that many teams disrupt without realizing it. Not all bots are harmful. Crawlers have powered search engines for decades, and we've spent just as long optimizing for them. Now, large language models (LLMs) need training data, and the AI tools built on them need timely, relevant updates. This is the next wave of discoverability and getting it right from the start can determine whether AI becomes a growth channel or a missed opportunity. Blocking AI crawlers today is like blocking search engines in the early days and then wondering why organic traffic vanishes. As users shift from Googling for web pages to prompting for direct answers and cited sources, the advantage will go to sites that understand each type of bot and choose where access creates value.

    Kevin Corbett
  • Security
    May 23

    Vercel security roundup: improved bot defenses, DoS mitigations, and insights

    Since February, Vercel blocked 148 billion malicious requests from 108 million unique IP addresses. Every deployment automatically inherits these protections, keeping your workloads secure by default and enabling your team to focus on shipping rather than incidents. Our real-time DDoS filtering, managed Web Application Firewall (WAF), and enhanced visibility ensure consistent, proactive security. Here's what's new since February.

    Liz and Kevin
  • Security
    Apr 23

    Bot Protection: One-click managed ruleset now in public beta

    The Vercel Web Application Firewall (WAF) inspects billions of requests every day to block application-layer threats, such as cross-site scripting, traversal, and application DDoS attacks. While we already inspect and block malicious bot traffic, we wanted to provide better, more precise controls to fine tune your application security. Today, we're launching the Bot Protection managed ruleset, free for all users on all plans. With a single click, you can protect your application from bot attacks.

    Malavika and Liz
  • Security
    Apr 7

    Protectd: Evolving Vercel’s always-on denial-of-service mitigations

    Securing web applications is core to the Vercel platform. It’s built into every request, every deployment, every layer of our infrastructure. Our always-on Denial-of-Service (DoS) mitigations have long run by default—silently blocking attacks before they ever reach your applications. Last year, we made those always-on mitigations visible with the release of the Vercel Firewall, which allows you to inspect traffic, apply custom rules, and understand how the platform defends your deployments. Now, we’re introducing Protectd, our next-generation real-time security engine. Running across all deployments, Protectd reduces mitigation times for novel DoS attacks by over tenfold, delivering faster, more adaptive protection against emerging threats. Let's take a closer look at how Protectd extends the Vercel Firewall by continuously mapping complex relationships between traffic attributes, analyzing, and learning from patterns to predict and block attacks.

    Casey and Joe
  • Security
    Aug 2

    Protecting your app (and wallet) against malicious traffic

Let's explore how to block traffic with the Firewall, set up soft and hard spend limits, apply code-level optimizations, and more to protect your app against bad actors. If you’re on our free tier, you don’t need to worry: when your app passes the included free usage, it is automatically paused and never charged.

Configurable Firewall rules

You can create custom Firewall rules to log, block, or challenge traffic to your app. Custom rules are available on all plans at no additional charge. Rules can be based on 15+ fields, including request path, user agent, IP address, JA4 fingerprint, geolocation, HTTP headers, and even target path.

Blocking traffic based on IP address

For example, let’s say you notice some strange traffic from a single IP. You can create a custom rule to deny traffic from this IP addre...
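As a generic illustration of the matching logic, an IP-based deny rule amounts to a predicate over request fields. This sketch is not the Vercel Firewall's actual rule engine or syntax; the field names are illustrative and the blocked address is a TEST-NET placeholder:

```typescript
// Generic sketch of firewall-style rule matching, not Vercel's engine.

interface RequestInfo {
  ip: string;
  path: string;
  userAgent: string;
}

type Action = "log" | "deny" | "challenge" | "allow";

// A deny rule for one suspicious IP (203.0.113.0/24 is reserved for docs).
const DENY_IPS = new Set(["203.0.113.7"]);

function evaluateRules(req: RequestInfo): Action {
  if (DENY_IPS.has(req.ip)) return "deny";
  return "allow"; // default: let the request through
}

console.log(evaluateRules({ ip: "203.0.113.7", path: "/", userAgent: "curl/8" }));
// -> "deny"
```

Real firewall rules compose many such conditions (path, user agent, JA4 fingerprint, geolocation) and attach an action per rule; the structure stays the same.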

    Lee Robinson
  • Security
    Dec 19

    Deployment Protection: Added security controls now available on all plans

Today we're thrilled to announce added privacy controls across all plans, including the ability to secure your preview deployments behind Vercel Authentication with just one click.

  • Protect previews for free across all plans with Deployment Protection
  • New ways to access and collaborate with Shareable Links and Vercel Authentication
  • Advanced Deployment Protection: Better E2E testing, private Production Deployments, and Password Protection

Alongside this announcement, we're adding a new set of features that make it frictionless for team members and external collaborators to work together on protected previews: Shareable Links, and Advanced Deployment Protection, now availabl...

    Kit, Balazs, and 3 others
