Vercel Blog

    Featured articles

  • Sep 9

    A more flexible Pro plan for modern teams

    We’re updating Vercel’s Pro plan to better align with how modern teams collaborate, how applications consume infrastructure, and how workloads are evolving with AI. Concretely, we’re making the following changes:

    • A flexible spending model: instead of discrete included allotments across 20+ infra products, we’re transitioning to a flexible usage balance.
    • Free Viewer seats: teammates can access previews, analytics, and more without needing to pay for a seat.
    • Self-serve Enterprise features: SAML SSO, HIPAA BAAs, and more are now available to everyone on the Pro plan. No need to contact sales.
    • Better Spend Management: Spend Management is now enabled by default, to provide peace of mind against rare runaway u...

    Tom Occhino
  • Jul 10

    The AI Cloud: A unified platform for AI workloads

    For over a decade, Vercel has helped teams develop, preview, and ship everything from static sites to full-stack apps. That mission shaped the Frontend Cloud, now relied on by millions of developers and powering some of the largest sites and apps in the world. Now, AI is changing what and how we build. Interfaces are becoming conversations and workflows are becoming autonomous. We've seen this firsthand while building v0 and working with AI teams like Browserbase and Decagon. The pattern is clear: developers need expanded tools, new infrastructure primitives, and even more protections for their intelligent, agent-powered applications. At Vercel Ship, we introduced the AI Cloud: a unified platform that lets teams build AI features and apps with the right tools to stay flexible, move fast, and be secure, all while focusing on their products, not infrastructure.

    Dan Fein
  • Aug 21

    AI Gateway: Production-ready reliability for your AI apps

    Building an AI app can now take just minutes. With developer tools like the AI SDK, teams can build both AI frontends and backends that accept prompts and context, reason with an LLM, call actions, and stream back results. But going to production requires reliability and stability at scale. Teams that connect directly to a single LLM provider for inference create a fragile dependency: if that provider goes down or hits rate limits, so does the app. As AI workloads become mission-critical, the focus shifts from integration to reliability and consistent model access. Fortunately, there's a better way to run. AI Gateway, now generally available, ensures availability when a provider fails, avoiding low rate limits and providing consistent reliability for AI workloads. It's the same system that has powered v0.app for millions of users, now battle-tested, stable, and ready for production for our customers.

    Walter and Harpreet
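The reliability pattern described above, falling back to another provider when one fails or hits rate limits, can be sketched generically. This is an illustrative model of the failover idea, not AI Gateway's implementation; the `Provider` type and function names are hypothetical:

```typescript
// Illustrative sketch of provider failover (not AI Gateway's actual code).
// Each provider is modeled as a function that either answers or throws.
type Provider = (prompt: string) => Promise<string>;

async function withFallback(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const call of providers) {
    try {
      return await call(prompt); // first healthy provider wins
    } catch (err) {
      lastError = err; // outage or rate limit: fall through to the next provider
    }
  }
  throw lastError; // every provider failed
}
```

A real gateway would also track provider health, rate-limit budgets, and retry backoff; the sketch only captures the ordering-and-fallback core.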

    Latest news.

  • Company News
    Oct 15

    Agents at work, a partnership with Salesforce and Slack

    Every generation of software moves interfaces closer to where people think and work. Terminals gave way to GUIs. GUIs gave way to browsers. And now, the interface is language itself. Conversation has become the most natural way to build, explore, and decide. At the center of this shift is a new pattern: the AI agent. Today, software no longer waits for clicks or configuration; it understands user intent, reasons about it, and takes action. The question for enterprises isn’t if they’ll adopt agents, but where those agents will live. Our answer: where work already happens. That’s why Vercel and Salesforce are partnering to help teams build, ship, and scale AI agents across the Salesforce ecosystem, starting with Slack. Together, we’re bringing the intelligence and flexibility of the Vercel AI Cloud to the places teams collaborate every day.

    Zack, Matt, and Dan
  • General
    Oct 15

    Running Next.js inside ChatGPT: A deep dive into native app integration

    When OpenAI announced the Apps SDK with Model Context Protocol (MCP) support, it opened the door to embedding web applications directly into ChatGPT. But there's a significant difference between serving static HTML in an iframe and running a full Next.js application with client-side navigation, React Server Components, and dynamic routing. This is the story of how we bridged that gap. We created a Next.js app that runs natively inside ChatGPT's triple-iframe architecture, complete with navigation and all the modern features you'd expect from a Next.js application.

    Andrew Qu
  • General
    Oct 15

    Talha Tariq joins Vercel as CTO of Security

    As AI reshapes how software is built and deployed, the surface area for attacks is growing rapidly. Developers are shipping faster than ever, and we’re seeing new code paths, new threat models, and new vulnerabilities. That’s why I’m excited to share that Talha Tariq is joining Vercel as our CTO of Security. Talha brings deep expertise in security at scale, having served as CISO & CIO at HashiCorp for seven years before becoming CTO (Security) at IBM following its acquisition. There, he oversaw security across all IBM divisions including software, AI, and post-quantum cryptography.

    Guillermo Rauch
  • General
    Oct 15

    Just another (Black) Friday

    For teams on Vercel, Black Friday is just another Friday. The scale changes, but your storefronts and apps stay fast, reliable, and ready for spikes in traffic. Many of the optimizations required for peak traffic are already built into the platform. Rendering happens at the edge, caching works automatically, and protection layers are on by default. What’s left for teams is refinement: confirming observability is set up, tightening security rules, and reviewing the dashboards that matter most. Last year, Vercel created a live Black Friday Cyber Monday dashboard that showcased our scale in real time. Overall, from Friday to Thursday, Vercel served 86,702,974,965 requests across its network, reaching a peak of 1,937,097 requests per second. Helly Hansen, a major technical apparel brand, entered the weekend with this confidence. Before the event, they moved from client-heavy rendering to Vercel’s CDN and saw:

    Sharon and Dan
  • General
    Oct 9

    Server rendering benchmarks: Fluid Compute and Cloudflare Workers

    Independent developer Theo Browne recently published comprehensive benchmarks comparing server-side rendering performance between Fluid compute and Cloudflare Workers. The tests measured 100 iterations across Next.js, React, SvelteKit, and other frameworks. The results showed that for compute-bound tasks, Fluid compute performed 1.2 to 5 times faster than Cloudflare Workers, with more consistent response times.

    Kevin, Dan, and Eric
  • Company News
    Sep 30

    Towards the AI Cloud: Our Series F

    Today, Vercel announced an important milestone: a Series F funding round valuing our company at $9.3 billion. The $300M investment is co-led by longtime partners at Accel and new investors at GIC, alongside other incredible supporters. We're also launching a ~$300M tender offer for certain early investors, employees, and former employees. To all the customers, investors, and Vercelians who have been on this journey with us: thank you.

    Guillermo Rauch
  • General
    Sep 29

    Collaborating with Anthropic on Claude Sonnet 4.5 to power intelligent coding agents

    Claude Sonnet 4.5 is now available on Vercel AI Gateway with full support in AI SDK. We’ve been testing the model in v0, across our Next.js build pipelines, and inside our new Coding Agent Platform template. The model shows improvements in design sensibility and code quality, with measurable gains when building and linting Next.js applications. Claude Sonnet 4.5 builds on Anthropic's strengths in reasoning and coding. When paired with the Vercel AI Cloud, it powers a new class of developer workflows where AI can plan, execute, and ship changes safely inside your repositories.

    Dan, Chris, and Harpreet
  • Engineering
    Sep 25

    Preventing the stampede: Request collapsing in the Vercel CDN

    When you deploy a Next.js app with Incremental Static Regeneration (ISR), pages get regenerated on-demand after their cache expires. ISR lets you get the performance benefits of static generation while keeping your content fresh. But there's a problem. When many users request the same ISR route at once and the cache is expired, each request can trigger its own function invocation. This is called a "cache stampede." It wastes compute, overloads your backend, and can cause downtime. The Vercel CDN now prevents this with request collapsing. When multiple requests hit the same uncached path, only one request per region invokes a function. The rest wait and get the cached response. Vercel automatically infers cacheability for each request through framework-defined infrastructure, configuring our globally distributed router. No manual configuration needed.

    Sachin Raja
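The stampede-prevention mechanism described above can be modeled in a few lines. This is a hedged sketch of the idea, not the Vercel CDN's code: concurrent requests for the same uncached path share a single in-flight regeneration, so only one invocation happens per key:

```typescript
// Illustrative request collapsing: concurrent requests for the same
// uncached key await one shared in-flight computation.
const inFlight = new Map<string, Promise<string>>();
let invocations = 0;

// Stand-in for an ISR regeneration (a function invocation that re-renders a page).
async function regenerate(path: string): Promise<string> {
  invocations++;
  await new Promise<void>((r) => setTimeout(r, 10)); // simulate render work
  return `rendered:${path}`;
}

function collapsed(path: string): Promise<string> {
  const existing = inFlight.get(path);
  if (existing) return existing; // join the in-flight regeneration
  const p = regenerate(path).finally(() => inFlight.delete(path));
  inFlight.set(path, p);
  return p;
}
```

Several concurrent `collapsed("/a")` calls all resolve from one `regenerate` invocation; the `finally` cleanup ensures that a later request, after the cache entry expires again, starts a fresh regeneration.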
  • General
    Sep 22

    BotID uncovers hidden SEO poisoning

    Your traffic is spiking and you spot suspicious bot activity in your logs. You deploy BotID expecting to find malicious scrapers, but the results show verified Google bots. Normal crawlers doing their job. But then you notice what they're actually searching for on your site. Queries that have nothing to do with your business. What do you do? This exact scenario recently played out at one of the largest financial institutions in the world. What they discovered was a years-old SEO attack still generating suspicious traffic patterns.

    Andrew and Kevin
  • Engineering
    Sep 19

    How we made global routing faster with Bloom filters

    Recently, we shipped an optimization to our global routing service that reduced its memory usage by 15%, improved time-to-first-byte (TTFB) at the 75th percentile and above by 10%, and significantly improved routing speeds for websites with many static paths. A small number of websites, with hundreds of thousands of static paths, were creating a bottleneck that slowed down our entire routing service. By replacing a slow JSON parsing operation with a Bloom filter, we brought path lookup latency down to nearly zero and improved performance for everyone.

    Matthew and Tim
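The core trick, answering "definitely not present" without parsing the full path list, can be sketched with a small Bloom filter. The sizes, hash function, and class names here are illustrative assumptions, not Vercel's implementation:

```typescript
// Minimal Bloom filter sketch: a bit array plus k seeded hashes.
// Membership checks can return false positives but never false negatives.
class BloomFilter {
  private bits: Uint8Array;

  constructor(private size: number, private hashes: number) {
    this.bits = new Uint8Array(size);
  }

  // FNV-1a-style hash, varied by seed to simulate k independent hash functions.
  private hash(s: string, seed: number): number {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < s.length; i++) {
      h ^= s.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.size;
  }

  add(s: string): void {
    for (let i = 0; i < this.hashes; i++) this.bits[this.hash(s, i)] = 1;
  }

  mightContain(s: string): boolean {
    for (let i = 0; i < this.hashes; i++) {
      if (!this.bits[this.hash(s, i)]) return false; // definitely absent
    }
    return true; // possibly present
  }
}
```

A router would consult `mightContain` first and fall back to the expensive exact lookup only on a hit: a false positive costs one wasted lookup, while the common miss case skips the lookup entirely.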
  • v0
    Sep 18

    What you need to know about vibe coding

    In February 2025, Andrej Karpathy introduced the term vibe coding: a new way of coding with AI, “[where] you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” Just months later, vibe coding is completely reshaping how developers and non-developers work. Over 90% of U.S. developers use AI coding tools, adoption is accelerating for other roles, and English has become the fastest growing programming language in the world. We explore this shift in detail in our new State of Vibe Coding. Here are a few of the key takeaways.

    Zeb and Keith
  • General
    Sep 18

    Scale to one: How Fluid solves cold starts

    Cold starts have long been the Achilles’ heel of traditional serverless. It’s not just the delay itself, but when the delay happens. Cold starts happen when someone new discovers your app, when traffic is just starting to pick up, or during those critical first interactions that shape whether people stick around or convert. Traditional serverless platforms shut down inactive instances after a few minutes to save costs. But then when traffic returns, users are met with slow load times while new instances spin up. This is where developers would normally have to make a choice. Save money at the expense of unpredictable performance, or pay for always-on servers that increase costs and slow down scalability. But what if you didn't have to choose? That’s why we built a better way. Powered by Fluid compute, Vercel delivers zero cold starts for 99.37% of all requests. Fewer than one request in a hundred will ever experience a cold start. If they do happen, they are faster and shorter-lived than on a traditional serverless platform. Through a combination of platform-level optimizations, we've made cold starts a solved problem in practice. What follows is how that’s possible and why it works at every scale.

    Malte and Tom
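One way to picture the contrast with traditional serverless (a hedged illustration, not Vercel's actual scheduler): if a warm instance can serve many concurrent requests, a traffic burst reuses existing capacity instead of booting new instances:

```typescript
// Illustrative model only: traditional serverless pairs one request with one
// instance, while a Fluid-style pool lets a warm instance absorb concurrency,
// so cold boots happen only when the pool is saturated.
class Instance {
  active = 0;
  constructor(public readonly coldStarted: boolean) {}
}

class FluidPool {
  private instances: Instance[] = [];

  acquire(maxConcurrency: number): Instance {
    // Prefer a warm instance with spare concurrency before booting a new one.
    const warm = this.instances.find((i) => i.active < maxConcurrency);
    if (warm) {
      warm.active++;
      return warm;
    }
    const fresh = new Instance(true); // cold start only when saturated
    fresh.active = 1;
    this.instances.push(fresh);
    return fresh;
  }

  release(i: Instance): void {
    i.active--;
  }
}
```

In this toy model, a burst of requests below the concurrency limit all land on the same warm instance, which is the intuition behind cold starts becoming rare rather than per-request.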
