
One of the benefits of using Vercel is that your infrastructure scales as needed, depending on the demand from users. Using your preferred framework, you can take advantage of Vercel's framework-defined infrastructure to implement a function for tasks such as:

  • API Creation – Build APIs like user-specific data fetchers interacting with databases
  • Data Processing – Manage intensive tasks, such as image/video manipulation, without impeding application performance
  • Webhooks – Act as webhook endpoints, processing data from third-party services like GitHub
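As a sketch of the first item, an API-style function might look like the following. The file path `api/hello.ts` and the `name` parameter are hypothetical; the Web-standard `Request`/`Response` objects shown are available in both the Node.js 18+ and Edge runtimes.

```typescript
// Hypothetical api/hello.ts – a minimal API-style Vercel Function.
// In a real app this handler would query a database instead of
// building the payload inline.
export function GET(request: Request): Response {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get("name") ?? "world";
  // Respond with user-specific JSON data.
  return Response.json({ greeting: `Hello, ${name}!` });
}
```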

Vercel supports multiple runtimes for your functions. Each runtime has its own set of libraries, APIs, and functionality that provides different trade-offs and benefits.

Runtimes transform your source code into Functions, which are served by our Edge Network.

Runtime configuration is usually only necessary when you want to use the Edge Runtime.

Vercel supports these official Runtimes:

  • Node.js Runtime – Takes an entrypoint of a Node.js function, builds its dependencies (if any), and bundles them into a Serverless Function
  • Edge Runtime – A lightweight runtime that exposes a set of Web Standard APIs that make sense on the server
  • Go Runtime – Takes in a Go program that defines a singular HTTP handler and outputs it as a Serverless Function
  • Python Runtime – Takes in a Python program that defines a singular HTTP handler and outputs it as a Serverless Function
  • Ruby Runtime – Takes in a Ruby program that defines a singular HTTP handler and outputs it as a Serverless Function

If you would like to use a language that Vercel does not support by default, you can use a community runtime by setting the functions property in vercel.json. For more information on configuring other runtimes, see Configuring your function runtime.
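As an illustration, a vercel.json that routes PHP files to the community vercel-php runtime might look like the following (the version pin is an example; check the runtime's own documentation for the current release):

```json
{
  "functions": {
    "api/**/*.php": {
      "runtime": "vercel-php@0.6.0"
    }
  }
}
```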

The following community runtimes are recommended by Vercel:

You can create a community runtime by using the Runtime API. Alternatively, you can use the Build Output API.

A runtime can retain an archive of up to 100 MB of the filesystem at build time. The cache key is generated as a combination of:

  • Project name
  • Team ID or User ID
  • Entrypoint path (e.g., api/users/index.go)
  • Runtime identifier including version (e.g., @vercel/go@0.0.1)

The cache will be invalidated if any of those items changes. You can bypass the cache by running vercel -f.

When using functions on Vercel, you can choose which runtime you want to use:

Usually, when writing TypeScript or JavaScript functions, you'll be deciding between the Node.js or Edge Runtime. The following sections provide information on the trade-offs and benefits of each.

Node.js-powered functions are suited to computationally intense or large functions and provide benefits like:

  • More RAM and CPU power – For computationally intense workloads, or functions with bundles up to 250 MB in size, this runtime is ideal
  • Complete Node.js compatibility – The Node.js runtime offers access to all Node.js APIs, making it a powerful tool for many applications, although functions may take longer to boot than those using the Edge Runtime
In our documentation and this guide, we mention Serverless Functions. These are Node.js-powered Vercel Functions. To learn how to implement these functions, see the quickstart.
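For instance, a Serverless Function can freely use Node.js built-ins such as node:crypto, which the Edge Runtime does not expose. A minimal sketch (the file name api/checksum.ts and the q parameter are hypothetical):

```typescript
// Hypothetical api/checksum.ts – a Node.js-runtime function sketch.
// Node.js built-ins such as crypto are available here, unlike in the
// Edge Runtime.
import { createHash } from "node:crypto";

export function GET(request: Request): Response {
  const { searchParams } = new URL(request.url);
  const input = searchParams.get("q") ?? "";
  // Hash the query parameter and return the digest as JSON.
  const digest = createHash("sha256").update(input).digest("hex");
  return Response.json({ sha256: digest });
}
```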

Edge Runtime-powered functions can be a more cost-effective, performant option and provide benefits like:

  • Lightweight with minimal cold starts – With a smaller API surface and the use of V8 isolates, Edge Runtime-powered functions are fast. However, only a subset of Node.js APIs is exposed
  • Globally distributed by default – Vercel deploys all Edge Functions globally across its Edge Network, which means your site's visitors will get API responses from data centers geographically near them, typically reducing the overall response time
  • Pricing is based on compute time – You're charged for the time spent processing requests, not for the time your function spends waiting while fetching data. This is ideal for querying databases or AI services that may have longer request times
In our documentation and this guide, we mention Edge Functions. These are Edge Runtime-powered Vercel Functions. To learn how to implement these functions, see the quickstart.
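A minimal Edge Function sketch might look like this (the file name is hypothetical; the config export is how a function opts into the Edge Runtime):

```typescript
// Hypothetical api/edge-hello.ts – runs on the Edge Runtime.
export const config = { runtime: "edge" };

export default function handler(request: Request): Response {
  // Only Web-standard APIs are available here; Node.js built-ins are not.
  const { pathname } = new URL(request.url);
  return new Response(`Hello from the edge: ${pathname}`, {
    headers: { "content-type": "text/plain" },
  });
}
```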
| | Serverless Functions (Node.js and more) | Edge Functions (Edge Runtime) |
| --- | --- | --- |
| Runtime | Node.js. Can also support Go, Ruby, Python | Edge Runtime |
| Location | Deployed as region-first, can customize location. Enterprise teams can set multiple regions | Deployed global-first, customizable to run as regional |
| Failover | Automatic failover to defined regions | Automatic global failover |
| Concurrency | Auto-scales up to 30,000 concurrency | Unlimited concurrency |
| Isolation boundary | MicroVM | V8 isolate |
| File system | Read-only, with writable /tmp scratch space | No file system support |
| Architecture | Supports x86_64 and arm64 | N/A |
| Functions per deployment | Hobby: Framework-dependent*, Pro and Ent: No limit | No limit |

A runtime is the environment in which your functions execute. Vercel supports several runtimes for Serverless Functions (Node.js, Go, Ruby, Python), while Edge Functions use the lightweight Edge Runtime.

This means that with Serverless Functions you have access to all Node.js APIs. With Edge Functions you get access to a subset of the most important browser APIs.

Cold starts refer to the delay that occurs when an inactive Function is invoked for the first time, as the function has to be initialized and loaded into memory.

When a function uses the lightweight Edge Runtime, it needs fewer resources to initialize and therefore can have faster cold starts than Node.js-powered functions.

Location refers to where your functions are executed. Serverless Functions are region-first, while Edge Functions are executed close to the end-users across Vercel's global network.

When you deploy Edge Functions, you need to consider where they are deployed and executed. Edge Functions are executed globally, in a region close to the user's request. However, if your data source is geographically far from that region, responses will be slow. Because of this, you can opt to execute your function closer to your data source.

Users on Enterprise plans can deploy Serverless Functions to multiple regions. On non-Enterprise plans, deploying to multiple regions will fail before entering the build step. Users on any plan can deploy Edge Functions to multiple regions.

Vercel's failover mode refers to the system's behavior when a function fails to execute because of data center downtime.

Vercel provides redundancy and automatic failover for Edge Functions to ensure high availability. For Serverless Functions, you can use the functionFailoverRegions configuration in your vercel.json file to specify which regions the function should automatically failover to.
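For example, a vercel.json enabling failover for Serverless Functions might look like this (the region IDs are illustrative):

```json
{
  "functionFailoverRegions": ["iad1", "sfo1"]
}
```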

The concurrency model on Vercel refers to how many instances of your functions can run simultaneously. All functions on Vercel scale automatically based on demand to manage increased traffic loads.

With automatic concurrency scaling, your Vercel Functions can scale to a maximum of 30,000 concurrent executions, maintaining optimal performance during traffic surges. Scaling is governed by the burst concurrency limit of 1,000 concurrent executions per 10 seconds, per region.

Vercel's infrastructure monitors your usage and preemptively adjusts the concurrency limit to cater to growing traffic, allowing your applications to scale without your intervention.

Automatic concurrency scaling is available on all plans.

Burst concurrency refers to Vercel's ability to temporarily handle a sudden influx of traffic by allowing a higher concurrency limit.

Upon detecting a traffic spike, Vercel temporarily increases the concurrency limit to accommodate the additional load. The initial increase allows for a maximum of 1000 concurrent executions per 10 seconds, scaling up to a total of 30,000 concurrent executions, if necessary. After the traffic burst subsides, the concurrency limit gradually returns to its previous state, ensuring a smooth scaling experience.

The scaling process may take several minutes during traffic surges, especially substantial ones. While this delay aligns with natural traffic curves to minimize potential impact on your application's performance, it's advisable to monitor the scaling process for optimal operation.

You can monitor burst concurrency events using Log Drains or Runtime Logs to help you understand and optimize your application's performance.

If you exceed the 30,000 limit, a 429 FUNCTION_RATE_LIMIT error is returned. Alternatively, you can explore Edge Functions, which do not have concurrency limits.

In Vercel, the isolation boundary refers to the separation of individual instances of a function to ensure they don't interfere with each other. This provides a secure execution environment for each function.

With traditional serverless infrastructure, each function uses a MicroVM for isolation, which provides strong security but also makes them slower to start and more resource intensive. As the Edge Runtime is built on the V8 engine, it uses V8 isolates to separate just the runtime context, allowing for quick startup times and high performance.

Filesystem support refers to a function's ability to read and write to the filesystem. Serverless Functions have a read-only filesystem with writable /tmp scratch space up to 500 MB. Edge Functions do not have filesystem access due to their ephemeral nature.

Serverless Functions support the x86_64 and arm64 instruction sets.

Framework authors are responsible for setting the correct architecture based on the build system used when producing the function output.

Serverless Functions are archived when they are not invoked.

Archived functions will be unarchived when they're invoked, which can make the initial cold start time at least 1 second longer than usual.

Edge Functions are not archived.

When using Next.js or SvelteKit on Vercel, dynamic code (APIs, server-rendered pages, or dynamic fetch requests) will be bundled into the fewest number of Serverless Functions possible, to help reduce cold starts. Because of this, it's unlikely that you'll hit the limit of 12 bundled Serverless Functions per deployment.

When using other frameworks, or Serverless Functions directly without a framework, every API maps directly to one Serverless Function. For example, having five files inside api/ would create five Serverless Functions. For Hobby, this approach is limited to 12 Serverless Functions per deployment.

| | Serverless Functions (Node.js and more) | Edge Functions (Edge Runtime) |
| --- | --- | --- |
| Maximum size | 250 MB | Hobby: 1 MB, Pro: 2 MB, Ent: 4 MB |
| Maximum duration | Hobby: 10s, Pro: 15s (default), configurable up to 300s, Ent: 900s | 25s (to begin returning a response, but can continue streaming data) |
| Maximum memory | Hobby: 1024 MB, Pro and Ent: 3008 MB | 128 MB |
| Environment variables size | 64 KB | 64 KB |
| Request body size | 4.5 MB | 4 MB |

Vercel places restrictions on the maximum size of the deployment bundle for functions to ensure that they execute in a timely manner.

For Serverless Functions, the maximum uncompressed size is 250 MB, including layers, which are added automatically depending on the runtime. These limits are enforced by AWS.

You can use includeFiles and excludeFiles to specify items that may affect the function size; however, the limits themselves cannot be configured. These configurations are not supported in Next.js; instead, use outputFileTracingIncludes.
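For example, a vercel.json using these options might look like the following (the file paths and glob patterns are illustrative):

```json
{
  "functions": {
    "api/report.js": {
      "includeFiles": "templates/**",
      "excludeFiles": "{test/**,docs/**}"
    }
  }
}
```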

Edge Functions have plan-dependent size limits, ensuring fast execution and eliminating cold starts. This is the total, compressed size of your function and its dependencies after bundling.

This refers to the longest time a function can process an HTTP request before responding.

Functions using the Edge Runtime do not have a maximum duration. They must begin sending a response within 25 seconds and can continue streaming a response beyond that time.

While Serverless Functions have a default duration, this duration can be extended using the maxDuration config. If a Serverless Function doesn't respond within the duration, a 504 error code (FUNCTION_INVOCATION_TIMEOUT) is returned.
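For example, on a Pro plan you might extend a specific function's limit in vercel.json (the file path is illustrative; 300 seconds is the configurable Pro maximum noted above):

```json
{
  "functions": {
    "api/long-task.js": {
      "maxDuration": 300
    }
  }
}
```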

Serverless Functions have the following defaults and maximum limits for the duration of a function:


Serverless Functions can use more memory and larger CPUs, at the expense of cold starts. The maximum memory for a Serverless Function is 1024 MB on the Hobby plan, configurable up to 3008 MB on Pro and Enterprise plans using the functions property in your vercel.json.
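A sketch of raising the memory limit on a Pro or Enterprise plan (the file path is illustrative):

```json
{
  "functions": {
    "api/heavy-task.js": {
      "memory": 3008
    }
  }
}
```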

Edge Functions have a fixed memory limit, ensuring minimal cold starts in a limited execution environment.

You can use environment variables to manage dynamic values and sensitive information affecting the operation of your functions. Vercel allows developers to define these variables either at deployment or during runtime.

You can use a total of 64 KB in environment variables per deployment on Vercel. This limit applies to all variables combined, so no single variable can be larger than 64 KB.
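A common pattern is to read such variables inside the function and fail fast when one is missing. A small sketch (the helper name and the variable are hypothetical):

```typescript
// Sketch: reading a required environment variable in a function.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Failing early gives a clearer error than a downstream crash.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```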

In Vercel, the request body size is the maximum amount of data that can be included in the body of a request to a function. Edge Functions also have additional limits to the request size.

The maximum payload size for the request body or the response body of a Serverless Function is 4.5 MB. If a Serverless Function receives a payload larger than this limit, it returns a 413: FUNCTION_PAYLOAD_TOO_LARGE error. See How do I bypass the 4.5MB body size limit of Vercel Serverless Functions for more information.

| | Serverless Functions (Node.js and more) | Edge Functions (Edge Runtime) |
| --- | --- | --- |
| Geolocation data | Supported | Supported |
| Access request headers | Supported | Supported |
| Cache responses | Supported | Supported |

You can learn more about API support and writing functions:

| | Serverless Functions (Node.js and more) | Edge Functions (Edge Runtime) |
| --- | --- | --- |
| Pricing basis | Pay for wall-clock time | Pay for CPU time |

The Hobby plan offers functions for free, within limits. The Pro plan extends these limits, billing Serverless Functions on wall-clock time and Edge Runtime-powered Functions on CPU time.

This aligns with the intended usage of each: Serverless for intermittent heavy tasks, and Edge for continuous low-latency tasks. The Edge Runtime is particularly well-suited for querying databases or other backends, like an AI service, since you will not be billed for the time making the request, only the time processing the response.

On paid plans, Serverless Functions cost $40 per additional 100 GB-Hrs. Edge Functions cost $2.00 per additional 1,000,000 execution units.

Edge Middleware can use no more than 50 ms of CPU time on average.

This limitation refers to actual net CPU time, which is the time spent performing calculations, not the total elapsed execution or "wall clock" time. For example, when you are blocked talking to the network, the time spent waiting for a response does not count toward CPU time limitations.

| | Serverless Functions (Node.js and more) | Edge Functions (Edge Runtime) |
| --- | --- | --- |
| Streaming | Not supported | Supported, depending on the framework |
| Secure Compute | Supported only in Node.js runtime | Not supported |

Vercel's Secure Compute feature offers enhanced security for your Serverless Functions, including dedicated IP addresses and VPN options. This can be particularly important for functions that handle sensitive data.

Streaming refers to the ability to send or receive data in a continuous flow.

Both the Node.js and the Edge Runtime support streaming.

In addition, Serverless Functions have a maximum duration, meaning that it isn't possible to stream indefinitely. Edge Functions do not have a maximum duration, but you must send an initial response within 25 seconds. You can continue streaming a response beyond that time.
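A sketch of a streaming Edge Function: the Response is built from a ReadableStream, so the first enqueued chunk can satisfy the 25-second initial-response window while the remainder continues streaming afterwards (the file name is hypothetical):

```typescript
// Hypothetical api/stream.ts – streams a response from the Edge Runtime.
export const config = { runtime: "edge" };

export default function handler(): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      // The first chunk begins the response; later chunks may continue
      // past the 25-second initial-response window.
      controller.enqueue(encoder.encode("chunk 1\n"));
      controller.enqueue(encoder.encode("chunk 2\n"));
      controller.close();
    },
  });
  return new Response(stream, { headers: { "content-type": "text/plain" } });
}
```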

Cron jobs are time-based scheduling tools used to automate repetitive tasks. When a cron job is triggered through the cron expression, it calls a Vercel Function.
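For example, a vercel.json scheduling a daily invocation of a function might look like this (the path and schedule are illustrative):

```json
{
  "crons": [
    { "path": "/api/cleanup", "schedule": "0 5 * * *" }
  ]
}
```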

From your function, you can communicate with a choice of data stores. To ensure low-latency responses, it's crucial to have compute close to your databases. Always deploy your databases in regions closest to your functions to avoid long network roundtrips. For more information, see our best practices documentation.

An Edge Config is a global data store that enables experimentation with feature flags, A/B testing, critical redirects, and IP blocking. It enables you to read data at the edge without querying an external database or hitting upstream servers.

Vercel has an OpenTelemetry (OTEL) collector that allows you to send OTEL traces from your Serverless Functions to application performance monitoring (APM) vendors such as New Relic.

Last updated on February 6, 2023