
Manage and optimize usage for Serverless Functions

Learn how to understand the different charts in the Vercel dashboard, how usage relates to billing, and how to optimize your usage of resources.

This section details our improved infrastructure pricing. On April 25, 2024, these changes will apply to all new Pro customers. Starting May 25, 2024, current Pro customers will see these changes take effect on their next billing cycle. The Hobby tier remains free.

The Serverless Functions section shows the following charts:

Manage and Optimize pricing

| Metric | Description | Priced | Optimize |
| --- | --- | --- | --- |
| Function Invocations | The number of times your Functions have been invoked | Yes | Learn More |
| Function Duration | The time your Serverless Functions have spent responding to requests | Yes | Learn More |
| Throttles | Instances where requests to Functions are not served due to concurrency limits | No | N/A |
Managed Infrastructure Hobby and Pro resources

| Resource | Hobby Included | Pro Included | Pro Additional | Pro Price |
| --- | --- | --- | --- | --- |
| Function Duration | First 100 GB-Hours | First 1,000 GB-Hours | 1 GB-Hour | $0.18 |
| Function Invocations | First 100,000 | First 1,000,000 | 1,000,000 Invocations | $0.60 |

You are charged based on the number of times your functions get invoked. This includes both successful and errored invocations, but does not include cache hits. The number of invocations is calculated based on the number of times your function gets called, regardless of the response status code.

When using Incremental Static Regeneration with Next.js, both the revalidate option for getStaticProps and fallback for getStaticPaths will result in a Function invocation on revalidation, not for every user request.
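As a sketch of that behavior in a Next.js Pages Router page: the `revalidate` value below is illustrative, and `getPosts` is a hypothetical helper standing in for a CMS or database query (not part of Next.js).

```javascript
// Sketch of ISR in a Next.js Pages Router page. In a real page this function
// would be exported from the page module.

// Hypothetical data-fetching helper; stub data for illustration.
async function getPosts() {
  return [{ id: 1, title: 'Hello' }];
}

async function getStaticProps() {
  const posts = await getPosts();
  return {
    props: { posts },
    // Regenerate the page at most once every 60 seconds; the Function
    // invocation happens on revalidation, not on every user request.
    revalidate: 60,
  };
}
```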

When viewing your Functions Invocations graph, you can group by Ratio to see a total of all invocations across your team's projects that finished successfully, errored, or timed out.

  • Use the Projects option to see the total number of invocations for each project within your team. This can help you identify which projects are using the most invocations and where you can optimize
  • Cache your responses using edge caching and Cache-Control headers. This can help reduce the number of invocations that your Functions receive and makes responses faster for users
  • See How can I reduce my Serverless Execution usage on Vercel? for more general information on how to reduce your Serverless Functions usage.
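As a minimal sketch, a Node-style Function can opt into edge caching by setting a `Cache-Control` header on the response; the specific values here (`s-maxage=60, stale-while-revalidate=300`) are illustrative, not recommendations.

```javascript
// Sketch of a Node-style Serverless Function that sets caching headers.
// With s-maxage, cached responses can be served at the edge without
// invoking the Function again; stale-while-revalidate serves stale content
// while a fresh response is generated in the background.
function handler(req, res) {
  res.setHeader('Cache-Control', 's-maxage=60, stale-while-revalidate=300');
  res.statusCode = 200;
  res.end(JSON.stringify({ now: Date.now() }));
}

module.exports = handler;
```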

You are charged based on the amount of time your Serverless Functions have spent computing responses to the requests they've received. This is calculated in GB-Hours: the memory allocated to each Function in GB multiplied by the time in hours it spent running.

For example:

  • If a function has 1GB of memory and executes for 1 second, this would be billed at 1 GB-s, requiring 3,600 executions in order to reach a full GB-Hr
  • If a function has 3GB of memory and executes for 1 second, this would be billed at 3 GB-s, requiring 1,200 executions to reach a full GB-Hr
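The arithmetic above can be captured in a small hypothetical helper (not a Vercel API) that applies the GB-Hours formula: memory in GB × execution time in hours, summed over invocations.

```javascript
// Hypothetical helper illustrating the GB-Hours formula: memory (GB)
// multiplied by execution time, summed across invocations. 3,600 GB-seconds
// equal one GB-Hour.
function gbHours(memoryGB, secondsPerInvocation, invocations) {
  return (memoryGB * secondsPerInvocation * invocations) / 3600;
}

// 1 GB function running 1 second per request: 3,600 invocations = 1 GB-Hr
console.log(gbHours(1, 1, 3600)); // 1
// 3 GB function running 1 second per request: 1,200 invocations = 1 GB-Hr
console.log(gbHours(3, 1, 1200)); // 1
```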

You can use the Ratio option to see the total amount of execution time across all projects within your team, broken down into completions, errors, and timeouts.

  • Use the Projects option to see the total amount of execution time for each project within your team. This can help you identify which projects are using the most execution time and where you can optimize
  • By default, Functions get 1 vCPU, but they can be configured to use more memory and CPU. This may affect how long your functions run
  • You can also adjust the maximum duration for your functions to prevent them from running for too long
  • To reduce the GB-Hours (Execution) of your functions, ensure you are using edge caching and Cache-Control headers. If you are using Incremental Static Regeneration, note that Vercel counts Function invocations that happen on page revalidation towards both GB-Hours and bandwidth
  • For troubleshooting issues that may be causing your functions to run longer than expected or timeout, see What can I do about Vercel Serverless Functions timing out?
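As a sketch, both settings can be adjusted per route pattern via the `functions` property in `vercel.json`; the glob, memory size, and duration values below are illustrative, assuming limits allowed on your plan.

```json
{
  "functions": {
    "api/**/*.js": {
      "memory": 1024,
      "maxDuration": 10
    }
  }
}
```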

This counts the number of times that a request to your Functions could not be served because the concurrency limit was hit.

While this is not a chargeable metric, it will cause a 429: FUNCTION_RATE_LIMIT error. To learn more, see What should I do if I receive a 429 error on Vercel?.
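One common way for callers to cope with throttling is to retry 429 responses with exponential backoff. A minimal sketch, where `doRequest` is a hypothetical stand-in for any fetch-like call returning an object with a `status` field:

```javascript
// Sketch: retry a request with exponential backoff while it is throttled.
// `doRequest` is a hypothetical fetch-like function, not a Vercel API.
async function withRetry(doRequest, attempts = 3, baseDelayMs = 100) {
  for (let i = 0; i < attempts; i++) {
    const res = await doRequest();
    if (res.status !== 429) return res;
    // Back off before retrying a throttled request: 100ms, 200ms, 400ms, ...
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
  }
  throw new Error('Request still throttled after retries');
}
```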

Last updated on April 29, 2024