
Usage in the Dashboard

Learn how to read the Usage tab on your dashboard to gain insight into how well your projects are performing, and take necessary actions for improvements.

The Usage tab on the dashboard provides detailed insight into the actual resource usage of your projects. From here you can gain insights into how well your projects are performing and then take the necessary actions for improvements.

Usage tab on the dashboard.

The available resources are displayed on the left of the screen. The data for each section can be filtered by:

  • Period: The period range for the data. The default is your current billing cycle. The following options are available:
    • Current billing cycle
    • Previous billing cycle
    • Last 7 days
    • Last 14 days
    • Last 30 days
    • Last 3 months
    • Last 12 months
  • Date range: The date range for the data. The default is the last month
  • Project: The project to view data for. The default is all projects

The Networking section shows the following charts:

Top Paths displays the paths that consume the most resources on your team. This functionality allows you to optimize your website by providing enhanced insights about assets, invocations, and requests consuming the most data over time.

With Top Paths, you can apply filters to query a particular date range or a specific project. In compact view, you can see the top ten paths consuming the most bandwidth on your projects. Selecting the Explore button expands the section to a full page, allowing your team to see more paths as well as providing the ability to download a CSV file and share the view with other team members. The retention limit for querying data is 90 days.

Top Paths lets you view the top resources by:

  1. request_path: The exact path that was requested
  2. source_path: The mapping of the path that was requested

The following example shows the difference between the two path types. Assuming you have an application with the following dynamic route /blog/[slug], when someone makes a request to /blog/100, the path types would be:

  1. request_path: /blog/100
  2. source_path: /blog/[slug]
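The relationship between the two path types can be sketched in code. The `matchSourcePath` helper below is purely illustrative, not a Vercel API; it maps a concrete request path back to the dynamic source path that served it:

```typescript
// Illustrative sketch: map a concrete request path back to the dynamic
// source path it was served from, e.g. /blog/100 -> /blog/[slug].
// (Literal segments are not regex-escaped here; this is a sketch only.)
function matchSourcePath(requestPath: string, sourcePaths: string[]): string | undefined {
  return sourcePaths.find((source) => {
    const pattern = source
      .split("/")
      .map((seg) => (seg.startsWith("[") && seg.endsWith("]") ? "[^/]+" : seg))
      .join("/");
    return new RegExp(`^${pattern}$`).test(requestPath);
  });
}

// A request_path of /blog/100 resolves to the source_path /blog/[slug]
matchSourcePath("/blog/100", ["/blog/[slug]", "/about"]); // -> "/blog/[slug]"
```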

Top Paths and the Monitoring tab both help you visualize the consumed resources of your team's projects. You can select any of the usage paths in Bandwidth, Execution, Invocations, and Requests to navigate to the Monitoring tab query editor and gain insights on that particular path.

Bandwidth is the amount of data your deployments have sent or received. This chart includes traffic for both preview and production deployments. You can group usage by your top four projects.

You will not be billed for bandwidth usage on blocked or paused deployments.

The total traffic of your projects is the sum of the outgoing and incoming bandwidth.

  • Outgoing: Outgoing bandwidth measures the amount of data that your deployments have sent to your users. Data used by ISR and the responses from the Edge Network and Serverless Functions are collected as outgoing bandwidth
  • Incoming: Incoming bandwidth measures the amount of data that your deployments have received from your users

An example of incoming bandwidth would be page views that are requested by the browser. All the requests sent to the Edge Network and Serverless Functions are collected as incoming bandwidth.

Incoming bandwidth is usually much smaller than outgoing bandwidth for website projects.

The number of cached and uncached requests that your deployments have received.

Similar to Bandwidth, the Requests chart includes requests for both preview and production deployments.

  • Cached: If a request is served by the Vercel Edge Network cache, it's considered to be a cached request
  • Uncached: If a request isn't served by the cache and instead hits the origin, or if the request can never be cached due to specific conditions, it is counted as an uncached request

As a Vercel customer, you will be billed for both cached and uncached requests.

The Data cache section shows the following charts:

The data cache overview chart shows the usage from fetch requests, divided into:

  • Hits: Percentage of fetch requests to origins that result in a hit
  • Misses: Percentage of fetch requests to origins that result in a miss
  • Requests: Number of requests to any unique path
  • Bandwidth: Amount of data transferred from any unique path

The data cache bandwidth chart shows the amount of data that Vercel Data Cache has received or sent for your projects. The data can be filtered by Ratio, or by Projects.

The data cache revalidations chart shows the number of revalidation requests that Vercel Data Cache has received for your projects. The data can be filtered by Ratio, or by Projects.

The Serverless Functions section shows the following charts:

The number of times your Serverless Functions received a request. This does not include Cache Hits.

  • Successfully: All invocations that finished successfully by running your Serverless Function
  • Errored: All invocations where your Serverless Function failed (returned an HTTP 5xx status code) before it finished, or cases where your Function couldn't be invoked due to an unexpected error
  • Timeout: If your Serverless Function was invoked but didn't return before it reached its execution timeout, it'll be counted as a timeout

When using Incremental Static Regeneration with Next.js, both the revalidate option for getStaticProps and fallback for getStaticPaths will result in a Function invocation.
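As a concrete illustration, a minimal Next.js dynamic page using both options might look like the sketch below (the 60-second revalidate window and the page content are arbitrary example values):

```typescript
// pages/blog/[slug].tsx (sketch) — both exports can trigger
// Serverless Function invocations on Vercel.
export async function getStaticPaths() {
  return {
    paths: [{ params: { slug: "100" } }],
    // Paths not generated at build time are rendered on demand,
    // invoking a Function.
    fallback: "blocking",
  };
}

export async function getStaticProps({ params }: { params: { slug: string } }) {
  return {
    props: { slug: params.slug },
    // Re-generating the page after 60 seconds also invokes a Function.
    revalidate: 60,
  };
}
```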

The amount of time your Serverless Functions have spent computing responses to the requests they’ve received. The value is given in GB-Hours, which is the memory allocated for each Function in GB, multiplied by the time in hours they were running. By default, Functions are allocated 1GB of memory, but can be configured to use more.

  • Completed: The execution time for Functions that were executed and finished successfully
  • Errored: The execution time for Functions that were executed but failed (returned an HTTP 5xx status code). The value represents the GB-Hours from when they started until they failed
  • Timeout: The execution time for Functions that were executed but didn't finish before they reached their maximum duration

For example:

  • If a function is configured to use 1GB of memory and executes for 1 second, this would be billed at 1 GB-s, requiring 3,600 executions in order to reach a full GB-Hr
  • If a function is configured to use 3GB of memory and executes for 1 second, this would be billed at 3 GB-s, requiring 1,200 executions to reach a full GB-Hr
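The arithmetic in the examples above can be written out directly:

```typescript
// GB-seconds for one invocation: memory (GB) × duration (s)
function gbSeconds(memoryGb: number, durationSeconds: number): number {
  return memoryGb * durationSeconds;
}

// Invocations needed to accumulate one full GB-Hour (3,600 GB-s)
function executionsPerGbHour(memoryGb: number, durationSeconds: number): number {
  return 3600 / gbSeconds(memoryGb, durationSeconds);
}

executionsPerGbHour(1, 1); // -> 3600
executionsPerGbHour(3, 1); // -> 1200
```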

The majority of Serverless Functions execute for a much shorter duration, with a minimum billed duration of 1 ms.

The number of times that a request to your Functions could not be served because the concurrency limit was hit.

The Edge Functions section shows the following charts:

The number of times your Edge Functions received a request.

  • Successfully: All invocations that finished successfully by running an Edge Function
  • Errored: All invocations where an Edge Function failed (returned an HTTP 5xx status code) before it finished, or cases where your Function couldn't be invoked due to an unexpected error
  • Timeout: If an Edge Function was invoked but didn't return before it reached its maximum initial response time, it'll be counted as a timeout

The number of execution units that your Edge Functions have used.

Execution units are shown for all Edge Functions in all projects within your team. An execution unit is 50 ms of CPU time.

Each invocation of an Edge Function will have a Total CPU time, which is the time spent actually running your code. This is unlike Serverless Functions, which measure usage based on the entire time spent running your function (the "wall clock" time). This means that the Edge Functions usage does not measure time spent waiting for data fetches to return.

When it comes to billing, we'll work out the number of execution units (total CPU time of the invocation / 50ms) used for each invocation. You will then be charged based on anything over the limit.

For example:

If your function is invoked 250,000 times and uses 350 ms of CPU time at each invocation, then the function will incur (350 ms / 50 ms) = 7 execution units each time the function is invoked. Your usage is: 250,000 * 7 = 1,750,000 execution units
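That calculation can be sketched as follows. Rounding partial slices up to a whole unit is an assumption here (the documented example of 350 ms divides evenly into 50 ms slices):

```typescript
const EXECUTION_UNIT_MS = 50; // one execution unit = 50 ms of CPU time

// Execution units consumed by a single invocation
// (assumes partial slices round up to a whole unit)
function executionUnits(cpuTimeMs: number): number {
  return Math.ceil(cpuTimeMs / EXECUTION_UNIT_MS);
}

// 250,000 invocations at 350 ms of CPU time each
const totalUnits = 250_000 * executionUnits(350); // -> 1,750,000
```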

The time your Edge Functions have spent computing responses to requests. There is no time limit on the amount of CPU time your Edge Function can use during a single invocation. However, you are charged for each execution unit, which is based on the compute time. The compute time refers to the actual net CPU time used, not the execution time. Operations such as network access do not count towards the CPU time.

CPU time can be viewed in two ways:

  • Average - This shows the average time for computation across all projects using Edge Functions within your Team. You can hover over the line to see an average for each project on any chosen day.
  • Project - This shows the total time each project using Edge Functions within your Team has spent computing responses to requests.

The Edge Middleware section shows the following charts:

The number of times your Middleware received a request.

  • Successfully: All invocations that finished successfully by running your Middleware
  • Errored: All invocations where your Middleware failed (returned an HTTP 5xx status code) before it finished, or cases where your Middleware couldn't be invoked due to an unexpected error
  • Timeout: If your Middleware was invoked but didn't return before it reached its maximum initial response time, it'll be counted as a timeout

The time your Middleware has spent computing responses to requests.

CPU time can be viewed in two ways:

  • Average - This shows the average time for computation across all projects using Middleware within your team. You can hover over the line to see an average for each project on any chosen day. The fair use guidelines denote an average CPU time limit of 50ms/invocation within a one hour period across all of your team's projects
  • Project - This shows the total time each project using Middleware within your team has spent computing responses to requests
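The fair-use average mentioned above can be sketched as a simple check (the sample CPU times are invented for illustration):

```typescript
// Average CPU time per Middleware invocation over a sample window
function averageCpuMs(cpuTimesMs: number[]): number {
  return cpuTimesMs.reduce((sum, t) => sum + t, 0) / cpuTimesMs.length;
}

// Illustrative sample: individual invocations may exceed 50 ms,
// as long as the average stays within the fair-use limit.
const withinFairUse = averageCpuMs([10, 30, 80, 40]) <= 50; // average is 40 ms
```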

The Edge Config section shows the following charts:

The Reads chart shows the number of times your Edge Config has been read. The data can be filtered by Count or Projects.

The Writes chart shows the number of times your Edge Configs were updated. The data can be filtered by Count or Edge Configs.

The Monitoring section shows the following charts:

The Data points chart shows the number of individual points of data that were reported each time a request is made to your website. The data can be filtered by Count or Projects.

The Builds section shows the following charts:

The amount of time that your Deployments have spent being queued or building.

  • Build Time: The amount of time it took your Deployments to get from the building state to a final state
  • Queued Time: The amount of time it took your Deployments to get from creation to building

How many times a build was issued for one of your Deployments.

  • Completed: All builds that successfully completed or were cancelled
  • Errored: All builds that failed or timed out

Artifacts are blobs of data or files that are uploaded and downloaded using the Vercel Remote Cache API. Uploaded artifacts can be downloaded during your build and by your team members.

During deployment on Vercel, your build has access to an environment variable VERCEL_ARTIFACTS_TOKEN that can be used as the Bearer token for requests to the Remote Cache API. Otherwise, you may use a Vercel Access Token for authorization.
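In code, the authorization header might be built as in the sketch below; the commented-out endpoint URL and artifact hash are hypothetical placeholders, and only the Bearer-header construction is shown:

```typescript
// Build the Authorization header for Remote Cache API requests.
// VERCEL_ARTIFACTS_TOKEN is provided automatically during builds on
// Vercel; a Vercel Access Token is used the same way.
function remoteCacheHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

const token = process.env.VERCEL_ARTIFACTS_TOKEN ?? "my-access-token";
const headers = remoteCacheHeaders(token);
// fetch("https://example-remote-cache-endpoint/artifacts/<hash>", { headers }) // hypothetical URL
```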

Uploaded artifacts on Vercel automatically expire after 7 days.

The Artifacts section shows the following charts:

Artifacts are annotated with a task duration, which is the time required to generate the artifact. The time saved is the sum of that task duration for each artifact multiplied by the number of times that artifact was reused from a cache.

  • Remote Cache: The time saved by using artifacts cached on the Vercel Remote Cache API
  • Local Cache: The time saved by using artifacts cached on your local filesystem cache
  • Uploaded: The number of artifacts uploaded using the Remote Cache API
  • Downloaded: The number of artifacts downloaded using the Remote Cache API
  • Uploaded size: The total size of artifacts uploaded using the Remote Cache API
  • Downloaded size: The total size of artifacts downloaded using the Remote Cache API

Multiple uploads or downloads of the same artifact are counted as distinct events when calculating these sizes.
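The time-saved calculation described above can be written out as a sketch (the interface and field names are illustrative, not a Vercel API):

```typescript
// Illustrative shape for a cached artifact's metadata
interface Artifact {
  taskDurationMs: number; // time required to generate the artifact
  reuseCount: number;     // times it was restored from a cache
}

// Total time saved = sum of taskDuration × reuseCount over all artifacts
function timeSavedMs(artifacts: Artifact[]): number {
  return artifacts.reduce((sum, a) => sum + a.taskDurationMs * a.reuseCount, 0);
}

timeSavedMs([
  { taskDurationMs: 30_000, reuseCount: 4 },  // 120,000 ms saved
  { taskDurationMs: 5_000, reuseCount: 10 },  //  50,000 ms saved
]); // -> 170000
```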

The Postgres section shows the following charts:

The Compute time chart shows the amount of CPU hours your Postgres stores have spent computing responses to requests. The data is filtered by Count.

The Data storage chart shows the amount of data stored in all your Postgres stores. The data is filtered by Average.

The Data transfer chart shows the amount of data transferred between your Postgres databases and your compute endpoints. The data is filtered by Average.

The Written data chart shows the amount of data written to all your Postgres databases. The data is filtered by Count.

The Databases chart shows the number of Postgres databases you have created. The data is filtered by Count.

The KV section shows the following charts:

The Requests chart shows the number of requests made to your KV stores. The data is filtered by Count.

The Data transfer chart shows the amount of data transferred between your KV stores and your compute endpoints. The data is filtered by Count.

The Storage chart shows the amount of data stored in all your KV stores. The data is filtered by Average.

The Databases chart shows the number of KV databases (including read replicas) you have created. The data is filtered by Count.

The Web Analytics section shows the following charts:

The Events chart shows the number of page views and custom events that were tracked across all of your projects. The data is filtered by Count or Projects.

The Speed Insights section shows the following charts:

The number of individual points of data that were reported from your visitors' browsers for the Speed Insights feature. The data is filtered by Count.

The Image Optimization section shows the following charts:

The number of source images. A source image is the original, unaltered image determined by the src prop. If the same source image is used multiple times with different transformations, it is only counted once for the current billing period.

See the relevant docs for more information on billing and limits. For tips on ensuring efficient use of Image Optimization, see Managing Image Optimization costs.

Last updated on February 13, 2023