The Usage tab on the Dashboard provides detailed insight into the actual resource usage of your projects.
Use it to see how well your projects are performing, and then take action where improvements are needed.
In the header, you can filter the metrics by a specific period of time, project, and function path.
The sections showing your metrics include: Networking, Functions, Edge Middleware, Builds, Artifacts, and Other. In each section, you can see related metric charts showing detailed data in different dimensions.
Top Paths displays the paths that consume the most resources on your Team. This functionality allows you to optimize your website by providing enhanced insights about assets, invocations, and requests consuming the most data over time.
With Top Paths, you can apply filters to query a particular date range or a specific Project. In compact view, you can see the top 10 paths consuming the most bandwidth on your projects. Clicking the "explore" button expands the section to a full page, allowing your Team to see more paths as well as providing the ability to download a CSV file and share the view with other team members. The retention limit for querying data is 90 days.
Scroll further down to study the Bandwidth trends that reflect the amount of data your Deployments have received or sent.
You can also group usage by your top four projects.
Top Paths is closely related to the Monitoring tab, since both visualize the resources consumed by your Team. Clicking any of the usage paths for Bandwidth, Execution Time, Invocations, or Requests takes you to the Monitoring tab's query editor, since each URL is itself a query.
You can run and save that query from there to get metric insights for a particular top path and reuse it later.
Outgoing and Incoming bandwidth can be used to measure the overall traffic of your projects.
- Outgoing: Outgoing bandwidth measures the amount of data that your Deployments have sent to your users. All the responses from the Edge Network and Serverless Functions are collected as Outgoing bandwidth.
- Incoming: Incoming bandwidth measures the amount of data that your Deployments have received from your users.
Usually for website projects, Incoming bandwidth will be much smaller than Outgoing bandwidth.
The number of Cached and Uncached requests that your Deployments have received.
- Cached: If a request is served from the cache of the Vercel Edge Network, it's counted as a Cached request.
- Uncached: If a request isn't served from the cache and hits the origin instead, or if under certain conditions the request can never be cached, it's counted as an Uncached request.
As a Vercel customer, you will be billed for both Cached and Uncached requests.
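As a sketch of how a response ends up in the Cached bucket, a Serverless Function can opt into Edge Network caching with an `s-maxage` Cache-Control directive; without one, every request hits the origin and counts as Uncached. The handler shape below follows the Node.js `(req, res)` convention, and the specific directive values are illustrative assumptions:

```typescript
// Build a Cache-Control value that lets the Edge Network cache the response.
// The chosen durations (60s fresh, 300s stale-while-revalidate) are examples.
function cacheControl(sMaxAge: number, staleWhileRevalidate: number): string {
  return `s-maxage=${sMaxAge}, stale-while-revalidate=${staleWhileRevalidate}`;
}

// Hypothetical Node-style Serverless Function handler: with this header set,
// repeat requests within the window can be served as Cached requests.
export default function handler(req: any, res: any) {
  res.setHeader("Cache-Control", cacheControl(60, 300));
  res.status(200).json({ servedAt: Date.now() });
}
```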
The number of times your Serverless Functions received a request. This metric does not include Cache Hits.
- Successfully: All invocations where your Function ran and finished successfully
- Errored: All invocations where your Function failed before it finished, or cases where your Function couldn't be invoked due to an unexpected error
- Timeout: All invocations where your Function didn't return before it reached its execution timeout
When using Incremental Static Regeneration with Next.js, both the revalidate option in getStaticProps and the fallback option in getStaticPaths will result in a Function invocation.
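A minimal ISR sketch of those two options, with illustrative values (the `fallback: "blocking"` mode and a 60-second `revalidate` window are assumptions for the example):

```typescript
// Next.js page data functions: both of these can trigger Function invocations.
export async function getStaticPaths() {
  return {
    paths: [{ params: { id: "1" } }],
    fallback: "blocking", // unknown paths invoke a Function on first request
  };
}

export async function getStaticProps() {
  return {
    props: { generatedAt: new Date().toISOString() },
    revalidate: 60, // regeneration after 60 seconds also invokes a Function
  };
}
```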
The amount of time your Functions have spent computing responses to the requests they’ve received. The value is given in GB-Hours, which is the memory allocated for each Function in GB, multiplied by the time in hours they were running. By default, Functions are allocated 1GB of memory, but can be configured to use more.
- Completed: The execution time for Functions that were executed and finished successfully
- Errored: The execution time for Functions that were executed but failed. The value represents the GB-Hours from when they started until they failed
- Timeout: The execution time for Functions that were executed but didn't finish before they reached their execution timeout
- If a function is configured to use 1 GB of memory and executes for 1 second, it is billed at 1 GB-s, requiring 3,600 executions to reach a full GB-Hr
- If a function is configured to use 3 GB of memory and executes for 1 second, it is billed at 3 GB-s, requiring 1,200 executions to reach a full GB-Hr
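The arithmetic above can be sketched as a pair of helper functions (the function names are illustrative, not part of any Vercel API):

```typescript
// Execution time billed in GB-seconds: memory (GB) × duration (s).
function gbSeconds(memoryGb: number, durationSeconds: number): number {
  return memoryGb * durationSeconds;
}

// Convert GB-seconds to GB-Hours (3,600 GB-s per GB-Hr).
function gbHours(gbs: number): number {
  return gbs / 3600;
}

// 1 GB function running 1 s → 1 GB-s; 3,600 such executions = 1 GB-Hr
console.log(gbHours(gbSeconds(1, 1) * 3600)); // 1
// 3 GB function running 1 s → 3 GB-s; 1,200 such executions = 1 GB-Hr
console.log(gbHours(gbSeconds(3, 1) * 1200)); // 1
```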
The majority of Serverless Functions execute for a much shorter duration and are billed at a minimum duration.
The number of times that a request to your Functions could not be served because the concurrency limit was hit.
The number of times your Middleware received a request.
- Successfully: All invocations where your Middleware ran and finished successfully
- Errored: All invocations where your Middleware failed before it finished, or cases where your Middleware couldn't be invoked due to an unexpected error
- Timeout: All invocations where your Middleware didn't return before it reached its execution timeout
The time your Middleware has spent computing responses to requests.
CPU utilization can be viewed in two ways:
- Average - This shows the average time for computation across all projects using Middleware within your Team. You can hover over the line to see an average for each project on any chosen day. The Fair Use Policy denotes an average CPU time limit of 50ms/invocation within a one hour period across all of your Team's projects.
- Project - This shows the total time each project using Middleware within your Team has spent computing responses to requests.
The amount of time that your Deployments have spent being queued or building.
- Build Time: The amount of time it took your Deployments to get from the building state to a final state
- Queued Time: The amount of time it took your Deployments to get from creation to building
How many times a build was issued for one of your Deployments.
- Completed: All builds that successfully completed or were cancelled
- Errored: All builds that failed or timed out
During deployment on Vercel, your build has access to an environment variable, VERCEL_ARTIFACTS_TOKEN, that can be used as the Bearer token for requests to the Remote Cache API. Otherwise, you may use a Vercel Access Token for authorization.
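As a hedged sketch, a Remote Cache API request built with that token might look like the following. The endpoint path (`/v8/artifacts/<hash>`) and the artifact hash are assumptions for illustration; consult the Remote Cache API reference for the exact routes:

```typescript
// Build the URL and auth header for a hypothetical artifact request.
function artifactRequest(hash: string, token: string) {
  return {
    url: `https://api.vercel.com/v8/artifacts/${hash}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// During a build the token is provided as an environment variable;
// a Vercel Access Token would be used the same way outside of builds.
const token = process.env.VERCEL_ARTIFACTS_TOKEN ?? "<access-token>";
const req = artifactRequest("example-artifact-hash", token);
// fetch(req.url, { headers: req.headers }) would then download the artifact.
```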
Uploaded artifacts on Vercel automatically expire after 7 days.
Artifacts are annotated with a task duration, which is the time required to generate the artifact. The Time Saved is the sum of that task duration for each artifact multiplied by the number of times that artifact was reused from a cache.
- Remote Cache: The time saved by using artifacts cached on the Vercel Remote Cache API.
- Local Cache: The time saved by using artifacts cached on your local filesystem cache.
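The Time Saved calculation described above can be sketched as follows; the field names are illustrative, not an actual API shape:

```typescript
// Each artifact carries its task duration (time to generate it) and
// the number of times it was reused from a cache instead of rebuilt.
interface Artifact {
  taskDurationMs: number;
  cacheHits: number;
}

// Time Saved: sum over artifacts of task duration × cache reuse count.
function timeSavedMs(artifacts: Artifact[]): number {
  return artifacts.reduce(
    (total, a) => total + a.taskDurationMs * a.cacheHits,
    0
  );
}

// A 5s task reused 3 times plus a 12s task reused once saves 27s.
console.log(timeSavedMs([
  { taskDurationMs: 5000, cacheHits: 3 },
  { taskDurationMs: 12000, cacheHits: 1 },
])); // 27000
```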
- Uploaded: The number of uploaded artifacts using the Remote Cache API.
- Downloaded: The number of downloaded artifacts using the Remote Cache API.
- Uploaded: The size of uploaded artifacts using the Remote Cache API.
- Downloaded: The size of downloaded artifacts using the Remote Cache API.
Multiple uploads or downloads of the same artifact are counted as distinct events when calculating these sizes.
The number of individual data points reported from your visitors' browsers for the Analytics feature.
The number of Source Images. A Source Image is the original, unaltered image determined by the src prop. If the same Source Image is used multiple times with different transformations, it is only counted once for the current billing period.
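To illustrate why transformations don't multiply the count, Source Images can be thought of as the set of distinct src values, regardless of how many sizes or formats are generated from each (the request shape below is an assumption for the example):

```typescript
// Count distinct source images across optimization requests: the same
// src at different widths is still a single Source Image.
function countSourceImages(requests: { src: string; width: number }[]): number {
  return new Set(requests.map((r) => r.src)).size;
}

console.log(countSourceImages([
  { src: "/hero.png", width: 640 },
  { src: "/hero.png", width: 1280 }, // same source, different transformation
  { src: "/logo.png", width: 256 },
])); // 2
```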