

Learn what Checks are and how you can use them in your Vercel workflow.

For a better build experience, Vercel automatically monitors various aspects of your web application. For example:

  • In case of an error during a deployment, Vercel collects, stores, and displays this info as Logs, which helps you resolve the issue.
  • If your pages suffer from user experience issues such as slow loads or layout shifts, Vercel notifies you through its Analytics.

Such scenarios are handled by Vercel's Checks API. Checks can automatically correct minor issues (such as Prettier formatting and lint errors) and prevent a deployment to Production if the error is more serious.

An example of Checks provided by an Integration.

Checks are tests and assertions that are created and run after each deployment has been built. They are powered by Integrations, which allow you to connect any third-party service of your choice with Vercel.

You can build your own Integration to register any arbitrary Check for your Deployments. This Integration can serve your own use cases, or you can publish it to the Integration Marketplace so that other Teams on Vercel can start using it.

If the action performed by your Integration is specific to your use case, there is no need to publish it; publish your Integration only if you'd like to offer your service to others as well.

Each Check works as a webhook and gets triggered on specific events. Make sure your integration is listening to the deployment.created, deployment.ready, and deployment.succeeded events. Learn more about these from the Supported Webhooks Events docs.
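The dispatch on these events can be sketched as follows. This is a minimal, hypothetical handler shape; the actual webhook payload schema is defined in the Supported Webhooks Events docs and the `deploymentId` field name here is an assumption.

```typescript
// Hypothetical sketch of a Checks Integration reacting to the three
// deployment lifecycle events named above. The payload shape is
// simplified; consult the Supported Webhooks Events docs for the
// real schema.
type DeploymentEvent = {
  type: "deployment.created" | "deployment.ready" | "deployment.succeeded";
  payload: { deploymentId: string };
};

function handleWebhook(event: DeploymentEvent): string {
  switch (event.type) {
    case "deployment.created":
      // Register your Check as early as possible, ideally before the
      // Deployment reaches the "Running Checks" step.
      return `register check for ${event.payload.deploymentId}`;
    case "deployment.ready":
      // The Deployment is reachable: start running your Check against it.
      return `run check against ${event.payload.deploymentId}`;
    case "deployment.succeeded":
      // The Deployment finished; clean up or record final results.
      return `finalize check for ${event.payload.deploymentId}`;
  }
}
```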

If a check is set as re-requestable, your integration users will see the option to re-request and re-run Checks that have failed.

Some Vercel Integrations offer Checks out of the box. You can also build your own Checks Integration to register any arbitrary Check for your Deployments.

Such a custom Check works exclusively for your own use cases. If you want other Teams to start using it, you need to get your Integration listed on the Integration Marketplace.

For example, if you'd like to perform reliability or performance tests (including custom end-to-end tests) with Checks, Vercel recommends the Checkly Integration.

For every new Deployment, Vercel uses its source files to understand which parts of your app have changed and avoid compiling pages that have already been compiled in previous Deployments.

When building Vercel Integrations, you can decide how Checks act on different parts of your application with every Deployment.

This happens because Integrations provide their paths as a part of the Check's results.

An example of two Checks registered for two different pages.

Depending on which Integration you've added and how you've configured it, you might see different numbers of Checks appear for your Deployments. If one of your Deployment's pages isn't listed in the Checks, please refer to the settings of the third-party service you've integrated with.

Once a Check has begun running, it might choose to reveal a link on which you can view more details about its progress and/or its completion results:

The button for viewing more details of a particular Check.

As you can see, the link will be accessible from the right side of every Check if it is present, and will point you to an external page outside the Vercel dashboard, provided by the third-party service that you've integrated with.

If you've added an Integration to your Personal Account or a Team that supports Checks, the Integration will be notified every time a new Deployment is created.

It's up to the Integration to decide when to reveal a Check. Ideally, an Integration registers a new Check before your Deployment reaches the "Running Checks" step.

Once your Deployment has reached the "Running Checks" stage, all the Integrations that registered Checks for the Deployment will be notified simultaneously, and will perform their particular actions on it.
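Registering a Check typically means sending a request to the Checks API. The sketch below builds such a request; note that the endpoint path and the field names (`name`, `blocking`, `rerequestable`, `detailsUrl`) are assumptions about the API's shape and should be confirmed against the current Vercel REST API reference.

```typescript
// Sketch of building a Check registration request. Field names and the
// endpoint path are assumptions -- verify them against the Vercel REST
// API reference before relying on them.
type CheckRegistration = {
  name: string;
  blocking: boolean;      // a non-blocking Check surfaces as a warning
  rerequestable: boolean; // lets users rerun the Check after a failure
  detailsUrl?: string;    // optional external page with progress/results
};

function buildCheckRequest(deploymentId: string, check: CheckRegistration) {
  return {
    method: "POST",
    url: `https://api.vercel.com/v1/deployments/${deploymentId}/checks`,
    body: JSON.stringify(check),
  };
}
```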

Since different services might take different amounts of time to perform their actions, the results will likely arrive at non-identical times. Once all the Checks are performed, all the third-party services behind the Integrations will report results back to the platform.

When all the Checks have reported a result, the "Running Checks" step will be considered complete, and the Deployment will proceed to the finishing stage.

When the "Running Checks" step starts, only the Automatic Deployment URL will be available, which is therefore also the URL that Integrations use to perform their actions on your Deployment.

Other URLs like the Automatic Branch URL and Custom Domains defined for your Project will be applied after the step has finished.

Once a Check has finished running, it summarizes how it was completed and displays different results within the "Running Checks" step.

The most important piece of information is the Status that's displayed on the left side of every Check.

It can have one of the following values:

  • Pending: Hasn't started running yet.
  • Running: Currently running.
  • Succeeded: Completed running successfully.
  • Warning: Completed running with a non-blocking failure.
  • Failed: Completed running with a failure.
  • Canceled: Prevented from continuing to run on the third-party service (usually manually).
  • Skipped: Deemed to no longer be relevant by the third-party service and stopped.
  • Stale: Registered, but didn't update within the Maximum Duration.

All statuses except Pending and Running are final. This means that, once set, a status can never change again on the specific Deployment it was reported for.
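The finality rule can be expressed directly in code. The doc explicitly names only Pending, Running, Failed, and Stale; the remaining status names in this sketch are assumptions.

```typescript
// Sketch of the finality rule: every status except Pending and Running
// is terminal. Status names beyond the four this doc names explicitly
// (Pending, Running, Failed, Stale) are assumed.
type CheckStatus =
  | "Pending" | "Running" | "Succeeded" | "Warning"
  | "Failed" | "Canceled" | "Skipped" | "Stale";

function isFinal(status: CheckStatus): boolean {
  return status !== "Pending" && status !== "Running";
}
```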

For a Failed or Stale status, the "Running Checks" step is marked as failed, and the Deployment will not finish until you skip this check:

An example of the "Running Checks" step having failed.

If you never want your Check to fail the Deployment regardless of its result, you can mark it as "non-blocking" at the time of registering it. Instead of a failure, it will be marked as a warning.

This setting is normally set as an option within your third-party integration. Usually, if you do not choose to block a deployment based on a failed check, it will appear in Vercel as a warning. For information on how to do this for Checkly, see their documentation.

You can filter on a specific status to make it easier to see only failing checks, for example. To filter by status, click on that status in the toggle:

An example of the Checks filter option

The following options can be toggled:

  • Success: This represents any checks that have passed
  • Warning: This represents any checks that have failed, but have been deemed "non-blocking" in your integration. See Working with failed tests for more information.
  • Failure: This represents any checks that have failed, but have been deemed "blocking" in your integration. See Working with failed tests for more information.

A Check is also capable of displaying the performance of your Deployment by showing a Virtual Experience Score (VES) as part of its results. Such a Check can rate your app's user experience on a scale from 1 (worst) to 100 (best).

Integrations will determine your Virtual Experience Score by evaluating your application's performance in a simulated environment. This means that, unlike with Analytics, there are no real users involved yet.

Instead, you can safely assert the performance of your changes, before you make them available to visitors, and then confirm the Real Experience Score with Analytics.

To understand why your Virtual Experience Score is good or bad, Web Vitals generate four other sub-scores, which break down your Deployment's performance into more granular detail.

Each of these metrics represents a particular area of performance in which your application should excel to ultimately provide the best user experience possible before the Deployment is made available to real visitors:

An example of Web Vitals Scores being displayed as the result of a Check.

The Web Vitals displayed are the same that Analytics provides, with the only exception that First Input Delay (FID) is replaced with Total Blocking Time (TBT), as recommended by Google, because the former does not apply in a simulated environment.

Your Integration will decide whether to assert your Web Vitals on desktop or mobile.

The Virtual Experience Score is calculated as a weighted average of the Web Vitals, which means that its value is derived from the values of the other four scores.
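A weighted average of four sub-scores can be sketched as below. The weights here are purely illustrative; each Integration chooses its own weighting, and the exact sub-score names are assumptions based on the Web Vitals mentioned in this doc.

```typescript
// Illustrative Virtual Experience Score as a weighted average of four
// Web Vital sub-scores (each 1-100). The weights are hypothetical.
type WebVitalScores = { fcp: number; lcp: number; cls: number; tbt: number };

function virtualExperienceScore(
  scores: WebVitalScores,
  weights: WebVitalScores = { fcp: 0.2, lcp: 0.35, cls: 0.15, tbt: 0.3 },
): number {
  const total =
    scores.fcp * weights.fcp +
    scores.lcp * weights.lcp +
    scores.cls * weights.cls +
    scores.tbt * weights.tbt;
  const weightSum = weights.fcp + weights.lcp + weights.cls + weights.tbt;
  return Math.round(total / weightSum);
}
```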


If the Web Vital Scores happen to change in such a way that the Virtual Experience Score changes in a meaningful manner, the improvement or regression will be reflected below it, including its previous value:

An example of a Virtual Experience Score neither having improved, nor regressed.

The scores of the individual Web Vitals, on the other hand, are derived from their underlying metric value, which is displayed below them:

An example of a Web Vital Score having improved.

Next to the metric value, its evolution compared to your last change is displayed.

One of the following symbols will be used to indicate the type of change (the symbols are colored to match their respective Web Vital Scores):

  • The Web Vital's metric value has gotten worse (increased).
  • The Web Vital's metric value has improved (decreased).
  • The Web Vital's metric value has remained the same.

Depending on which Integration is used, the values used for comparison might differ. In the case of the Checkly Integration, both Preview Deployments and Production Deployments are compared to the last Production Deployment.

If a Check was registered (set to Pending) but doesn't update with a final status within 1 hour after having been registered, it will automatically be considered stale and receive the respective status as a result.

The same happens if a Check has started (set to Running) but doesn't update with a final status within 5 minutes.

These automatic thresholds ensure that Checks can't prevent a Deployment from finishing for too long. Instead, they are always expected to update within a reasonable amount of time, otherwise they are ignored through the Stale status so that the Deployment can proceed with finishing.
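The two thresholds above can be captured in a small helper. This is a sketch of the rule as described; the millisecond-based interface is an invention for illustration.

```typescript
// Sketch of the stale thresholds: a Check left Pending for over 1 hour,
// or Running for over 5 minutes, without reaching a final status is
// automatically marked Stale.
const PENDING_LIMIT_MS = 60 * 60 * 1000; // 1 hour
const RUNNING_LIMIT_MS = 5 * 60 * 1000;  // 5 minutes

function isStale(
  status: "Pending" | "Running",
  msSinceLastUpdate: number,
): boolean {
  const limit = status === "Pending" ? PENDING_LIMIT_MS : RUNNING_LIMIT_MS;
  return msSinceLastUpdate > limit;
}
```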

Regardless of whether the Checks on your Deployment have already finished running, you may always choose to use the "Skip" button for ignoring all of them:

The button for skipping the "Running Checks" step.

If some of the Checks have already started running or completed with a result (regardless of their status), this action will immediately mark the "Running Checks" step as skipped and proceed with finishing the Deployment.

Afterwards, once the Deployment has already proceeded with finishing, the Integrations responsible for all of the Checks will be informed that you have chosen to ignore the results of all the Checks entirely, and may then choose to stop any work on their side and mark their Checks as skipped.

Any Checks that have already provided a result will retain the status they've completed with. This guarantees that it will still be possible to understand which of the Checks have provided which results when inspecting the Deployment again in the future.

In the case that none of the Checks have started running yet, this action will mark the "Running Checks" step as skipped.

Afterwards, the Integrations responsible for all of the Checks will be informed that they won't have to run their Checks, and may then choose to avoid any work on their side and mark the Checks as skipped.

When the Deployment then reaches the "Running Checks" step, the step won't run, and the Deployment will instead proceed with finishing.

If a Check on your Deployment has failed, it is possible to rerun it by clicking "Rerun" on the respective Check, if the Integration has chosen to enable this option:

The button for running a Check again.

After clicking the button, the Integration will be notified of your intent to rerun the Check and will perform the same action that it performed for the original run. The Check's Status will switch to Running immediately, ending with the Status that the Integration determines.

Integrations may choose to let you rerun a Check multiple times or not. However, you can always select the "Redeploy" action on the Deployment View if you'd like to force all Checks to be rerun.
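After a rerun completes, the Integration reports the new result back to Vercel. The sketch below builds such an update request; the PATCH endpoint and the `status`/`conclusion` field names are assumptions about the Checks API's shape and should be confirmed against the Vercel REST API reference.

```typescript
// Hypothetical sketch of an Integration reporting a rerun result back
// to Vercel. Endpoint path and field names are assumptions.
function buildCheckUpdate(
  deploymentId: string,
  checkId: string,
  conclusion: "succeeded" | "failed",
) {
  return {
    method: "PATCH",
    url: `https://api.vercel.com/v1/deployments/${deploymentId}/checks/${checkId}`,
    body: JSON.stringify({ status: "completed", conclusion }),
  };
}
```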

It is recommended to invest time into resolving the underlying reasons why your Checks are failing instead of rerunning them, as this will prevent flakiness and save you time.