To help you build better experiences for your users, Vercel automatically keeps an eye on various aspects of your web application:
- For example, if an error occurs within your Deployments, it is collected, stored, and displayed to you as Logs, so you can make the changes necessary to resolve the problem.
- Similarly, if a user encounters other problems while interacting with your application, such as slow page loads or layout shifts, Analytics will inform you about them.
With Checks, regressions like the ones mentioned above are prevented from being deployed to Production, so users never encounter them in the first place:
Checks are powered by Integrations, which allow you to connect any third-party service of your choice with Vercel.
To find out whether an Integration already exists for the service you'd like to use, visit the Integration Marketplace and search for it. Note, however, that not all Integrations provide Checks.
If you'd like to perform Reliability or Performance Tests (including custom End-to-End Tests) with Checks, Vercel recommends the Checkly Integration. You can also add a custom Check yourself.
Whenever you create a new Deployment, Vercel uses its source files to understand which parts of your app have changed, so that unnecessary work in the individual Deployment Steps can be avoided and your Deployment finishes quickly.
Within the Build Step, for example, the platform provides your Project's framework with caches, so that it can avoid recompiling pages that were already compiled in previous Deployments.
In the case of the "Running Checks" Step, because Checks may perform any arbitrary action on your Deployment, it's up to the Integration to decide which parts of your application to act on with every Deployment.
For example, when using the Checkly Integration, no matter which page has changed, all configured pages are acted on with every Deployment:
To help you understand which pages were acted on, Integrations are able to provide their paths as part of the Check's results, as you can see above. This indicator reflects the URL path at which visitors can access the page.
Because the Integration chooses which pages to act on, depending on which Integration you've added and how you've configured it, you might see different amounts of Checks appear for your Deployments.
If one of your Deployment's pages isn't listed in the Checks, please refer to the settings of the third-party service you've integrated with. In the case of the Checkly Integration, for example, you can make additional Checks appear like so.
Afterwards, the Integration will be given the opportunity to register a new Check before the Deployment reaches the "Running Checks" step. Depending on which Integration you're using, this means that Checks might either appear very early, or shortly before the step begins running. It's up to the Integration to decide when to reveal a Check.
Once the Deployment has reached the "Running Checks" step, all the Integrations that registered Checks for the Deployment will be notified again at the same time, and will then be able to perform their particular actions on it.
As the third-party services behind the Integrations complete their work, they report their results back to the platform. Since different services take different amounts of time to perform their actions, the results will likely arrive at different times.
Only once all the Checks have reported a result is the "Running Checks" step considered complete, and the Deployment proceeds to finish.
At the time at which the "Running Checks" step starts, only the Automatic Deployment URL will be available, which is therefore also the URL that Integrations use to perform their actions on your Deployment.
Once a Check has finished running, it provides a summary of how the action it performed completed.
Depending on the type of action and whether it succeeded or not, this will cause different results to be displayed within the "Running Checks" step:
The most important piece of information that a Check will provide once it has completed running is the Status displayed on the left side of every Check.
It can have one of the following values:
- Hasn't started running yet.
- Finished running successfully.
- Finished running with a failure.
- Finished running, but the result is neither positive nor negative.
- Prevented from continuing to run on the third-party service (usually manually).
- Deemed to no longer be relevant by the third-party service and stopped.
- Registered, but didn't update within the Maximum Duration.
All statuses except Running are final: once set, they can never change again on the Deployment on which the Check ran.
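As a rough sketch, the statuses above can be modeled like this in TypeScript; the lowercase status names are illustrative assumptions, not the platform's exact identifiers:

```typescript
// Sketch of the Check statuses described above. The lowercase names are
// illustrative assumptions — the platform's exact identifiers may differ.
type CheckStatus =
  | "pending"   // hasn't started running yet
  | "running"   // currently running
  | "succeeded" // finished running successfully
  | "failed"    // finished running with a failure
  | "neutral"   // finished, but the result is neither positive nor negative
  | "canceled"  // prevented from continuing to run (usually manually)
  | "skipped"   // deemed no longer relevant and stopped
  | "stale";    // didn't update within the Maximum Duration

// Once a Check leaves the non-final states, its status can never change
// again on that Deployment. (The text singles out Running; a Pending
// Check likewise still transitions before it starts running.)
function isFinal(status: CheckStatus): boolean {
  return status !== "pending" && status !== "running";
}
```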
If the Failed or Stale status is applied, the "Running Checks" step will be marked as failed and the Deployment will not continue to finish, unless the step is skipped:
When registering, however, the Check may have chosen to mark itself as "non-blocking", which means that, regardless of which of these statuses are applied, the Check will never cause the "Running Checks" step to fail.
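The interplay between statuses and the "non-blocking" flag could be sketched as follows; the shape of this Check object is a simplified assumption for illustration, not the platform's actual API:

```typescript
// Simplified sketch of how the "Running Checks" step outcome could be
// derived. The shape of this Check object is an assumption for
// illustration, not the platform's actual API.
interface Check {
  status:
    | "pending" | "running" | "succeeded" | "failed"
    | "neutral" | "canceled" | "skipped" | "stale";
  blocking: boolean; // chosen by the Integration when registering the Check
}

// The step fails only if a *blocking* Check ends with "failed" or "stale";
// a non-blocking Check never causes the step to fail, whatever its status.
function runningChecksStepFailed(checks: Check[]): boolean {
  return checks.some(
    (c) => c.blocking && (c.status === "failed" || c.status === "stale")
  );
}
```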
Once a Check has begun running, it might choose to reveal a link on which you can view more details about its progress and/or its completion results:
If present, the link is accessible on the right side of every Check, and points to an external page outside the Vercel dashboard, provided by the third-party service you've integrated with.
If a Check is responsible for asserting the performance of your Deployment, it might choose to display a Virtual Experience Score (VES) as part of its results, which will rate your app's user experience on a scale from 1 (worst) to 100 (best).
In the case of the Checkly Integration, for example, the results might look like this:
Integrations will determine your Virtual Experience Score by evaluating your application's performance in a simulated environment. This means that, unlike with Analytics, there are no real users involved yet.
Instead, you can safely assert the performance of your changes, before you make them available to visitors, and then confirm the Real Experience Score with Analytics.
To help you understand why your Virtual Experience Score is good or bad, the results also contain four sub scores generated from Web Vitals, which break down your Deployment's performance into more granular detail.
Each of the metrics represents a particular area of performance in which your application should rank in an excellent manner, to ultimately provide the best user experience possible, before the Deployment is made available to real visitors:
The Web Vitals displayed are the same ones that Analytics provides, except that First Input Delay (FID) is replaced with Total Blocking Time (TBT), as recommended by Google, because FID does not apply in a simulated environment.
Your Integration decides whether to assert your Web Vitals on desktop or mobile. In the case of the Checkly Integration, for example, it is always a desktop environment.
The Virtual Experience Score is calculated as a weighted average of the Web Vitals, which means that its value is derived from the values of the other four scores.
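As a sketch of what such a weighted average looks like: the weights below are placeholders for illustration only, since the actual weights an Integration uses are not specified here.

```typescript
// Illustrative weighted average over the four Web Vital sub scores.
// The weights are placeholders; the actual weights an Integration uses
// are not specified in this document.
interface WebVitalScores {
  fcp: number; // First Contentful Paint score (0–100)
  lcp: number; // Largest Contentful Paint score (0–100)
  cls: number; // Cumulative Layout Shift score (0–100)
  tbt: number; // Total Blocking Time score (0–100)
}

function virtualExperienceScore(s: WebVitalScores): number {
  // Placeholder weights that sum to 1, so no normalization is needed.
  const weights = { fcp: 0.2, lcp: 0.35, cls: 0.15, tbt: 0.3 };
  return Math.round(
    s.fcp * weights.fcp +
    s.lcp * weights.lcp +
    s.cls * weights.cls +
    s.tbt * weights.tbt
  );
}
```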
If the Web Vital Scores happen to change in such a way that the Virtual Experience Score changes in a meaningful manner, the improvement or regression will be reflected below it, including its previous value:
The scores of the individual Web Vitals, on the other hand, are derived from their underlying metric value, which is displayed below them:
Next to the metric value, its evolution compared to your last change is displayed.
One of the following symbols will be used to indicate the type of change (the symbols will be colored to match their respective Web Vital Scores):
The Web Vital's metric value has gotten worse (increased).
The Web Vital's metric value has improved (decreased).
The Web Vital's metric value has remained the same.
Depending on which Integration is used, different values might be used for comparison. In the case of the Checkly Integration, both Preview and Production Deployments are always compared to the last Production Deployment.
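The three trend symbols can be expressed as a small helper. Keep in mind that for these metrics, lower values are better, so an increase counts as a regression:

```typescript
// For these metrics, lower values are better: an increase means the
// experience got worse. This mirrors the three symbols described above.
type Trend = "worse" | "improved" | "same";

function metricTrend(previous: number, current: number): Trend {
  if (current > previous) return "worse";    // value increased
  if (current < previous) return "improved"; // value decreased
  return "same";                             // value unchanged
}
```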
If a Check was registered (set to Pending) but doesn't update with a final status within 1 hour of being registered, it is automatically considered stale and receives the respective status as a result.
The same happens if a Check has started (set to Running) but doesn't update with a final status within 5 minutes.
These automatic thresholds ensure that Checks can't prevent a Deployment from finishing for too long. Instead, they are always expected to update within a reasonable amount of time; otherwise they are ignored via the Stale status, so that the Deployment can finish.
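The two documented time limits could be encoded like this; how the platform enforces them internally is not described here, so this is only a sketch:

```typescript
// The two staleness thresholds described above, encoded as constants.
const PENDING_LIMIT_MS = 60 * 60 * 1000; // 1 hour after registration
const RUNNING_LIMIT_MS = 5 * 60 * 1000;  // 5 minutes after starting

// Returns true once a non-final Check has gone without an update for
// longer than its threshold allows.
function isConsideredStale(
  status: "pending" | "running",
  msSinceLastUpdate: number
): boolean {
  const limit = status === "pending" ? PENDING_LIMIT_MS : RUNNING_LIMIT_MS;
  return msSinceLastUpdate > limit;
}
```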
Regardless of whether the Checks on your Deployment have already finished running, you may always choose to use the "Skip" button for ignoring all of them:
If some of the Checks have already started running or completed with a result (regardless of which status), this action will immediately mark the "Running Checks" step as skipped and proceed with finishing the Deployment.
Afterwards, once the Deployment has proceeded to finish, the Integrations responsible for the Checks will be informed that you have chosen to ignore all of their results, and may then stop any remaining work on their side and mark their Checks as skipped.
Any Checks that have already provided a result will retain the status they've completed with. This guarantees that it will still be possible to understand which of the Checks have provided which results when inspecting the Deployment again in the future.
In the case that none of the Checks have started running yet, this action will mark the "Running Checks" step as skipped.
Afterwards, the Integrations responsible for all of the Checks will be informed that they won't have to run their Checks, and may then choose to avoid any work on their side and mark the Checks as skipped.
When the Deployment reaches the "Running Checks" step, the step won't run, and the Deployment will instead proceed to finish.
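Put together, the effect of skipping on each Check's status could be sketched as follows; the lowercase status names are illustrative assumptions:

```typescript
// Sketch of skipping: Checks that already reported a result keep it,
// while all others are marked as skipped. Status names are illustrative.
type Status =
  | "pending" | "running" | "succeeded" | "failed"
  | "neutral" | "canceled" | "stale" | "skipped";

function applySkip(statuses: Status[]): Status[] {
  return statuses.map((s) =>
    s === "pending" || s === "running" ? "skipped" : s
  );
}
```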
If a Check on your Deployment has failed, it is possible to run it again by clicking "Rerun" on the respective Check, if the Integration has chosen to enable this option:
After clicking the button, the Integration will be notified of your intent to run the Check again and will perform the same action it performed for the original run. This causes the Check's status to switch to Running immediately, and then end with whatever status the Integration determines.
Integrations may choose to let you rerun a Check multiple times, or not. You can always select the "Redeploy" action on the Deployment View if you'd like to force all of the Checks to be run again, however.
It is generally recommended to invest time in resolving the underlying reasons why your Checks are failing, instead of rerunning them, as this will prevent flakiness and save you time in the future.
Since Checks are powered by Integrations, you can build your own Integration in order to register any arbitrary Check for your Deployments.
You may then choose to only use the Integration for your own use cases, or publish it to the Integration Marketplace, so that other Teams on Vercel can start using it.
If the action performed by your Integration is specific to your use case, there is no need to ever publish your Integration — only if you'd like to offer your service to others too.