
Web Vitals Metrics Overview

This guide lists out and explains all the metrics provided by Vercel's Web Vitals feature.

Enabling Vercel Analytics gives you access to the Web Vitals view, which provides insights into the score and the individual metrics without modifying your code or leaving the dashboard. You can analyze your site's performance without adding new scripts or headers.

An example of the Analytics tab on the project view.

Once the Web Vitals page is initialized, you'll see an interface where the collected data can be filtered and viewed based on the following:

  • Device type, for example Mobile, Tablet, and Desktop
  • Percentile of data for a certain percentage of users: P75, P90, P95, P99
  • Reporting window for data points, from Last Day to Last 28 Days (available values vary by account type)

The Web Vitals view also shows a list of all the Page Names and URLs visited by your app users. You can sort these pages based on Data Points, Real Experience Score, and Page Name, where Page Names are the actual pages you've built, and URLs are the paths requested by the visitor.

Web Vitals per page and url of your app.

For a Page or URL to appear on the Web Vitals page, a minimum of 10 data points is required in the time range you're looking at. It is important to note that only fresh page loads report Web Vitals. Client-side page transitions will not.

When collecting data points for every visit to your application, the Web Vitals feature sends requests directly from the visitor's browser to Vercel's servers, where the data points are processed and stored.
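For illustration, this is roughly what such client-side reporting looks like if you wire it up yourself in a Next.js pages-router app. The /api/vitals endpoint is hypothetical; Vercel's own bundle sends comparable payloads to its servers automatically.

```tsx
// pages/_app.tsx
import type { AppProps, NextWebVitalsMetric } from 'next/app';

// Next.js calls this once per Web Vital collected on a fresh page load.
export function reportWebVitals(metric: NextWebVitalsMetric) {
  const body = JSON.stringify(metric);
  // Prefer sendBeacon so the request survives the page being unloaded;
  // '/api/vitals' is a hypothetical collection endpoint.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/vitals', body);
  } else {
    fetch('/api/vitals', { body, method: 'POST', keepalive: true });
  }
}

export default function MyApp({ Component, pageProps }: AppProps) {
  return <Component {...pageProps} />;
}
```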

If you've configured a Content Security Policy in your application, you need to ensure that the domain vitals.vercel-insights.com is allowed for outgoing requests, since (as described above) the client-side bundle reports Web Vitals via network requests.
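A minimal sketch of allowing that domain, assuming a Next.js app that sets its policy via the headers() option in next.config.js; merge the connect-src entry into whatever policy you already serve.

```ts
// next.config.js — illustrative only; adapt to your existing policy.
module.exports = {
  async headers() {
    return [
      {
        // Apply the policy to every route.
        source: '/(.*)',
        headers: [
          {
            key: 'Content-Security-Policy',
            // connect-src must allow the analytics endpoint.
            value:
              "default-src 'self'; connect-src 'self' https://vitals.vercel-insights.com",
          },
        ],
      },
    ];
  },
};
```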

While other performance measuring tools like Lighthouse estimate your user's experience by running a simulation in a lab, Vercel's Real Experience Score is calculated using real data points collected from the devices of the actual users of your application.

Because of that, it provides a true measure of how users actually experience what you've built.

This enables a continuous stream of performance measurements over time, integrated into your development workflow. Using our dashboard, you can easily correlate changes in performance to new deployments.

An example of a Real Experience Score over time.

Note: The timestamps in the Web Vitals view are in local time (not UTC).

Since the Real Experience Score shown via the Analytics tab is calculated from real data points collected from the devices of your visitors, it will only provide you with insight into how well your app is performing once you've deployed.

It's essential to collect these data points from real visitors, because it allows you to make decisions based on the actual experience of your users, rather than purely basing them on guesses or artificial tests.

However, to help guarantee that your app won't regress in its performance scores, Vercel also offers an optional Virtual Experience Score (VES), provided by Integrations like Checkly that make use of Deployment Checks.

Once you've configured an Integration that supports Performance Checks, the checks will run every time you create a Deployment and validate whether the user experience has improved or regressed.

Just like the Real Experience Score, the Virtual Experience Score is calculated from four separate Web Vitals, which are shown below. The only differences between the two scores are that the Virtual Experience Score is calculated from simulated visits rather than real ones, and that Total Blocking Time (TBT) is used in place of First Input Delay (FID).

Core Web Vitals

A collection of metrics established by Google in conjunction with the Web Performance Working Group that track the loading speed, responsiveness, and visual stability of your web application.

First Contentful Paint (FCP)

Measures loading speed, or when the first content of the page has been displayed. For example, when opening a link to a social media profile, the amount of time that passes before the first pieces of information about the profile I'm looking at show up is FCP.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Largest Contentful Paint (LCP)

Measures perceived loading speed, or when the page's main content has been displayed. For example, when I open a link to buy a pair of sneakers, the amount of time that passes before I see my sneakers, their price, and the "Add to Cart" button is LCP.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Cumulative Layout Shift (CLS)

Measures visual stability, or how much elements move after being displayed to the user. For example, we've all experienced the frustration of trying to tap a button that moved because an image loaded late. That's CLS.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

First Input Delay (FID)

Measures page responsiveness, or how long your users wait to see the reaction to their first interaction with the page. For example, the amount of time between me clicking "Add to Cart" and the number of items in my cart incrementing is FID.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Interaction to Next Paint (INP)

Interaction to Next Paint (INP) measures your site's responsiveness to user interactions on the page. The faster your page responds to user input, the better. This experimental metric is Google's effort to develop a better way of measuring responsiveness than First Input Delay (FID).

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.
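Independent of Vercel Analytics, the same metrics can be observed in your own code with Google's open-source web-vitals package, which exposes one callback per metric. A minimal sketch:

```ts
import { onCLS, onFCP, onFID, onINP, onLCP } from 'web-vitals';

// Each handler fires when its metric becomes available; CLS and INP may
// only finalize late in the page's lifecycle (e.g. when the tab is hidden).
function logMetric(metric: { name: string; value: number }) {
  console.log(metric.name, metric.value);
}

onFCP(logMetric);
onLCP(logMetric);
onCLS(logMetric);
onFID(logMetric);
onINP(logMetric);
```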

Other Web Vitals

As mentioned by Google, retrieving Web Vitals in a simulated environment requires thinking about them in a slightly different way. Because no real user is sending a request to the application, the tests cannot use the exact same Web Vitals.

Total Blocking Time (TBT)

Used instead of First Input Delay (FID) when the Virtual Experience Score is determined, because FID requires real user input, which is not available in a simulated environment.

TBT measures the total amount of time between First Contentful Paint (FCP) and Time to Interactive (TTI) during which the main thread was blocked for long enough to prevent input responsiveness.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.
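As a rough illustration of that definition (a sketch, not Vercel's or Lighthouse's implementation): every main-thread task longer than 50 ms contributes its excess over 50 ms, summed across the FCP-to-TTI window.

```ts
interface LongTask {
  start: number;    // ms since navigation start
  duration: number; // ms
}

// Sum the "blocking" portion (anything over the 50 ms budget) of each
// task that falls within the FCP–TTI window.
function totalBlockingTime(tasks: LongTask[], fcp: number, tti: number): number {
  return tasks
    .filter((t) => t.start >= fcp && t.start + t.duration <= tti)
    .reduce((tbt, t) => tbt + Math.max(0, t.duration - 50), 0);
}

// Two long tasks of 120 ms and 80 ms: TBT = 70 + 30 = 100 ms.
console.log(totalBlockingTime(
  [{ start: 1000, duration: 120 }, { start: 2000, duration: 80 }],
  800,  // FCP
  3000, // TTI
));
```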

A data point is a single recorded measurement for a specific Web Vital metric. Each time a user visits your website, analytics collects one data point for each Core Web Vital metric that can be measured, resulting in roughly 2-4 data points per visit (not every metric is reported on every visit; FID, for example, requires a user interaction).

For each of the metrics that are collected (such as First Contentful Paint), a metric score between 0 and 100 is calculated by checking into which grade the raw metric value (such as 1.87 seconds, in the case of FCP) falls based on a log-normal distribution derived from real website performance data on HTTP Archive.

In the case of Largest Contentful Paint, for example, HTTP Archive shows about 1,220ms for the top-performing sites, which allows Vercel to map that metric value to a score of 99. Based on this piece of information and the LCP metric value of your own project, Vercel can then calculate your project's LCP score, for example.
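Vercel's exact scoring curve isn't spelled out in this guide, but the shape of the mapping can be sketched from the two anchors given in the tables below (the metric values that map to scores of 50 and 90), assuming a Lighthouse-style log-normal curve:

```ts
// Standard normal CDF via the Abramowitz–Stegun approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const p =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

// Map a raw metric value to a 0-100 score. score50 is the value that
// should map to 50 (the distribution's median) and score90 the value
// that should map to 90; both come from the anchor tables below.
function metricScore(value: number, score50: number, score90: number): number {
  const mu = Math.log(score50);
  const z90 = -1.28155; // z-score where 1 - CDF(z) = 0.9
  const sigma = (Math.log(score90) - mu) / z90;
  return Math.round(100 * (1 - normalCdf((Math.log(value) - mu) / sigma)));
}

// An LCP of 1,220 ms against the mobile anchors (4 s -> 50, 2.5 s -> 90)
// rounds to 100, close to the score of 99 mentioned above.
console.log(metricScore(1220, 4000, 2500));
```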

Based on the scores of all the individual metrics (which are calculated as described above), Vercel then calculates a weighted average: The Real Experience Score.

The following weightings were chosen by Vercel to provide an ideal representation of the user's perception of performance on a Mobile device (the table includes the score anchors retrieved from HTTP Archive, as described above):

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| FCP    | 20%    | 4s          | 2.3s        |
| LCP    | 35%    | 4s          | 2.5s        |
| FID    | 30%    | 300ms       | 100ms       |
| CLS    | 15%    | 0.25        | 0.10        |

For Desktop devices, however, the following values are used instead:

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| FCP    | 20%    | 1.6s        | 900ms       |
| LCP    | 35%    | 2.4s        | 1.2s        |
| FID    | 30%    | 300ms       | 100ms       |
| CLS    | 15%    | 0.25        | 0.10        |

In the case of the Virtual Experience Score, Total Blocking Time (TBT) is used for Desktop instead of First Input Delay (FID):

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| TBT    | 30%    | 350ms       | 150ms       |
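Putting the tables together: given the four individual metric scores, the overall score is their weighted average. A sketch using the mobile weights above (illustrative, not Vercel's code):

```ts
interface MetricScores {
  fcp: number; // each individual score is 0-100
  lcp: number;
  fid: number;
  cls: number;
}

// Weighted average with the mobile weights: 20/35/30/15.
function realExperienceScore({ fcp, lcp, fid, cls }: MetricScores): number {
  return Math.round(0.2 * fcp + 0.35 * lcp + 0.3 * fid + 0.15 * cls);
}

// Strong paint scores but poor responsiveness still drag the overall
// score into the orange "Needs Improvement" band.
console.log(realExperienceScore({ fcp: 95, lcp: 92, fid: 40, cls: 90 })); // 77
```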

The percentile dropdown allows you to filter your analytics to show data for a certain percentage of users. The default is set to P75 for the best overview of the majority.

  • P75: The real experience of the majority (75%) of your users, filtering out the slowest 25% outliers.
  • P90: The real experience of 90% of your users, filtering out the slowest 10% outliers.
  • P95: The real experience of 95% of your users, filtering out the slowest 5% outliers.
  • P99: The real experience of 99% of your users, filtering out the slowest 1% outliers.

For example, a P75 score of 1 second means 75% of your users have a First Contentful Paint (FCP) faster than 1 second, while a P99 score of 8 seconds means 99% of your users have a First Contentful Paint (FCP) faster than 8 seconds.
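As a concrete sketch of what the percentile filter computes, here is a nearest-rank percentile over a handful of hypothetical FCP samples:

```ts
// Nearest-rank percentile: the smallest value such that at least p% of
// the data points are at or below it.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const fcpSamples = [480, 520, 610, 700, 890, 950, 1200, 3400]; // ms
// 6 of the 8 samples (75%) are at or below 950 ms.
console.log(percentile(fcpSamples, 75)); // 950
```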

The Real Experience Score, the Virtual Experience Score, and the individual Core Web Vitals (including Other Web Vitals) are colored like so:

  • 0 to 49 (red): Poor
  • 50 to 89 (orange): Needs Improvement
  • 90 to 100 (green): Good
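Expressed as a simple lookup (same thresholds as the list above):

```ts
type Band = 'Poor' | 'Needs Improvement' | 'Good';

// Map a 0-100 score to its color-coded band.
function scoreBand(score: number): Band {
  if (score >= 90) return 'Good';
  if (score >= 50) return 'Needs Improvement';
  return 'Poor';
}

console.log(scoreBand(77)); // "Needs Improvement"
```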

In order to provide your users with an ideal experience, you should strive for a good Real Experience Score and Virtual Experience Score (90 to 100).

However, you are not expected to achieve a "perfect" score of 100, as that's extremely challenging. Taking a score from 99 to 100, for example, requires a much larger metric improvement than going from 90 to 94 (due to diminishing returns).

In general, a better Real Experience Score and/or a better Virtual Experience Score also implies a better end-user experience, so it is recommended to invest time into improving their respective Web Vital Scores.

However, the score colors don't change in direct proportion to the value (for example, going from 50 to 80 still shows orange). Any improvement within a color-coded segment improves the end-user experience, but your site tends not to be ranked higher in search engines until the score moves into a higher segment.

To get your site ranked higher in search results, only a jump into a higher color-coded segment (for example, from orange to green) is meaningful enough.