With the Analytics view, you gain insight into metrics that help you improve the end-user experience of your project by refining its technical implementation.

Based on all the metrics mentioned below, Vercel calculates the Real Experience Score:

An example of a Real Experience Score.

While other performance measuring tools like Lighthouse estimate your user's experience by running a simulation in a lab, Vercel's Real Experience Score is calculated using real data points collected from the devices of the actual users of your application.

Because of that, it reflects how users actually experience what you've built.

This enables a continuous stream of performance measurements, over time, integrated into your development workflow. Using our dashboard, you can easily correlate changes in performance to new deployments:

An example of a Real Experience Score over time.

Note: The timestamps in the Analytics view are in local time (not UTC).

Since the Real Experience Score shown in the "Analytics" tab on the dashboard is calculated from real data points collected from the devices of your visitors, it will only provide you with insight into how well your app is performing once you've deployed.

It's essential to collect these data points from real visitors, because it allows you to make decisions based on the actual experience of your users, rather than purely basing them on guesses or artificial tests.

However, to help ensure that your app's performance scores don't regress, Vercel also offers an optional Virtual Experience Score (VES), which is provided by Integrations like Checkly that make use of Deployment Checks:

An example of a Virtual Experience Score.

Once you've configured an Integration that supports Performance Checks, the checks will run every time you create a Deployment and validate whether the user experience has improved or regressed.

Just like the Real Experience Score, the Virtual Experience Score is calculated from four separate Web Vitals, which are shown below. The differences between the two scores are covered further below.

A collection of metrics established by Google in conjunction with the Web Performance Working Group that track the loading speed, responsiveness, and visual stability of your web application.

Measures loading speed, or when the first content of the page has been displayed. For example, when opening a link to a social media profile, the amount of time that passes before the first pieces of information about that profile show up is First Contentful Paint (FCP).

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Measures perceived loading speed, or when the page's main content has likely been displayed. For example, when I open a link to buy a pair of sneakers, the amount of time that passes before I see my sneakers, their price, and the "Add to Cart" button is Largest Contentful Paint (LCP).

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Measures visual stability, or how much elements move after being displayed to the user. For example, we've all experienced the frustration of trying to tap a button that moved because an image loaded late — that's CLS.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Measures page responsiveness, or how long your users wait to see the reaction of their first interaction with the page. For example, the amount of time between me clicking "Add to Cart" and the number of items in my cart incrementing is FID.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

As mentioned by Google, retrieving Web Vitals in a simulated environment requires thinking about them in a slightly different way. Because no real user is sending a request to the application, the tests cannot use the exact same Web Vitals.

Total Blocking Time (TBT) is used instead of First Input Delay (FID) when the Virtual Experience Score is determined, because FID requires real user input, which is not available in a simulated environment.

Measures the total amount of time between First Contentful Paint (FCP) and Time to Interactive (TTI) where the main thread was blocked for long enough to prevent input responsiveness.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.
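As a sketch of how TBT is derived from main-thread activity (the function name and the task data are hypothetical, and real tools additionally clip each task's time to the FCP–TTI window, which is simplified away here):

```python
# Sketch: computing Total Blocking Time (TBT) from a list of main-thread
# "long tasks". Per the definition above, any task longer than 50ms that
# runs between FCP and TTI contributes its time beyond the 50ms threshold.

def total_blocking_time(tasks, fcp, tti, threshold=50.0):
    """tasks: iterable of (start_ms, duration_ms) tuples."""
    blocking = 0.0
    for start, duration in tasks:
        end = start + duration
        # Skip tasks entirely outside the FCP..TTI window.
        if end <= fcp or start >= tti:
            continue
        # Only the portion beyond the 50ms threshold blocks input.
        blocking += max(0.0, duration - threshold)
    return blocking

# A 250ms task contributes 200ms of blocking time; a 40ms task contributes nothing.
print(total_blocking_time([(1000, 250), (2000, 40)], fcp=800, tti=5000))  # 200.0
```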

For each metric that is collected (such as First Contentful Paint), a metric score between 0 and 100 is calculated. The raw metric value (such as 1.87 seconds, in the case of FCP) is placed on a log-normal distribution derived from real website performance data on HTTP Archive, which determines the grade it falls into.

In the case of Largest Contentful Paint, for example, HTTP Archive shows about 1,220ms for the top-performing sites, which allows Vercel to map that metric value to a score of 99. Based on this anchor and the LCP metric value of your own project, Vercel can then calculate your project's LCP score.
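Vercel's exact formula isn't published, but a Lighthouse-style log-normal scoring curve matching the description above can be sketched as follows, assuming the "Score of 50" anchor is the curve's median and the "Score of 90" anchor pins the 90-point (the function name is hypothetical):

```python
from math import log
from statistics import NormalDist

def metric_score(value, median, p90_value):
    """Map a raw metric value to a 0-100 score on a log-normal curve,
    anchored so that `median` maps to 50 and `p90_value` maps to 90."""
    mu = log(median)
    # Choose sigma so the log-normal CDF at p90_value equals 0.10,
    # i.e. the complementary CDF (the score) equals 90%.
    sigma = (mu - log(p90_value)) / -NormalDist().inv_cdf(0.10)
    z = (log(value) - mu) / sigma
    # The score is the complementary CDF, scaled to 0-100.
    return 100 * (1 - NormalDist().cdf(z))

# Mobile LCP anchors (see the tables below): 4s -> 50, 2.5s -> 90.
# An LCP of 1,220ms then lands at 99, matching the example above.
print(int(metric_score(1220, median=4000, p90_value=2500)))  # 99
```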

Based on the scores of all the individual metrics (which are calculated as described above), Vercel then calculates a weighted average: The Real Experience Score.

The following weightings were chosen by Vercel to provide an ideal representation of the user's perception of performance on a Mobile device (the table includes the score anchors retrieved from HTTP Archive, as described above):

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| FCP    | 20%    | 4s          | 2.3s        |
| LCP    | 35%    | 4s          | 2.5s        |
| FID    | 30%    | 300ms       | 100ms       |
| CLS    | 15%    | 0.25        | 0.10        |
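Combining the per-metric scores into the weighted average can be sketched as follows, using the mobile weights from the table above (the function name and the example scores are hypothetical):

```python
# Mobile metric weights, as listed in the table above.
WEIGHTS = {"FCP": 0.20, "LCP": 0.35, "FID": 0.30, "CLS": 0.15}

def experience_score(scores):
    """Weighted average of per-metric scores (each 0-100)."""
    return sum(WEIGHTS[metric] * score for metric, score in scores.items())

# e.g. strong paint metrics but poor responsiveness and visual stability:
print(round(experience_score({"FCP": 95, "LCP": 90, "FID": 60, "CLS": 40}), 1))  # 74.5
```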

For Desktop devices, however, the following values are used instead:

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| FCP    | 20%    | 1.6s        | 900ms       |
| LCP    | 35%    | 2.4s        | 1.2s        |
| FID    | 30%    | 300ms       | 100ms       |
| CLS    | 15%    | 0.25        | 0.10        |

In the case of the Virtual Experience Score, Total Blocking Time (TBT) is used for Desktop instead of First Input Delay (FID):

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| TBT    | 30%    | 350ms       | 150ms       |

The percentile dropdown allows you to filter your analytics to show data for a certain percentage of users. The default is set to P75 for the best overview of the majority.

  • P75 – The real experience of the majority (75%) of your users, filtering out the slowest 25% outliers.
  • P90 – The real experience of 90% of your users, filtering out the slowest 10% outliers.
  • P95 – The real experience of 95% of your users, filtering out the slowest 5% outliers.
  • P99 – The real experience of 99% of your users, filtering out the slowest 1% outliers.

For example, a P75 score of 1 second means 75% of your users have a First Contentful Paint (FCP) faster than 1 second, while a P99 score of 8 seconds means 99% of your users have a First Contentful Paint (FCP) faster than 8 seconds.

Example of Response Time vs. Percentile

The Real Experience Score, the Virtual Experience Score, and the individual Core Web Vitals (including Other Web Vitals) are colored like so:

  • 0 to 49 (red): Poor
  • 50 to 89 (orange): Needs Improvement
  • 90 to 100 (green): Good
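The color bands above map directly to score thresholds, which can be sketched as (the function name is hypothetical):

```python
def grade(score):
    """Map a 0-100 score to its color band, per the thresholds above."""
    if score < 50:
        return "red"     # Poor
    if score < 90:
        return "orange"  # Needs Improvement
    return "green"       # Good

print(grade(49), grade(50), grade(90))  # red orange green
```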

In order to provide your users with an ideal experience, you should strive for a good Real Experience Score and Virtual Experience Score (90 to 100).

However, you are not expected to achieve a "perfect" score of 100, as that's extremely challenging. Taking a score from 99 to 100, for example, requires a much larger metric improvement than going from 90 to 94 (due to diminishing returns).

In general, a better Real Experience Score and/or a better Virtual Experience Score also implies a better end-user experience, so it is recommended to invest time into improving their respective Web Vital Scores.

However, the score colors don't change in direct proportion to their value (going from 50 to 80, for example, still shows orange). Any improvement within a color-coding segment improves the end-user experience, but it does not necessarily improve how your site is ranked in search engines.

To meaningfully affect your ranking in search results, you generally need to move into a higher color-coding segment (for example, from orange to green).