With the Analytics view, you gain insight into metrics that help you improve the end-user experience of your project by polishing its technical implementation.

Real Experience Score

Based on all the metrics mentioned below, Vercel calculates the Real Experience Score:

An example of a Real Experience Score.

While other performance measuring tools like Lighthouse estimate your users' experience by running a simulation in a lab, Vercel's Real Experience Score is calculated using real data points collected from the devices of the actual users of your application.

Because of that, it provides a real grade of how users actually experience what you've built.

This enables a continuous stream of performance measurements over time, integrated into your development workflow. Using our dashboard, you can easily correlate changes in performance to new deployments:

An example of a Real Experience Score over time.

Note: The timestamps in the Analytics view are in local time (not UTC).

Core Web Vitals

Among the metrics collected are the Web Vitals, a collection of metrics established by Google in conjunction with the Web Performance Working Group that track the loading speed, responsiveness, and visual stability of your web application.

First Contentful Paint (FCP)

Measures loading speed, or when the first content of the page has been displayed. For example, when I open a link to a social media profile, FCP is the amount of time that passes before the first pieces of information about that profile show up.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Largest Contentful Paint (LCP)

Measures perceived loading speed, or when the largest content element of the page has been displayed. For example, when I open a link to buy a pair of sneakers, the amount of time that passes before I see my sneakers, their price, and the "Add to Cart" button is LCP.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

Cumulative Layout Shift (CLS)

Measures visual stability, or how much elements move after being displayed to the user. For example, we've all experienced the frustration of trying to tap a button that moved because an image loaded late — that's CLS.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

First Input Delay (FID)

Measures page responsiveness, or how long your users wait before the page starts reacting to their first interaction. For example, the delay between me clicking "Add to Cart" and the browser beginning to process that click is FID.

To learn more about how this metric is retrieved and how you can improve it, check out the web.dev documentation.

How the Scores Are Determined

For each metric collected (such as First Contentful Paint), Vercel calculates a metric score between 0 and 100. It does so by checking which grade the raw metric value (such as 1.87 seconds, in the case of FCP) falls into, based on a log-normal distribution derived from real website performance data on HTTP Archive.

In the case of Largest Contentful Paint, for example, HTTP Archive reports about 1,220ms for the top-performing sites, which allows Vercel to map that metric value to a score of 99. Based on this anchor and your own project's LCP metric value, Vercel can then calculate your project's LCP score.
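As a rough sketch of this mapping (a minimal approximation, not Vercel's exact implementation), a log-normal curve can be anchored at two control points: the raw value that should score 50 and the raw value that should score 90, as listed in the tables below.

```python
import math

def metric_score(value, score50, score90):
    """Map a raw metric value to a 0-100 score on a log-normal curve
    anchored at two control points: the value that should score 50
    (the median) and the value that should score 90."""
    mu = math.log(score50)
    # Pick sigma so the score-90 anchor lands at the 10th percentile
    # of the log-normal distribution (z-score for p = 0.10 is -1.28155).
    sigma = (mu - math.log(score90)) / 1.28155
    z = (math.log(value) - mu) / sigma
    # Score = 1 - CDF(value), written via the complementary error function.
    return round(100 * 0.5 * math.erfc(z / math.sqrt(2)))

# Mobile LCP anchors (in ms) from the table below: 4s -> 50, 2.5s -> 90.
print(metric_score(4000, 4000, 2500))  # 50
print(metric_score(2500, 4000, 2500))  # 90
print(metric_score(1220, 4000, 2500))  # a fast site scores close to 100
```

This curve shape also explains the diminishing returns near 100: the faster you already are, the larger the raw improvement needed to gain another point.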

Based on the scores of all the individual metrics (which are calculated as described above), Vercel then calculates a weighted average: The Real Experience Score.

The following weightings were chosen by Vercel to provide an ideal representation of the user's perception of performance on a Mobile device (the "Score of 50" and "Score of 90" columns show the score anchors retrieved from HTTP Archive, as described above):

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| FCP    | 20%    | 4s          | 2.3s        |
| LCP    | 35%    | 4s          | 2.5s        |
| FID    | 30%    | 300ms       | 100ms       |
| CLS    | 15%    | 0.25        | 0.10        |

For Desktop devices, however, the following weightings and score anchors are used instead:

| Metric | Weight | Score of 50 | Score of 90 |
| ------ | ------ | ----------- | ----------- |
| FCP    | 20%    | 1.6s        | 900ms       |
| LCP    | 35%    | 2.4s        | 1.2s        |
| FID    | 30%    | 300ms       | 100ms       |
| CLS    | 15%    | 0.25        | 0.10        |
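Given the individual metric scores, the weighted average can be sketched as follows (the weights come from the tables above; the per-metric scores here are hypothetical):

```python
# Metric weights from the tables above (identical for Mobile and Desktop).
WEIGHTS = {"FCP": 0.20, "LCP": 0.35, "FID": 0.30, "CLS": 0.15}

def real_experience_score(scores):
    """Weighted average of the individual metric scores (0-100 each)."""
    return round(sum(WEIGHTS[m] * s for m, s in scores.items()))

# Hypothetical per-metric scores for a project:
scores = {"FCP": 95, "LCP": 88, "FID": 100, "CLS": 75}
print(real_experience_score(scores))  # 91
```

Note how LCP and FID dominate the result: together they account for 65% of the final score.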

How the Percentiles Are Calculated

The percentile dropdown allows you to filter your analytics to show data for a certain percentage of users. We default to P75 for the best overview of the majority.

  • P75 – The real experience of the majority of your users
  • P90 – The real experience of the slowest 10% of your users
  • P95 – The real experience of the slowest 5% of your users
  • P99 – The real experience of the slowest 1% of your users

For example, a P75 score of 1 second means 75% of your users have a First Contentful Paint (FCP) faster than 1 second.

Example of Response Time vs. Percentile

How the Scores Are Color-Coded

The individual metric scores and the Real Experience Score are colored according to the following ranges:

  • 0 to 49 (red): Poor
  • 50 to 89 (orange): Needs Improvement
  • 90 to 100 (green): Good
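These ranges amount to a simple lookup; as a sketch (the function name is illustrative):

```python
def score_rating(score):
    """Classify a 0-100 score into the color-coded ranges above."""
    if score >= 90:
        return "green (Good)"
    if score >= 50:
        return "orange (Needs Improvement)"
    return "red (Poor)"

print(score_rating(95))  # green (Good)
print(score_rating(72))  # orange (Needs Improvement)
print(score_rating(31))  # red (Poor)
```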

In order to provide your users with an ideal experience, you should strive for a good Real Experience Score (90 to 100).

However, you are not expected to achieve a "perfect" score of 100, as that's extremely challenging. Improving a score from 99 to 100, for example, requires as much metric improvement as going from 90 to 94, due to diminishing returns.