Feb. 12th, 2024
How Vercel improves your website’s search engine ranking
Strategies to optimize your Core Web Vitals.
Your web page’s ranking in Google Search is determined by four main factors:
- Relevancy: How close is the topic of your page to the search query?
- Quality: How trustworthy and helpful is your page to others?
- Usability: How smooth is the experience of navigating your page?
- Context: Who is the user and what are they more likely to need?
Relevancy and quality, though crucial, can be a bit of a “black box” to improve. Many strategies yield tangible results, but there are few hard and fast rules.
Context is dependent on your user. For instance, if they search for a sport, they’re likely to get their local (or favorite) team.
Usability, however, is highly measurable. Google uses transparent performance metrics to rank your application by its “page experience.” These metrics are called Core Web Vitals.
The three Core Web Vitals (LCP, CLS, and FID*) often get conflated with other, albeit helpful, metrics that measure your site’s performance for users. Only improvements to the Core Web Vitals themselves will impact your site’s search ranking.
However, all these metrics are closely related to Core Web Vitals. Optimizing one metric often yields positive results in many of the others.
Plus, improvements in each of these metrics have real, measured business impact, with the potential to increase users’ conversion rates on your site. They correlate directly to a better experience for your users.
Below, we provide a brief overview of each metric and show a few places where Vercel (or the 35+ frameworks it supports) can help you optimize your application.
Performance optimization is a deep and nuanced topic, so we’ve linked to other articles where possible to explore specific recommended strategies.
*On March 12, 2024, FID will be replaced by INP as a Core Web Vital.
TTFB represents how long it takes from a user clicking on a link to the response beginning to stream in (shortly after they see the blank page, but before actual content starts painting).
More technically, TTFB is the sum of redirect time, service worker boot time (if applicable), DNS lookup, TLS handshake, and request—up until the first byte of the response arrives.
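In the browser, those components can be read off the `PerformanceNavigationTiming` entry returned by `performance.getEntriesByType("navigation")`. A minimal sketch, assuming a timing-entry-like object (service worker boot time omitted for brevity):

```typescript
// Sketch: derive TTFB and its components from a PerformanceNavigationTiming-like
// object. All timestamps are milliseconds relative to the navigation's start.
interface NavTiming {
  startTime: number;
  redirectStart: number;
  redirectEnd: number;
  domainLookupStart: number;
  domainLookupEnd: number;
  connectStart: number;
  connectEnd: number; // connection setup, including the TLS handshake
  requestStart: number;
  responseStart: number; // when the first byte of the response arrives
}

function ttfbBreakdown(t: NavTiming) {
  return {
    redirect: t.redirectEnd - t.redirectStart,
    dnsLookup: t.domainLookupEnd - t.domainLookupStart,
    connection: t.connectEnd - t.connectStart,
    request: t.responseStart - t.requestStart,
    ttfb: t.responseStart - t.startTime, // the headline metric
  };
}
```

In a real page you would pass `performance.getEntriesByType("navigation")[0]` to this function; the interface above only lists the fields the sketch uses.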
Additionally, Partial Prerendering—a recent Next.js optimization that can be adopted into any framework thanks to Vercel’s Build Output API—replaces Edge SSR and does not block TTFB on serverless cold starts, drastically improving the metric.
For instance, within a month of Parachute migrating to Vercel, their load times improved by 60%.
FCP measures the time from the moment the page starts loading to the moment the first piece of content from the Document Object Model (DOM) is rendered on screen. This could be any content from the webpage such as an image, a block of text, or a canvas render.
Since FCP includes the time it takes to unload the previous page and establish the new connection, it can appear significantly different between field and lab testing.
More anecdotally, FCP is when your user first sees something happening on the screen, indicating that your site will be ready to use shortly, which helps to lock in their engagement.
LCP tracks how long it takes for the most noticeable element at the top of your webpage to become visible. It essentially measures how quickly the user feels they can begin using the page.
The “most noticeable element” measured by LCP can be a variety of things: your h1, a large (but not background) image, a video, etc. To see exactly which element Google is counting as your LCP, use Lighthouse.
In order to load and render the element responsible for your LCP as quickly as possible, it should ideally be static content fetched from the nearest edge location.
This is easier said than done. While homepages are relatively straightforward to keep as static content, product pages or articles often come from an external CMS. This has led to the approach of statically generating website pages at build time (often referred to as static site generation or SSG).
SSG, however, comes with the downside of not being able to change your page’s content without rebuilding the application it’s a part of. As your business scales, this can lead to long build times and the overhead that comes with slower deployment iteration.
This is why Vercel offers Incremental Static Regeneration (ISR), available simply by writing your normal framework code.
ISR means that, at the page level, you can choose whether you will serve fresh or cached data and how often that cached data should be refreshed. Crucially, this data refresh can happen on-demand or at a set interval, without redeploying your application.
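Under the hood, ISR behaves like stale-while-revalidate caching at the page level: the requester always gets the cached copy instantly, and a regeneration kicks in once the copy is older than the revalidation interval. A minimal TypeScript sketch of that model (an illustration only, not Vercel’s actual implementation; the clock is injectable so the logic can be tested deterministically):

```typescript
// Sketch of stale-while-revalidate at the page level: serve the cached page
// immediately; if it is older than `revalidateMs`, regenerate it for the
// next visitor. (Real ISR regenerates in the background on the server.)
function createIsrCache(
  render: () => string,
  revalidateMs: number,
  now: () => number = Date.now,
) {
  let entry: { html: string; generatedAt: number } | null = null;
  return function serve(): string {
    const t = now();
    if (entry === null) {
      // First request ever: generate synchronously (a "cache miss").
      entry = { html: render(), generatedAt: t };
      return entry.html;
    }
    const stale = entry.html;
    if (t - entry.generatedAt >= revalidateMs) {
      entry = { html: render(), generatedAt: t }; // refresh the cache
    }
    return stale; // this visitor still gets the instant, cached copy
  };
}
```

In Next.js itself this is exposed declaratively (for example, via a revalidation interval on the page or fetch), so you never write the caching loop yourself.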
You can optimize this so that your users always get edge-cached data when they navigate to your site, thereby improving LCP, TTI, and TBT.
Next.js takes ISR a step further, leveraging React Suspense and Server Components to give you component-level flexibility of which data is dynamic and which is cached.
As opposed to page-level control, Next.js’s fine-grained flexibility means that you can ensure that your LCP element is both static and streamed before any other piece of your page, leaving smaller dynamic components to load in as they become ready.
While doing the opposite—streaming smaller pieces first—may give a faster FCP, LCP is the Core Web Vital that is measured by Google to rank your application and indicates a better overall user experience.
TTI tracks the time from when the page starts loading to when it can reliably respond to user input quickly.
More precisely, TTI waits for a “quiet window” of 5 seconds where your page has no long tasks (tasks that run for more than 50ms on the main thread) and no more than 2 active network GET requests. Then it looks backward from that window to when the last long task ended. This point is your TTI.
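That lookback can be sketched as a small function. This is a simplified illustration that considers only long tasks; the real definition also requires no more than two in-flight network GET requests during the quiet window:

```typescript
interface LongTask {
  start: number; // ms since navigation start
  end: number;
}

// Simplified TTI: starting at FCP, scan long tasks in order. Once a 5-second
// gap with no long tasks appears, TTI is the end of the last long task before
// that gap (or FCP itself, if there were no long tasks at all).
function estimateTTI(fcp: number, longTasks: LongTask[]): number {
  let tti = fcp;
  for (const task of [...longTasks].sort((a, b) => a.start - b.start)) {
    if (task.start - tti >= 5000) break; // quiet window found
    tti = Math.max(tti, task.end);
  }
  return tti;
}
```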
With rendering techniques like server-side rendering (SSR), pages can paint content before users can click buttons on the page. This can lead to frustration, or, in a worst-case scenario, to users thinking the site is broken.
This is why you want to keep the time between FCP and TTI to an absolute minimum (< 800ms). This window is measured by TBT.
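TBT itself is simple to compute: for every main-thread task between FCP and TTI, any time beyond the 50ms long-task threshold counts as “blocking.” A sketch, assuming the task durations have already been filtered to that window:

```typescript
// Total Blocking Time: the sum of each task's duration beyond the 50ms
// long-task threshold. `taskDurationsMs` holds main-thread task durations
// observed between FCP and TTI.
function totalBlockingTime(taskDurationsMs: number[]): number {
  return taskDurationsMs.reduce((sum, d) => sum + Math.max(0, d - 50), 0);
}
```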
React Suspense and Server Components drastically improve TTI and TBT. Thanks to the concurrent nature of Suspense, the hydration of components happens off the main thread.
Even if your application streams in many components, the TTI “quiet window” will still look back to when the last main-thread long task ended. Your TTI (and thereby TBT) improves dramatically when properly utilizing Suspense.
For instance, using this selective hydration strategy, Vercel was able to reduce the TBT of nextjs.org from 430ms to 80ms.
Generally speaking, websites have to manually decide which code should run and when. In a traditional React single-page application (SPA), for example, the browser loads all your application code on initial load.
Modern JS frameworks drastically improve on this, automatically dividing code at the route level (code splitting) and even the viewport level (lazy loading).
With these automatic optimizations, you can get to the TTI “quiet window” much faster, while still ensuring the user gets everything they need to interact.
FID is a Core Web Vital that measures the time between the first user interaction on a page (i.e. clicking a link, tapping a button, or using some other JS control) to the time when the browser’s main thread is idle and able to begin processing that event.
There are strategies, such as workers and React concurrent rendering, that can help with this, which we’ll get into below.
FID will soon be replaced by INP as a Core Web Vital to measure the speed of your application’s interactivity. Let’s break down the differences between the two:
- FID measures only the first input and browser response. INP considers the responsiveness of all user input for the duration of the page session, reporting the longest interaction observed while ignoring one high outlier for every 50 interactions.
- FID only measures the delay between input and the browser starting to respond. INP measures the full span from the input to the presentation of the next frame after the event completes.
- INP additionally groups events together that occur as part of the same logical user interaction, defining the interaction’s latency as the maximum duration of all its events.
Note that it is possible that a user can visit a page and not interact, in which case no INP score will be calculated. This also happens if the page is accessed by a bot such as a search crawler or headless browser that has not been scripted to interact with the page.
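Putting the rules above together, the INP selection logic can be sketched as follows (an illustration of the scoring rule, not the browser’s implementation):

```typescript
// INP: report the longest interaction latency, ignoring one high outlier for
// every 50 interactions. Returns null when the page saw no interactions.
function computeINP(interactionLatenciesMs: number[]): number | null {
  if (interactionLatenciesMs.length === 0) return null; // no interaction, no INP
  const sorted = [...interactionLatenciesMs].sort((a, b) => b - a);
  const outliersIgnored = Math.floor(interactionLatenciesMs.length / 50);
  return sorted[Math.min(outliersIgnored, sorted.length - 1)];
}
```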
Since the main thread must be idle to process event handlers, React Suspense helps us here, too, by keeping component hydration off the main thread. Additionally, you can look into:
- Throttling or debouncing events—especially ones driven by scrolling—that may be called repeatedly by user input.
- Reducing your DOM size, to avoid having the browser recalculate too many elements on each render.
- Getting SVGs out of your client-side JS bundle. Inline SVGs can be especially troublesome if you have too many (DOM size) or if they end up in your client-side JS bundle (for example, by inlining them in JSX). You may need to reference them in an <img> tag or look into alternate ways of rendering them, such as keeping them in React Server Components.
- Lazy loading images, fonts, or scripts, which Next.js can automatically do for you.
- Code splitting, as mentioned above.
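As an example of the first bullet, a throttle wrapper caps how often a scroll or input handler actually runs. A minimal sketch, with an injectable clock so the behavior can be tested deterministically:

```typescript
// Throttle: invoke `fn` at most once per `intervalMs`. Calls that arrive
// inside the interval are dropped. `now` is injectable for testing and
// defaults to Date.now.
function throttle<T extends (...args: any[]) => void>(
  fn: T,
  intervalMs: number,
  now: () => number = Date.now,
): (...args: Parameters<T>) => void {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
    }
  };
}
```

A debounce is the complementary strategy: instead of running at most once per interval, it waits until the input stream goes quiet before running once.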
CLS measures layout shifts, which occur any time a visible element jumps in position from one frame to the next.
Layout shifts often occur when loading resources asynchronously or dynamically adding DOM elements to the page above existing content (causing content to be pushed down).
Among other things, the cause of a layout shift could be an image or video whose dimensions are not specified, a font that displays at a size different from its fallback, or a third-party ad or widget that dynamically resizes.
CLS reflects the largest burst of layout shifts during a session. Google groups shifts into session windows: a window keeps growing as long as each shift occurs within 1 second of the previous one, up to a 5-second maximum. Each individual shift is scored based on the fraction of the viewport affected and the distance the element moved.
These scores are summed to get a cumulative score for each window. The highest score among these windows is your CLS score. A good CLS score is below 0.1.
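The windowing logic can be sketched directly: shifts group into a window as long as each occurs within 1 second of the previous one (capped at 5 seconds total), and the largest window sum is the CLS score. A simplified illustration:

```typescript
// One layout shift event. `score` is the shift's layout shift score
// (impact fraction multiplied by distance fraction).
interface Shift {
  time: number; // ms since navigation start
  score: number;
}

// Group shifts into session windows (a new window starts after a 1-second
// gap, or once the current window spans 5 seconds), sum each window, and
// report the largest window total.
function cumulativeLayoutShift(shifts: Shift[]): number {
  let cls = 0;
  let windowSum = 0;
  let windowStart = -Infinity;
  let prevTime = -Infinity;
  for (const s of [...shifts].sort((a, b) => a.time - b.time)) {
    if (s.time - prevTime > 1000 || s.time - windowStart > 5000) {
      windowStart = s.time; // gap or cap exceeded: open a new window
      windowSum = 0;
    }
    windowSum += s.score;
    cls = Math.max(cls, windowSum);
    prevTime = s.time;
  }
  return cls;
}
```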
Your CLS score can result from significant shifts of large elements (highly noticeable) or many tiny shifts of smaller elements (tough to debug).
Especially on fast connections, layout shifts may happen too quickly for the eye to track. Google does not tell you which elements are shifting and impacting your score.
That’s why we added a layout shift tracker to the Vercel Toolbar, which programmatically detects every layout shift and points you exactly to the problem elements.
Since Vercel’s Preview Deployments are true-to-prod, you get the assurance that no unforeseen network conditions will add layout shifts back in.
For extra convenience, the Vercel Toolbar can also be added to your local dev environment, which allows you to detect layout shifts before they’re ever merged into code.
A layout shift occurring means that the browser has to recalculate the position of all elements in the DOM affected by the shift. This can impact your other web performance metrics, especially if your DOM is large.
Let’s look at how to improve.
Scripts that impact the layout of the page should not run after First Contentful Paint (FCP).
A/B testing, feature flags, or even redirects and internationalization—which must run after user request—can often alter the layout of your page and drastically impact your CLS.
Unfortunately, these types of scripts can be very difficult to render while still meeting Core Web Vitals standards such as LCP and CLS.
- Client-side rendering (CSR) of your experiments evaluates which version of your app a user will see only after the page has loaded. This results in poor UX, since your users have to wait on loaders while the experiment is evaluated and eventually rendered, creating layout shift.
- Server-side rendering (SSR) can slow page response times as experiments are evaluated on demand. Users have to wait for the experiments along a similar timeline as CSR—but stare at a blank page until all of the work is done to build and serve the page.
“With Edge Middleware, we can show the control or experiment version of a page immediately instead of using third-party scripts. This results in better performance and removes the likelihood of flickering/layout shifts.”
This is the best of both worlds: you can have highly dynamic code, and Vercel’s Edge Network computes what to serve statically within 15ms at runtime.
Plus, you can manage all your experimentation with Vercel’s Edge Config, without the need to redeploy.
Images should notify the DOM of their width and height.
Since even the smallest images take slightly longer to load than text, your site’s image containers should have an explicit width and height to prevent elements from being pushed around when the image loads in.
Frameworks like Next.js and SvelteKit offer automatic image optimization to avoid this challenge in the first place by determining the width and height of your image ahead of time to prevent CLS while the image loads in.
Fonts and their fallbacks should match in size.
When using custom fonts, the browser often renders the fallback a split second before the custom font. If your fallback and custom font do not match in size, this can cause elements to shift when the custom font loads.
There are many ways to optimize this behavior, but the built-in font optimization in Next.js is the easiest. Next.js allows you to automatically self-host any font file, which drastically improves load time (rather than requesting the file from Google Fonts, for instance). Additionally, Next.js then provides a fallback font to match the size of your custom font.
Animations on one element should not affect other elements.
For instance, to move elements around, avoid changing the top and left properties and use transform: translate() instead.
Plus, as noted above, CSS transforms can be GPU-accelerated, improving the availability of your CPU’s main thread and thereby optimizing your Core Web Vitals.
Let’s take a look at what we’ve covered:
- After content relevancy and quality, Core Web Vitals—LCP, FID, and CLS—greatly impact your application’s ranking in Google Search.
- FID will be swapped out on March 12, 2024 for INP as the third Core Web Vital.
- Deploying your application on Vercel’s Frontend Cloud vastly and automatically optimizes your application’s TTFB, which in turn improves your FCP and LCP.
- Next.js 14’s Partial Prerendering further optimizes TTFB, FCP, and LCP.
- Vercel’s Incremental Static Regeneration (ISR) can drastically improve your users’ time to see page content. Next.js offers ISR with component-level granularity, as opposed to page-level. ISR directly optimizes FCP, LCP, TTI, and TBT.
- React Suspense, available for use within Next.js, gives you vast flexibility in optimizing your LCP, TTI, TBT, FID, and INP.
- The built-in automatic optimizations of Next.js for images, fonts, and scripts drastically improve LCP, TTI, TBT, FID, INP, and even CLS.
- The Vercel Toolbar enables you to accurately measure hard-to-spot CLS, both in local dev and in your true-to-prod Preview Deployments.
- Vercel’s Edge Middleware unlocks CLS-free A/B testing, feature flags, redirects, internationalization, and more.