Building Next.js Apps with GraphQL Fragment Colocation and Sanity CMS

Build a content-heavy Next.js app with Sanity using GraphQL fragment colocation. Compose component fragments into one type-safe query for faster ISR, fewer requests, and scalable performance.

Last updated November 7, 2025

You're building a content-heavy Next.js app with Sanity CMS and want a scalable architecture that grows with your project. Your components need different pieces of data, so naturally you write clean, modular code where each component declares what it needs.

This works beautifully. But as your application scales, you start thinking about optimization:

  • How can you minimize API calls during ISR regeneration for faster builds?
  • What's the best way to avoid waterfall requests that could slow down content delivery?
  • How do you structure data fetching to handle high-traffic scenarios efficiently?

You might be considering different approaches:

  • Centralizing all data fetching in page components, but this creates tight coupling between pages and components
  • Using multiple targeted queries per component, which is clean but can create sequential request chains

A scalable approach that handles this well: GraphQL Fragment Colocation.

This approach lets you maintain component-level data declarations while composing them into a single, efficient API request, giving you both a clean developer experience and solid production performance.

By the end of this guide, you'll have a Next.js application that:

  • Fetches all page data in a single API request to Sanity CMS
  • Maintains component-level data colocation so each component declares exactly what it needs
  • Provides full TypeScript safety with compile-time validation
  • Handles ISR reliably without rate limiting issues
  • Scales efficiently as your content and component tree grows

This approach resolves the tension between developer experience and performance that the alternatives above force you to trade off against each other.

You'll need:

  • Node.js 20+ and npm/yarn/pnpm
  • A Sanity project with GraphQL API enabled
  • Basic familiarity with Next.js App Router and React Server Components

Want to skip the setup? Clone the complete template repository and follow its README to get started immediately. Using a coding agent? The repository's CLAUDE.md helps your agent get started.

If you don't have a Sanity project yet, the Sanity documentation has a great quickstart guide.

Here's what we're going to do:

  1. Understand the core concept: Exploring the opportunities with fragment colocation
  2. Set up the technical foundation: Next.js 15, gql.tada, URQL, and Sanity
  3. Build real components: PostHeader and Author with fragment patterns
  4. Make ISR bulletproof: Single-request architecture for reliable background updates
  5. Handle production concerns: Rate limiting, error handling, and performance optimization

Let's start with the fundamental problem. In a typical Next.js app with a headless CMS, you face this choice:

Option A: Component-Level Data Fetching

// Each component fetches its own data
const PostHeader = async () => {
  const header = await sanityFetch(POST_HEADER_QUERY);
  return <header>{/* render header */}</header>;
};

const Author = async () => {
  const author = await sanityFetch(AUTHOR_QUERY);
  return <div>{/* render author */}</div>;
};

This looks clean and modular. Each component owns its data requirements. But in production, this creates:

  • Separate API calls to Sanity for a single page
  • Waterfall requests during ISR regeneration
  • Slower page regeneration due to sequential API calls
  • Failed background updates when any single request times out

Option B: Centralized Data Fetching

// All data fetched in one place
export default async function PostPage() {
  const [header, author] = await Promise.all([
    sanityFetch(POST_HEADER_QUERY),
    sanityFetch(AUTHOR_QUERY),
  ]);

  return (
    <>
      <PostHeader data={header} />
      <Author data={author} />
    </>
  );
}

This is more efficient (two parallel requests instead of a waterfall), but now:

  • Components are tightly coupled to the page's data structure
  • Adding new data requirements means updating multiple files
  • Refactoring components becomes a nightmare
  • TypeScript safety is harder to maintain

GraphQL fragments let you have both: component-level data declarations that compose into a single query.

Here's the same example with fragment colocation:

// Each component declares its data needs as a fragment
const postHeaderFragment = graphql(`
  fragment PostHeader on Post {
    title
    publishedAt
  }
`);

export const authorFragment = graphql(`
  fragment Author on Author {
    _id
    name
    bio
  }
`);

// All fragments compose into a single page query
const GET_POST_BY_SLUG = graphql(
  `
    query GetPostBySlug($slug: String!) {
      allPost(where: { slug: { current: { eq: $slug } } }, limit: 1) {
        excerpt
        contentRaw
        author {
          ...Author
        }
        ...PostHeader
      }
    }
  `,
  [authorFragment, postHeaderFragment],
);

const { getClient } = registerUrql(createGraphQLClient);

// Single API request, full type safety, component isolation
export default async function PostPage({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const { data } = await getClient().query(GET_POST_BY_SLUG, { slug });
  const post = data?.allPost[0];

  if (!post) {
    return null;
  }

  return (
    <>
      <PostHeader data={post} />
      {post.author && <Author data={post.author} />}
    </>
  );
}

What this demonstrates:

  1. Each component defines a fragment describing exactly what data it needs
  2. Fragments compose into the parent query
  3. One API request fetches all the data for the entire page
  4. TypeScript knows exactly what data each component receives
  5. Components stay modular - they only access their fragment's data

The goal is one roundtrip: every page makes exactly one request to your CMS, no matter how many components need data.

Fragments Prevent Over-fetching

Unlike traditional approaches where you might fetch entire objects and only use a few fields, fragments ensure each component declares exactly what it needs. Your IDE will warn you when fragment fields go unused, making over-fetching immediately visible. This component-driven approach means data requirements stay in sync with actual usage.
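For example, a hypothetical PostCard listing component (the name and fields are ours, using the graphql helper set up later in this guide) can only ever receive the two fields it declares:

// A minimal sketch -- PostCard is a hypothetical listing component.
import { type FragmentOf, graphql, readFragment } from "@/lib/graphql";

export const postCardFragment = graphql(`
  fragment PostCard on Post {
    title
    excerpt
  }
`);

export function PostCard(props: { data: FragmentOf<typeof postCardFragment> }) {
  const post = readFragment(postCardFragment, props.data);

  // Only `title` and `excerpt` exist here; the query never fetches more,
  // and dropping a field from the fragment removes it from the request.
  return (
    <article>
      <h2>{post.title}</h2>
      <p>{post.excerpt}</p>
    </article>
  );
}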

High-Traffic Considerations

If you're running a high-traffic site (70,000+ pages) that needs to revalidate all content at once (for example, after updating a global component), multiple API calls per page can also push you into rate limits: at, say, three calls per page, a full revalidation means 210,000 requests instead of 70,000. The fragment approach reduces each page to a single API call to your CMS, which can be the difference between a successful and a failed regeneration at scale.

In Next.js with ISR, this architecture is important:

Without Fragment Colocation:

  • Page regeneration makes multiple API calls
  • Any failed request breaks the entire update
  • Slower regeneration due to waterfall requests
  • Background updates become unreliable

With Fragment Colocation:

  • Page regeneration makes one API call
  • Single point of failure is easier to handle
  • Predictable API usage and faster regeneration
  • Background updates succeed consistently

Let's build this step by step. We'll create a Next.js app that uses fragment colocation with Sanity CMS.

Start with a fresh Next.js 15 project:

npx create-next-app@latest fragment-colocation-demo --typescript --tailwind --app
cd fragment-colocation-demo

Install the core GraphQL and Sanity dependencies:

# Core GraphQL dependencies
npm install gql.tada urql @urql/next graphql
# Sanity CMS dependencies
npm install @portabletext/react
# Development dependencies for Sanity
npm install -D @sanity/cli sanity @sanity/vision

Here's what each package does:

  • gql.tada: Type-safe GraphQL documents with fragment composition
  • @urql/next: GraphQL client with React Server Component support
  • urql: Core GraphQL client functionality
  • @portabletext/react: Render Sanity's Portable Text content

Your project also includes these additional tools:

  • Biome for linting and formatting (instead of ESLint/Prettier)
  • Tailwind CSS v4 for styling
  • Next.js 15.5.0 with Turbopack enabled

Add these scripts to your package.json for easier development:

{
  "scripts": {
    "dev": "next dev --turbopack",
    "sanity:dev": "npx sanity dev",
    "sanity:deploy": "npx sanity graphql deploy",
    "schema:generate": "npx gql-tada generate schema https://your-project-id.api.sanity.io/v1/graphql/production/default",
    "build": "next build --turbopack",
    "start": "next start",
    "lint": "biome check",
    "format": "biome format --write"
  }
}

Create your Sanity configuration:

// src/sanity/env.ts
export const dataset = process.env.NEXT_PUBLIC_SANITY_DATASET!;
export const projectId = process.env.NEXT_PUBLIC_SANITY_PROJECT_ID!;

if (!dataset || !projectId) {
  throw new Error('Missing Sanity environment variables');
}

// src/sanity/client.ts
import { createClient } from 'next-sanity';
import { dataset, projectId } from './env';

export default createClient({
  projectId,
  dataset,
  apiVersion: '2024-01-01', // pin any recent API version date
  useCdn: true,
});

Add your environment variables:

# .env.local
NEXT_PUBLIC_SANITY_PROJECT_ID=your_project_id
NEXT_PUBLIC_SANITY_DATASET=production
NEXT_PUBLIC_SANITY_GRAPHQL_URL=https://your_project_id.api.sanity.io/v1/graphql/production/default

gql.tada needs your GraphQL schema to provide type safety. Your project includes a custom schema generation script:

# Generate schema from your Sanity GraphQL endpoint
npm run schema:generate

The generated schema file enables gql.tada to provide full TypeScript safety for your GraphQL operations.

Create a single file that handles both GraphQL configuration and client setup:

// src/lib/graphql.ts
import { cacheExchange, createClient, fetchExchange } from "@urql/core";
import { initGraphQLTada } from "gql.tada";
import type { TypedObject } from "sanity";
import type { introspection } from "./generated/graphql-env";

export function createGraphQLClient() {
  const url = process.env.NEXT_PUBLIC_SANITY_GRAPHQL_URL;

  if (!url) {
    throw new Error("NEXT_PUBLIC_SANITY_GRAPHQL_URL is not configured");
  }

  return createClient({
    url,
    exchanges: [cacheExchange, fetchExchange],
  });
}

export const graphql = initGraphQLTada<{
  introspection: introspection;
  scalars: {
    DateTime: string;
    Date: string;
    JSON: TypedObject | TypedObject[];
  };
}>();

export type { FragmentOf, ResultOf, VariablesOf } from "gql.tada";
export { readFragment } from "gql.tada";
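If you want to sanity-check the typings, the re-exported ResultOf and VariablesOf helpers show what a document resolves to. Here's a sketch with a hypothetical query (the inferred shapes are approximate; exact nullability follows your generated Sanity schema):

import { graphql, type ResultOf, type VariablesOf } from "@/lib/graphql";

// Hypothetical query used only to illustrate the inferred types.
const POST_TITLES = graphql(`
  query PostTitles($limit: Int) {
    allPost(limit: $limit) {
      _id
      title
    }
  }
`);

// Roughly: { allPost: Array<{ _id: string; title: string | null }> }
type PostTitlesData = ResultOf<typeof POST_TITLES>;

// Roughly: { limit?: number | null }
type PostTitlesVariables = VariablesOf<typeof POST_TITLES>;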

Template Repository Available

All the code from this guide is available in the template repository. You can clone it and follow along, or use it as a reference while building your own implementation.

What we built:

  • Single-file GraphQL setup combining gql.tada configuration and URQL client
  • Type-safe GraphQL operations with your Sanity schema introspection
  • React Server Component support with proper URQL integration
  • Environment-based configuration with clean error handling
  • Direct client access without wrapper functions, keeping the data layer simple

The foundation is ready. Now let's build some components that use fragment colocation.


Let's create a realistic example: a single blog page with header, content, and author. Each component will define its data requirements as fragments.

First, let's assume you have these document types in Sanity:

// src/lib/schema.ts
import { defineType } from "sanity";

const author = defineType({
  name: "author",
  title: "Author",
  type: "document",
  fields: [
    {
      name: "name",
      title: "Name",
      type: "string",
      validation: (rule) => rule.required(),
    },
    {
      name: "slug",
      title: "Slug",
      type: "slug",
      options: {
        source: "name",
        maxLength: 96,
      },
      validation: (rule) => rule.required(),
    },
    {
      name: "bio",
      title: "Bio",
      type: "text",
      description: "Short biography of the author",
      rows: 3,
    },
    {
      name: "image",
      title: "Profile Image",
      type: "image",
      options: {
        hotspot: true,
      },
    },
  ],
  preview: {
    select: {
      title: "name",
      media: "image",
    },
  },
});

const post = defineType({
  name: "post",
  title: "Post",
  type: "document",
  fields: [
    {
      name: "title",
      title: "Title",
      type: "string",
      validation: (rule) => rule.required(),
    },
    {
      name: "slug",
      title: "Slug",
      type: "slug",
      options: {
        source: "title",
        maxLength: 96,
      },
      validation: (rule) => rule.required(),
    },
    {
      name: "author",
      title: "Author",
      type: "reference",
      to: [{ type: "author" }],
      validation: (rule) => rule.required(),
    },
    {
      name: "excerpt",
      title: "Excerpt",
      type: "text",
      rows: 3,
    },
    {
      name: "content",
      title: "Content",
      type: "array",
      of: [{ type: "block" }],
    },
    {
      name: "publishedAt",
      title: "Published at",
      type: "datetime",
      initialValue: () => new Date().toISOString(),
      validation: (rule) => rule.required(),
    },
  ],
});

// Register both document types with your Sanity Studio configuration
export const schemaTypes = [author, post];

After updating your schema, deploy the schema to Sanity:

npm run sanity:deploy

After deploying your schema, regenerate the GraphQL introspection:

npm run schema:generate

Create a header component with its fragment:

// src/components/posts/post-header.tsx
import Link from "next/link";
import { type FragmentOf, graphql, readFragment } from "@/lib/graphql";

export const postHeaderFragment = graphql(`
  fragment PostHeader on Post {
    title
    publishedAt
  }
`);

export function PostHeader(props: {
  data: FragmentOf<typeof postHeaderFragment>;
}) {
  const header = readFragment(postHeaderFragment, props.data);
  const publishedAt = header.publishedAt;

  return (
    <header className="mb-8">
      <Link
        href="/"
        className="inline-block mb-6 text-blue-600 dark:text-blue-400 hover:underline"
      >
        Back to posts
      </Link>
      <h1 className="text-4xl font-bold mb-4">{header.title}</h1>
      {publishedAt && (
        <time
          dateTime={publishedAt}
          className="text-gray-600 dark:text-gray-400"
        >
          {new Date(publishedAt).toLocaleDateString("en-US", {
            year: "numeric",
            month: "long",
            day: "numeric",
          })}
        </time>
      )}
    </header>
  );
}

Key patterns here:

  • Fragment definition: postHeaderFragment declares exactly what data this component needs
  • Type safety: FragmentOf<typeof postHeaderFragment> ensures the component only receives the data it declared
  • Fragment masking: readFragment() unwraps the data, preventing access to fields not in the fragment
  • Component isolation: This component has no knowledge of how or where its data comes from
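To make the masking concrete, here's roughly what the compiler allows and rejects once readFragment has unwrapped the data (a sketch; exact inferred nullability follows your generated Sanity schema):

// Inside any component that received PostHeader fragment data (sketch)
const header = readFragment(postHeaderFragment, props.data);

header.title; // OK: declared in the fragment
header.publishedAt; // OK: declared in the fragment

// @ts-expect-error -- `excerpt` is fetched by the page query, but this
// fragment never declared it, so the masked type doesn't expose it.
header.excerpt;

// Reading fields off props.data directly (without readFragment) is also
// a type error, which keeps each component honest about its data needs.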
The Author component follows the same pattern:

// src/components/posts/author.tsx
import { type FragmentOf, graphql, readFragment } from "@/lib/graphql";

export const authorFragment = graphql(`
  fragment Author on Author {
    _id
    name
    bio
  }
`);

export function Author(props: { data: FragmentOf<typeof authorFragment> }) {
  const author = readFragment(authorFragment, props.data);

  return (
    <div className="border-t border-gray-200 dark:border-gray-700 pt-8 mt-12">
      <div className="flex items-start space-x-4">
        <div className="w-16 h-16 bg-gray-200 dark:bg-gray-700 rounded-full flex items-center justify-center text-gray-600 dark:text-gray-400 text-xl font-semibold">
          {author.name?.[0]?.toUpperCase()}
        </div>
        <div className="flex-1">
          <h3 className="text-lg font-semibold text-gray-900 dark:text-gray-100 mb-1">
            {author.name}
          </h3>
          {author.bio && (
            <p className="text-gray-600 dark:text-gray-400 leading-relaxed">
              {author.bio}
            </p>
          )}
        </div>
      </div>
    </div>
  );
}

Now all these fragments compose into a single page query:

// src/app/posts/[slug]/page.tsx
import { PortableText } from "@portabletext/react";
import { registerUrql } from "@urql/next/rsc";
import { notFound } from "next/navigation";
import { Author, authorFragment } from "@/components/posts/author";
import { PostHeader, postHeaderFragment } from "@/components/posts/post-header";
import { createGraphQLClient, graphql } from "@/lib/graphql";

const { getClient } = registerUrql(createGraphQLClient);

interface PostPageProps {
  params: Promise<{
    slug: string;
  }>;
}

const GET_POST_BY_SLUG = graphql(
  `
    query GetPostBySlug($slug: String!) {
      allPost(where: { slug: { current: { eq: $slug } } }, limit: 1) {
        excerpt
        contentRaw
        author {
          ...Author
        }
        ...PostHeader
      }
    }
  `,
  [authorFragment, postHeaderFragment],
);

export default async function PostPage({ params }: PostPageProps) {
  const { slug } = await params;
  const { data, error } = await getClient().query(GET_POST_BY_SLUG, { slug });
  const post = data?.allPost[0];

  if (error) {
    throw new Error(error.message);
  }

  if (!post) {
    notFound();
  }

  return (
    <article>
      <PostHeader data={post} />
      {post.excerpt && (
        <div className="text-xl text-gray-600 dark:text-gray-400 mb-8 italic">
          {post.excerpt}
        </div>
      )}
      {post.contentRaw && (
        <div className="prose prose-lg dark:prose-invert max-w-none">
          <PortableText value={post.contentRaw} />
        </div>
      )}
      {post.author && <Author data={post.author} />}
    </article>
  );
}

Notice the fragment composition:

  • GET_POST_BY_SLUG spreads the authorFragment using ...Author and postHeaderFragment using ...PostHeader
  • The fragments array [authorFragment, postHeaderFragment] tells gql.tada about the dependency
  • TypeScript knows exactly what data each component can access

What this demonstrates:

  1. Two components each declared their data needs as fragments
  2. One page query composed all fragments using the spread operator
  3. Single API request to Sanity fetches everything the page needs
  4. Full type safety ensures each component gets exactly the right data
  5. Component isolation is maintained - each component only accesses its fragment

This is fragment colocation in action. You get the modularity of component-level data requirements with the efficiency of a single API request.

Start your development server:

npm run dev

Visit http://localhost:3000. Since this runs server-side, you won't see GraphQL requests in your browser's Network tab. Instead, you should see:

  • Fast page loads with no client-side waterfall requests
  • One GraphQL request happening server-side (visible in your terminal logs)
  • All page data rendered immediately on first load

For deployed apps, you can monitor these server-side requests in Vercel's Observability tab under "External APIs".

If you see multiple requests or TypeScript errors, double-check:

  • Your Sanity schema matches the fragments
  • Environment variables are set correctly
  • GraphQL introspection is up to date

Note: Make sure your Sanity documents exist. Create a Post and an Author document in your Sanity Studio.


Now let's tackle the real-world challenge: making Incremental Static Regeneration work reliably with your fragment-based architecture.

In a traditional setup with multiple API calls, ISR regeneration looks like this:

// This creates problems in production
export default async function Page() {
  // 2 separate requests during ISR regeneration
  const header = await sanityFetch(POST_HEADER_QUERY);
  const author = await sanityFetch(AUTHOR_QUERY);

  return <div>{/* render */}</div>;
}

export const revalidate = 3600; // Revalidate every hour

What goes wrong:

  • Multiple failure points: If any single request fails, the entire page regeneration fails
  • Slow regeneration: Sequential requests create delays
  • Unpredictable costs: Request count scales with component complexity
  • At scale: High-traffic sites may hit API rate limits during mass revalidation

With fragment colocation, ISR regeneration becomes predictable:

// Single request = reliable ISR
export default async function Page() {
  // One request during ISR regeneration
  const result = await getClient().query(homePageQuery);

  return <div>{/* render */}</div>;
}

export const revalidate = 3600; // Revalidate every hour

Why this works better:

  • Predictable API usage: Exactly one request per page regeneration
  • Single point of failure: Easier to handle and retry
  • Faster regeneration: No waterfall delays
  • Cost control: Request count is independent of component complexity
  • Scales reliably: Stays within rate limits even at high traffic

Set up basic ISR for your post page:

// src/app/posts/[slug]/page.tsx
// ...
export function generateStaticParams() {
  // Returning an empty array defers page generation to the first request;
  // generated pages are then cached and revalidated in the background.
  return [];
}
// ...
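If you'd rather pre-render the posts that already exist at build time, generateStaticParams can reuse the same URQL client with a small slug-only query. This is a sketch under the same schema assumptions as above (the GetPostSlugs query name is ours):

// src/app/posts/[slug]/page.tsx (sketch)
import { registerUrql } from "@urql/next/rsc";
import { createGraphQLClient, graphql } from "@/lib/graphql";

const { getClient } = registerUrql(createGraphQLClient);

const GET_POST_SLUGS = graphql(`
  query GetPostSlugs {
    allPost {
      slug {
        current
      }
    }
  }
`);

export async function generateStaticParams() {
  const { data } = await getClient().query(GET_POST_SLUGS, {});

  // Build a params object for every post that has a slug.
  return (data?.allPost ?? [])
    .map((post) => post.slug?.current)
    .filter((slug): slug is string => Boolean(slug))
    .map((slug) => ({ slug }));
}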

For faster content updates, add webhook-based revalidation:

// src/app/api/revalidate/route.ts
import { revalidatePath, revalidateTag } from "next/cache";
import { type NextRequest, NextResponse } from "next/server";

export async function POST(request: NextRequest) {
  try {
    const body = await request.json();
    const { _type, slug } = body;

    // Verify webhook secret for security
    const secret = request.nextUrl.searchParams.get("secret");
    if (secret == null || secret !== process.env.SANITY_REVALIDATE_SECRET) {
      return NextResponse.json({ message: "Invalid secret" }, { status: 401 });
    }

    // Navigation changes revalidate the navigation cache tag
    if (_type === "navigation") {
      revalidateTag("navigation");
      return NextResponse.json({
        revalidated: true,
        scope: "navigation",
        type: _type,
      });
    }

    // Footer changes revalidate the footer cache tag
    if (_type === "footer") {
      revalidateTag("footer");
      return NextResponse.json({
        revalidated: true,
        scope: "footer",
        type: _type,
      });
    }

    // Post changes revalidate specific routes
    if (_type === "post" && slug?.current) {
      revalidatePath(`/posts/${slug.current}`);
      revalidatePath("/"); // Home page shows the posts list
      return NextResponse.json({
        revalidated: true,
        paths: [`/posts/${slug.current}`, "/"],
        type: _type,
      });
    }

    // Page changes revalidate specific routes
    if (_type === "page" && slug?.current) {
      revalidatePath(`/${slug.current}`);
      return NextResponse.json({
        revalidated: true,
        paths: [`/${slug.current}`],
        type: _type,
      });
    }

    return NextResponse.json({
      message: "No revalidation needed",
      type: _type,
    });
  } catch (err) {
    console.error("Revalidation error:", err);
    return NextResponse.json(
      {
        message: "Error revalidating",
        error: err instanceof Error ? err.message : "Unknown error",
      },
      { status: 500 },
    );
  }
}

Add the webhook secret to your environment:

# .env.local
SANITY_REVALIDATE_SECRET=your-secret-key-here

In your Sanity Studio, set up a webhook:

  1. Go to Manage → API → Webhooks
  2. Create a new webhook with:
     • URL: https://your-domain.com/api/revalidate?secret=your-secret-key-here
     • Trigger on: Create, Update, Delete

Test your ISR setup:

  1. Deploy to production
  2. Trigger a webhook by updating content in Sanity Studio
  3. Monitor the logs for successful revalidation
  4. Check response times - should be consistently fast
  5. Verify content updates appear within seconds
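You can also exercise the route handler locally before deploying with a hand-rolled request (a sketch; the slug, secret, and script location are placeholders):

// e.g. scripts/test-revalidate.ts (hypothetical), run against `npm run dev`
const response = await fetch(
  "http://localhost:3000/api/revalidate?secret=your-secret-key-here",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ _type: "post", slug: { current: "hello-world" } }),
  },
);

console.log(response.status, await response.json());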

You have built a Next.js application that uses GraphQL Fragment Colocation with Sanity CMS. The complete implementation is available in the template repository. Here's what you accomplished:

Architecture Benefits:

  • Single API request per page eliminates waterfall requests and rate limiting issues
  • Component-level data colocation maintains clean, modular code
  • Full TypeScript safety with compile-time validation of GraphQL operations
  • Reliable ISR regeneration that works consistently in production

Technical Implementation:

  • gql.tada integration for type-safe GraphQL documents
  • URQL with React Server Components for efficient data fetching
  • Fragment composition patterns that scale with your application
  • Webhook-based revalidation for instant content updates

This approach can be applied to other content-heavy Next.js applications that need both good developer experience and optimal performance.
