Independent developer Theo Browne recently published comprehensive benchmarks comparing server-side rendering performance between Fluid compute and Cloudflare Workers. The tests ran 100 iterations across Next.js, React, SvelteKit, and other frameworks.
The results showed that for compute-bound tasks, Fluid compute performed 1.2 to 5 times faster than Cloudflare Workers, with more consistent response times.
What the benchmarks measured
The tests used each platform's typical production configuration: Cloudflare Workers with standard constraints (shared CPU, 128MB RAM) and Fluid compute with performance functions (2 vCPU, 4GB RAM). These configurations reflect different architectural approaches. Cloudflare Workers' constraints enable global edge deployment. Fluid compute runs in-cloud with configurable resources.
Framework performance comparison
Fluid compute averaged 2.55 times faster across these workloads. Variability, the range between the fastest and slowest response times, followed the same pattern. For Next.js, Cloudflare responses ranged across 3.171 seconds (from 0.800s to 3.971s), while Fluid responses ranged across 1.098 seconds (from 0.343s to 1.442s).
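As a quick check on those figures, variability is just the maximum response time minus the minimum. A minimal TypeScript sketch using the Next.js numbers quoted above:

```ts
// Variability = range between fastest and slowest response (seconds).
type Timing = { min: number; max: number };

const range = ({ min, max }: Timing): number => max - min;

const cloudflare: Timing = { min: 0.8, max: 3.971 };
const fluid: Timing = { min: 0.343, max: 1.442 };

console.log(range(cloudflare).toFixed(3)); // "3.171"
console.log(range(fluid).toFixed(3));      // "1.099" (the post reports 1.098s;
                                           // the quoted endpoints are rounded)
```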
Individual runs revealed outliers. Roughly one in five Cloudflare requests on Next.js and SvelteKit took 10+ seconds on tasks averaging 1.2 seconds. The same page load might feel instant one moment and frustratingly slow the next.
Users experience individual requests, not averages. For production applications, consistent performance often matters more than fast averages.
How Fluid compute works
Fluid compute is the default runtime on Vercel. The architecture makes specific trade-offs to optimize server-side rendering workloads.
In-region deployment
Server-rendered applications typically make database queries and API calls during request processing. An application making five database queries per request spends far more time waiting for data than rendering components.
Fluid compute deploys in the same cloud region as your infrastructure. When your database runs in AWS US-East-1, Fluid functions can deploy to AWS US-East-1. This co-location reduces network latency for database queries and API calls. For applications making multiple database queries per request, network proximity affects total request time. JSON serialization, schema validation, API calls to external services, and database queries all involve I/O operations where deployment location matters.
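A back-of-the-envelope sketch of that effect, with all numbers hypothetical: when queries run sequentially, the per-query round trip multiplies, and co-location collapses it.

```ts
// Hypothetical latencies: sequential database queries multiply the cost
// of network distance between the function and the database.
const queriesPerRequest = 5;
const crossRegionRttMs = 80; // assumed edge-to-database round trip
const sameRegionRttMs = 1;   // assumed co-located round trip
const renderCpuMs = 30;      // assumed rendering work

const edgeTotalMs = queriesPerRequest * crossRegionRttMs + renderCpuMs;     // 430
const coLocatedTotalMs = queriesPerRequest * sameRegionRttMs + renderCpuMs; // 35

console.log({ edgeTotalMs, coLocatedTotalMs });
```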
Full runtime compatibility
Cloudflare Workers run a custom JavaScript runtime that approximates Node.js behavior. You cannot use performance.now() for precise timing on Cloudflare: the API is frozen during execution as a Spectre mitigation in the isolate model. Because Workers run code in isolates instead of separate VMs, workers share memory with one another, and security constraints limit which APIs work.
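A small sketch of that freeze in action, runnable in both environments:

```ts
// On Cloudflare Workers, performance.now() does not advance during CPU
// work, only across I/O, so this measures roughly zero. In standard
// Node.js it measures the real duration of the loop.
const start = performance.now();
let sum = 0;
for (let i = 0; i < 10_000_000; i++) sum += i; // CPU-bound busy work
const elapsed = performance.now() - start;
console.log({ sum, elapsed }); // elapsed ≈ 0 on Workers, > 0 in Node.js
```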
Fluid compute runs standard Node.js and Python. You can specify Node.js 24 or any version. The entire npm ecosystem works without compatibility layers. Frameworks like Next.js, Remix, and SvelteKit assume they're running in Node.js. When the runtime only approximates Node.js, compatibility issues emerge. Packages may not work as expected. Edge cases break. Debugging is harder when the runtime behaves differently than documented Node.js behavior.
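One way to pin the Node.js version on Vercel is the engines field in package.json; a sketch, with the version value illustrative:

```json
{
  "engines": {
    "node": "24.x"
  }
}
```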
Concurrency and resource configuration
Fluid compute processes multiple function invocations concurrently on the same instance. Combined with scale to one, which keeps a warm instance available instead of scaling to zero, this enables efficient resource usage for server-side rendering.
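A sketch of why that concurrency pays off, written in the shape of a route handler (the URL and handler name are assumptions for illustration): most of a server-rendered request is spent awaiting I/O, and while one invocation awaits, the same warm instance can begin others.

```ts
// While the fetch below is in flight, the instance is free to interleave
// other invocations instead of sitting idle, so one warm instance can
// serve many concurrent requests.
export async function GET(): Promise<Response> {
  const res = await fetch("https://api.example.com/data"); // hypothetical upstream
  const items = await res.json();
  return Response.json({ items });
}
```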
Resources are configurable from 1-4 vCPU and up to 4GB RAM. This flexibility lets you optimize for your workload. Lightweight operations like middleware run efficiently with minimal resources. Compute-intensive operations like server rendering or AI inference can scale up. Both Fluid compute and Cloudflare Workers use CPU-based pricing where you pay for actual compute time.
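Per-route resources can be set in vercel.json; the memory and maxDuration fields exist in that file's functions config, while the path and values here are illustrative assumptions:

```json
{
  "functions": {
    "app/api/render/route.ts": {
      "memory": 4096,
      "maxDuration": 60
    }
  }
}
```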
Workload flexibility
Edge deployment works well for specific use cases. Serving static assets from the edge is fast because assets are cached close to users. Simple routing logic or header manipulation works at the edge because these operations don't need database access or heavy computation.
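A sketch of that kind of lightweight edge logic, using Next.js middleware APIs (the bucketing scheme is a made-up example):

```ts
// middleware.ts — header manipulation with no database access or heavy
// compute: the kind of work that suits edge deployment.
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  const response = NextResponse.next();
  // Simple A/B bucketing: reuse the visitor's bucket if present,
  // otherwise assign one at random.
  const bucket =
    request.cookies.get("ab-bucket")?.value ?? (Math.random() < 0.5 ? "a" : "b");
  response.cookies.set("ab-bucket", bucket);
  response.headers.set("x-ab-bucket", bucket);
  return response;
}
```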
Fluid compute handles both lightweight and compute-intensive workloads on the same platform. High concurrency runs middleware and simple APIs efficiently. Configurable resources handle server rendering and complex application logic. The platform adapts to your workload instead of forcing your workload to adapt to platform constraints.
View the benchmarks
The complete benchmark implementation and raw results are available on GitHub. The repository includes test code, detailed results, and methodology documentation.
Deployment architecture determines performance characteristics. For server rendering workloads that query databases and perform compute operations, in-region deployment with full runtime compatibility provides measurable advantages.