
Vercel vs Northflank

A detailed guide to Vercel vs Northflank: Fluid compute, CDN and caching, security defaults, AI infrastructure, GPU compute, BYOC, and when to choose each platform for your project.

Vercel
13 min read
Last updated March 20, 2026

Vercel and Northflank are both cloud platforms for deploying applications, but they start from different assumptions about what you need. Vercel derives infrastructure from your framework code. You push to Git and get hosting, CDN, security, AI infrastructure, and compute configured automatically. Northflank gives you Kubernetes-powered containers, managed databases, and GPU compute with full control over how everything runs.

This guide breaks down where each platform fits so you can decide which one matches what you're building.



On Vercel, infrastructure is invisible. You write application code and the platform handles the rest, from CDN placement to security rules and cache invalidation. On Northflank, infrastructure is the product. You configure containers, attach databases, set up networking, and control exactly how workloads run on Kubernetes. That tradeoff shapes every difference below.

| Feature | Vercel | Northflank |
| --- | --- | --- |
| Platform role | Full-stack application platform (hosting, CDN, compute, security) | Kubernetes-powered container platform (containers, databases, GPUs) |
| Configuration model | Framework-derived (37+ frontend and backend frameworks auto-detected) | Container-based (Docker, Buildpacks, any language) |
| Edge network | 126+ PoPs in 51 countries, 20+ compute regions | No built-in CDN (optional external CDN per subdomain), 16 managed cloud regions |
| Compute at the edge | Middleware and Edge Functions for low-latency logic at the CDN layer | Not offered |
| Bring Your Own Cloud (BYOC) | Not applicable (fully managed platform; CDN, security, and compute handled without cloud accounts) | AWS (EKS), GCP (GKE), Azure (AKS), Oracle (OKE), Civo, CoreWeave (CKS), plus Bring Your Own Kubernetes (BYOK) for existing clusters |
| Deployment | Git push to preview/production URL | Git push to container build/deploy pipeline |

Vercel auto-detects your framework and configures the build system, runtime, and caching to match. Everything is configurable, with optimal defaults tuned for production from the first deploy. Northflank runs any containerized workload, so teams choose their own build system (Docker, BuildKit, Kaniko, or Buildpacks) and configure ports and deployment settings per service.

Vercel Fluid compute provides a hybrid serverless model with Active CPU pricing that bills only during code execution. Northflank provides always-on containers with per-second billing for all resource time including idle.

| Feature | Vercel | Northflank |
| --- | --- | --- |
| Execution model | Fluid compute (hybrid serverless, pre-warmed instances) | Always-on containers (Kubernetes pods) |
| Cold starts | Near-zero (pre-warmed instances on paid plans) | None (containers always running) |
| Timeout | Up to 800s with Fluid compute (Pro/Enterprise) | No execution timeouts |
| Memory | Up to 4 GB (Pro/Enterprise) | 256 MB to 256 GB |
| vCPU | Up to 2 vCPU | 0.1 to 32 vCPU |
| Runtimes | Node.js, Bun, Python, Rust, Go, Ruby, WASM, Edge | Any containerized runtime (Docker, Buildpacks) |
| Scaling | Auto to 30,000 (Pro) / 100,000+ (Enterprise) | Horizontal autoscaling on CPU/memory/RPS/custom Prometheus metrics |
| Billing model | Active CPU (I/O wait excluded) | Per-second for all resources (CPU + memory + storage, including idle) |
| Protocols | HTTPS, HTTP/1.1, HTTP/2, response streaming | HTTP, HTTP/2, WebSockets, gRPC, TCP, UDP |
| GPU | Application-layer AI (Gateway, SDK); teams add GPU providers for training | GPU compute (see GPU section for full catalog) |

If your workloads are request-response (APIs, page renders, webhook handlers, AI agent calls), Vercel's Fluid compute scales them automatically. Instead of scaling to zero, Fluid compute scales to one, keeping at least one instance warm on paid plans so 99.37% of all requests see zero cold starts. Bytecode caching reduces startup time for the remaining fraction. Active CPU pricing charges only while your code runs. A function that computes for 200ms and waits 2 seconds on a database is billed for 200ms. Auto-scaling reaches 30,000 concurrent executions on Pro or 100,000+ on Enterprise, and the waitUntil API lets background work like logging or cache warming continue after the response is sent.
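To make the Active CPU model concrete, here is a small self-contained sketch. The rate constant and helper names are illustrative assumptions, not Vercel APIs; the point is the ratio between CPU time and wall time.

```typescript
// Illustrative sketch only: the rate and function names are assumptions,
// not Vercel APIs. Active CPU billing charges for CPU execution time;
// wall-clock billing would charge for the full invocation, I/O wait included.
const RATE_PER_CPU_SECOND = 0.000128; // hypothetical $/vCPU-second

function activeCpuCost(cpuMs: number): number {
  return (cpuMs / 1000) * RATE_PER_CPU_SECOND;
}

function wallClockCost(cpuMs: number, waitMs: number): number {
  return ((cpuMs + waitMs) / 1000) * RATE_PER_CPU_SECOND;
}

// A handler that computes for 200 ms, then waits 2 s on a database:
const active = activeCpuCost(200);     // billed on 0.2 s of CPU time
const wall = wallClockCost(200, 2000); // would bill on 2.2 s of wall time

console.log(wall / active); // the same invocation costs 11x more wall-clock
```

The ratio grows with I/O wait, which is why the model favors workloads that spend most of their time waiting on databases, APIs, or model inference.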

If your workloads need to stay running (WebSocket servers, queue consumers, background workers, GPU training jobs), Northflank's always-on containers bill per second for CPU and memory, scale up to 32 vCPU and 256 GB per container, and support ARM and x86 with no execution timeouts. Native protocol support includes HTTP/2, WebSockets, gRPC, TCP, and UDP. Autoscaling triggers on CPU, memory, requests per second, or custom Prometheus metrics, with scale-to-zero available for off-hours.

Caching on Vercel is derived from your framework. ISR, stale-while-revalidate, and tag-based invalidation work without manual setup. Northflank does not include a CDN, but teams can add an external CDN per subdomain and configure TTLs manually.

| Concern | Vercel | Northflank |
| --- | --- | --- |
| CDN | Built-in global edge network | No built-in CDN (optional external CDN per subdomain) |
| Caching model | Framework-aware (Incremental Static Regeneration, stale-while-revalidate, Data Cache, Edge Cache) | External CDN with manual TTL configuration |
| Invalidation | ~300ms global invalidation via framework API (tag-based, 128 tags per response) | External CDN purge API |
| Image optimization | Built-in (WebP, AVIF) on all plans | Not included |
| Compression | Gzip + Brotli | Depends on external CDN configuration |

On Vercel, when content changes, a single revalidateTag() or revalidatePath() call in your application code invalidates the cached version across all 126+ edge locations in approximately 300 milliseconds. Teams do not need to manage purge APIs or cache zones separately. Layered cache headers (Vercel-CDN-Cache-Control, CDN-Cache-Control, and Cache-Control) give you separate TTL control for the Vercel CDN, downstream CDNs, and browsers. Image optimization to WebP and AVIF runs at the edge automatically.
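The layered-header behavior can be sketched with a small helper. The three header names are the documented ones; the helper itself is illustrative, not part of any Vercel SDK.

```typescript
// Builds the three layered cache headers. Header names are the documented
// ones; this helper is an illustration, not part of any Vercel SDK.
function layeredCacheHeaders(ttls: {
  vercelCdn: number;   // seconds the Vercel CDN may cache
  downstream: number;  // seconds downstream CDNs may cache
  browser: number;     // seconds browsers may cache
}): Record<string, string> {
  return {
    "Vercel-CDN-Cache-Control": `max-age=${ttls.vercelCdn}`,
    "CDN-Cache-Control": `max-age=${ttls.downstream}`,
    "Cache-Control": `max-age=${ttls.browser}`,
  };
}

// Cache for an hour at Vercel's edge, 10 minutes downstream, never in browsers:
const headers = layeredCacheHeaders({ vercelCdn: 3600, downstream: 600, browser: 0 });
```

Because each layer reads its own header, you can cache aggressively at the edge (where invalidation is instant) while keeping browser caches short-lived.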

Northflank's optional external CDN supports configurable TTLs (default 1 hour), stale-if-error (default 12 hours), and HTTP/3. Framework-aware caching, tag-based invalidation, and image optimization are not available, so teams managing complex caching strategies configure them at the CDN provider level.

Both platforms provide web application security, but they package it differently. Vercel includes DDoS protection, WAF, and bot protection active on all plans with no configuration required. Northflank includes WAF and DDoS on managed cloud.

| Feature | Vercel | Northflank |
| --- | --- | --- |
| Application security (DDoS + WAF) | L3/L4/L7 DDoS on all plans (blocked traffic not billed) + WAF with custom rules | DDoS and WAF included (BYOC users manage their own) |
| Bot protection | Bot protection managed ruleset + BotID (free on all plans; Deep Analysis on Pro/Enterprise) | Infrastructure-layer isolation via Kata Containers and gVisor |
| Rate limiting | All plans (fixed window + token bucket on Enterprise) | Configurable per-service via autoscaling thresholds |
| TLS fingerprinting | JA3 and JA4 on all plans | Not available |
| Network isolation | Secure Compute (dedicated VPC, static egress IPs, VPC peering) | BYOC VPC, mTLS, Tailscale VPN, static egress IPs, multi-project networking |
| Secrets management | Encrypted environment variables per environment | AES-256 encrypted secrets, project-scoped groups + team-level global secrets |
| SSO | SAML SSO self-serve on Pro (paid add-on) | SAML/OIDC SSO (Enterprise; requires contacting support to enable) |
| Compliance | SOC 2 Type 2. HIPAA BAA available as Pro add-on ($350/month) | SOC 2 Type 2 (managed cloud). BYOC enables teams to achieve HIPAA, ISO, PCI, FedRAMP compliance through their underlying cloud provider certifications |

Vercel security is active on every request by default. The Vercel Firewall runs DDoS mitigation, IP rules, WAF custom rules, and managed rulesets in sequence, with changes propagating globally in under a second. BotID detects automated traffic using AI without visible challenges to real users, and a separate AI Bots Managed Ruleset gives you control over AI crawlers specifically. All of this is included on every plan.

Northflank security focuses on the infrastructure layer. Bring Your Own Cloud deployments run in customer VPCs with mTLS between containers, Tailscale VPN for private network access, and sandboxed execution via Kata Containers and gVisor for hardware-level workload isolation. Secrets are AES-256 encrypted with project-scoped groups and team-level global secrets. Application-layer protections like bot detection, rate limiting, and TLS fingerprinting are not detailed in Northflank's public documentation.

Vercel and Northflank address AI workloads from different parts of the stack. Vercel provides AI application infrastructure, including model routing, SDKs, and sandboxed code execution. Northflank's AI offering focuses on the compute layer: GPU hardware, autoscaling, and training frameworks.

| Concern | Vercel | Northflank |
| --- | --- | --- |
| Model gateway | AI Gateway (multi-provider routing, fallback chains, BYOK) | Not offered (users deploy LiteLLM/vLLM themselves) |
| AI SDK | AI SDK (text generation, streaming, structured data, tool calling, agents) | Not offered |
| Code sandbox | Vercel Sandbox (isolated Firecracker microVMs, millisecond startup, snapshot storage) | Kata Containers / gVisor sandboxed execution |
| GPU compute | Application-layer AI (Gateway, SDK); teams add GPU providers for training | GPU compute with auto-scaling from zero and multi-GPU training |
| AI developer tools | Vercel Agent (Code Review, Investigation) | Not offered |

AI Gateway provides multi-provider routing through a single endpoint, with configurable fallback chains when a provider is slow or down. Bring your own API keys with no added fees. AI SDK provides core primitives for text generation, streaming, structured data extraction, tool calling, and agent orchestration.
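The fallback-chain idea can be modeled in a few lines. This is a simplified sketch of the behavior, not the AI Gateway's implementation, and the provider stubs stand in for real model endpoints.

```typescript
// Simplified model of provider fallback: try each provider in order and
// return the first success. Sketches the behavior, not Gateway internals.
type Provider = (prompt: string) => Promise<string>;

async function withFallback(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown = new Error("no providers configured");
  for (const provider of providers) {
    try {
      return await provider(prompt); // first healthy provider wins
    } catch (err) {
      lastError = err; // provider slow or down: fall through to the next
    }
  }
  throw lastError;
}

// Stub providers standing in for real model endpoints:
const flaky: Provider = async () => { throw new Error("provider unavailable"); };
const healthy: Provider = async (p) => `echo: ${p}`;

const answer = await withFallback([flaky, healthy], "hello");
```

The value of a managed gateway is that this retry ordering, plus authentication and usage accounting per provider, is handled behind a single endpoint instead of in every application.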

The GPU catalog on Northflank spans 18+ GPU types, including latest-generation models, with GPU-specific autoscaling that can scale to zero. Multi-GPU training supports PyTorch, DeepSpeed, FSDP, and Ray. See the GPU section for the full hardware catalog and training capabilities.


The comparison tables above show where Vercel and Northflank overlap. The sections below go deeper into how each capability works on Vercel.

Vercel reads framework patterns and provisions the best possible infrastructure automatically. Your code defines what it needs to run, and each commit becomes an immutable, production-ready environment. Everything is configurable when you need it, but the defaults mean most teams never have to touch infrastructure settings.

| Framework | What Vercel provisions |
| --- | --- |
| Next.js | Server components, ISR, image optimization, streaming |
| Nuxt | Server-side rendering, auto-imports, Nitro server engine |
| SvelteKit | Server-side rendering with automatic adapter selection |
| Remix | Server-side rendering with nested routing |
| Astro | Static generation with dynamic islands support |
| FastAPI, Flask, Django | Python runtime with ASGI/WSGI support |
| Express, Hono, NestJS | Node.js runtime with automatic routing |

Caching is also framework-derived, with ISR, layered cache control, request collapsing, and image optimization all provisioned from framework configuration. See the CDN and caching comparison for details.

No configuration files or adapters are required. Vercel Agent analyzes production data and generates pull requests that improve stability, security, and performance based on real-world conditions.

Fluid compute combines serverless flexibility with server-like performance. Pre-warmed instances on paid plans reduce cold starts, while bytecode caching reduces startup time for remaining cases. Multiple invocations share a single instance with error isolation, so one broken request does not crash others. Auto-scaling reaches 30,000 concurrent on Pro or 100,000+ on Enterprise. Active CPU pricing bills only during code execution: time spent waiting on database queries, third-party API responses, file uploads, or AI model inference does not count toward compute costs. The waitUntil API allows background work to continue after the response is sent.
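The waitUntil pattern can be modeled in isolation. This is a hedged sketch of the semantics, not Vercel's implementation; the real API ships with the platform runtime.

```typescript
// Simplified model of the waitUntil pattern: the response is returned
// immediately while registered background work finishes afterwards.
// Illustration only; the real API ships with the Vercel runtime.
class InvocationContext {
  private pending: Promise<unknown>[] = [];
  waitUntil(task: Promise<unknown>): void {
    this.pending.push(task); // keep the invocation alive until this settles
  }
  async drain(): Promise<void> {
    await Promise.allSettled(this.pending); // platform-side cleanup step
  }
}

const log: string[] = [];
const ctx = new InvocationContext();

function handler(ctx: InvocationContext): string {
  // Background work (e.g. logging, cache warming) does not delay the response:
  ctx.waitUntil(Promise.resolve().then(() => { log.push("cache warmed"); }));
  return "response sent";
}

const response = handler(ctx);
await ctx.drain(); // after responding, the platform awaits background tasks
```

The key property is that the caller sees the response as soon as the handler returns, while the platform keeps the instance alive until the registered promises settle.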

| Resource | Limit |
| --- | --- |
| Memory | Hobby: 2 GB / 1 vCPU; Pro/Enterprise: up to 4 GB / 2 vCPU |
| Timeout | Up to 800s with Fluid compute (Pro/Enterprise) |
| Max payload | 4.5 MB request/response body |
| Bundle size | 250 MB uncompressed (500 MB for Python) |

Build machine tiers range from Standard (4 vCPU, 8 GB) through Enhanced (8 vCPU, 16 GB) to Turbo (30 vCPU, 60 GB).

Building AI applications requires accessing multiple models, handling provider outages, and managing costs. Vercel provides infrastructure across all three areas.

AI Gateway provides multi-provider routing through a single endpoint with configurable fallback chains when a provider is slow or down. Bring your own API keys with no added fees. Prompt caching reduces redundant API calls.

AI SDK provides core primitives for text generation, streaming, structured data extraction, and tool calling. Agent orchestration works with waitUntil for background processing after the response is sent.

Vercel Sandbox lets AI agents and user-generated code run safely in isolated microVMs with millisecond startup. Vercel Agent provides AI-powered developer tools, with Code Review scanning PRs for bugs, security issues, and performance problems and Investigation tracing error alerts to root cause across logs, code, and deployments.

Every branch push generates a unique preview deployment URL with protection options including password, Vercel Authentication, and Trusted IPs.

Rolling Releases provide gradual traffic shifting with dashboard metrics comparing canary vs current. Instant Rollback reassigns domains without rebuilding.

Collaboration tools extend beyond code:

  • Viewer seats available on Pro/Enterprise, so designers, PMs, and reviewers can access the dashboard without consuming developer licenses
  • Vercel Toolbar with Layout Shift Tool, Interaction Timing, Accessibility Audit, and in-browser Feature Flag management
  • Comments on preview deployments with issue tracker integration (Linear, Jira, GitHub)
  • Edit Mode with 8 CMS integrations for visual content editing
  • Draft Mode for previewing unpublished CMS content
  • OIDC Federation for credential-free connections to AWS, GCP, and Azure

Vercel focuses on what the user experiences. Speed Insights tracks Core Web Vitals (FCP, LCP, INP, CLS) with element attribution on all plans. Web Analytics is privacy-first, using no cookies and a daily-reset hash, with custom events on Pro. Session Tracing, surfaced through the Vercel Toolbar (requires @vercel/otel), visualizes request flows in the dashboard.

Log Drains export to external endpoints at $0.50/GB on Pro/Enterprise. OpenTelemetry support includes Dash0 and Braintrust drain integrations. Observability Plus extends log retention to 30 days ($10/mo + $1.20/1M events on Pro).

Vercel pricing includes hosting, CDN, security, image optimization, analytics, and framework infrastructure in a single bill with transparent per-resource costs.

| Plan | Price | Includes |
| --- | --- | --- |
| Hobby | $0/month | 100 GB Fast Data Transfer, 1M Edge Requests, 100 GB-Hrs Function Execution. Non-commercial only |
| Pro | $20/month per seat | $20 usage credit included. Usage-based pricing beyond included amounts. 14-day free trial |
| Enterprise | Custom | Multi-region compute, dedicated support |

Active CPU pricing reduces compute costs by billing only during code execution (see compute model comparison for details). Spend Management is available on Pro plans. Regional pricing is published for all 20 regions so you can choose regions based on cost and latency tradeoffs.

Northflank uses per-second resource metering with no seat fees on any plan.

| Resource | Northflank rate |
| --- | --- |
| CPU | $0.01667/vCPU/hr |
| Memory | $0.00833/GB/hr |
| Network egress | $0.06/GB (ingress free) |
| SSD storage | $0.15/GB/month |
| Log forwarding | $0.20/GB (first 10 GB/month free) |
| GPU (L4) | $0.80/hr |
| GPU (A100 40GB) | $1.42/hr |
| GPU (H100) | $2.74/hr |
| GPU (H200) | $3.14/hr |
| GPU (B200) | $5.87/hr |

Northflank's free tier (Developer) is limited to 2 services, 2 jobs, 1 addon, and 2 projects, and is not intended for production use. GPU access requires a prepaid credit purchase starting at $50.
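Using the rates listed above, a rough monthly estimate for an always-on service can be computed directly. The helper name is illustrative, and 730 hours is a common approximation for one month.

```typescript
// Rough monthly estimate from the per-hour rates above. Because containers
// are always running, idle time is billed too. Helper name is illustrative.
const CPU_RATE_PER_VCPU_HOUR = 0.01667;
const MEMORY_RATE_PER_GB_HOUR = 0.00833;
const HOURS_PER_MONTH = 730; // common approximation for one month

function monthlyCostUsd(vcpu: number, memoryGb: number): number {
  return (
    (vcpu * CPU_RATE_PER_VCPU_HOUR + memoryGb * MEMORY_RATE_PER_GB_HOUR) *
    HOURS_PER_MONTH
  );
}

// A 1 vCPU / 2 GB service running continuously all month:
const cost = monthlyCostUsd(1, 2); // ≈ $24.33
```

Egress, storage, and log forwarding bill separately, so this covers compute only.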

Vercel's per-seat pricing includes CDN, application-layer security, framework-aware caching, image optimization, and analytics. Northflank's per-resource pricing has no seat fee, but CDN and application-layer security are not included and need to be configured separately.


Northflank offers capabilities that fall outside Vercel's focus areas. If your project relies on any of these, Northflank may be worth evaluating alongside Vercel.

Northflank offers 18+ GPU types (B200, H200, H100, A100, L4, T4, V100, MI300X, and more) with autoscaling from zero, multi-GPU training via PyTorch, DeepSpeed, FSDP, and Ray, and spot instances for cost optimization. GPU workloads run on both managed cloud (16 regions) and Bring Your Own Cloud infrastructure.

Vercel approaches AI from the application layer with AI Gateway, AI SDK, Vercel Sandbox, and Active CPU pricing that excludes model inference wait time. Teams that need GPU hardware pair Vercel with a dedicated GPU provider.

Northflank provisions Kubernetes clusters on customer-owned AWS (EKS), GCP (GKE), Azure (AKS), Oracle (OKE), Civo, or CoreWeave (CKS) infrastructure across 600+ regions. BYOK imports existing clusters. Customer VPC deployment lets teams define an application once and deploy it into hundreds of customer environments with namespace isolation, network policies, mTLS, and encrypted secrets.

Vercel is a fully managed platform where CDN, security, and compute work without provisioning cloud accounts. Secure Compute provides dedicated VPC, static egress IPs, and VPC peering for workloads that need network isolation with backend services.


This table highlights how the two platforms differ in what is included out of the box versus what teams configure themselves.

| Capability | Vercel | Northflank |
| --- | --- | --- |
| CDN + caching | Built-in global edge network with framework-aware caching and image optimization | No built-in CDN; optional external CDN per subdomain with manual TTL config |
| Application security | WAF, DDoS (L3/L4/L7), BotID, rate limiting included on all plans | DDoS and WAF included on managed cloud; bot protection and rate limiting not detailed in public docs |
| AI infrastructure | AI Gateway, AI SDK, Sandbox included (Gateway and SDK also work outside Vercel) | Teams deploy their own model routing (LiteLLM/vLLM) |
| Observability | Speed Insights, Web Analytics, Log Drains, OpenTelemetry on all plans | Container metrics, log forwarding to 11 destinations ($0.20/GB, first 10 GB/month free), Prometheus export |
| Private networking | Secure Compute (dedicated VPC, static egress IPs, VPC peering) | BYOC VPC, mTLS, Tailscale VPN, multi-project networking |
| Managed databases | Marketplace integrations with auto-injected credentials (Neon, Supabase, Upstash) | 6 first-party types co-located with compute (PostgreSQL, MongoDB, MySQL, Redis, MinIO, RabbitMQ) |
| GPU compute | Application-layer AI (Gateway, SDK); teams add GPU providers for training | 18+ GPU types with auto-scaling from zero and multi-GPU training |
| Rollback | Instant (reassigns domains, no rebuild) | Single-click + release flow rollback to specific past runs |
| Preview environments | Per-branch URL with protection options | Full-stack clone with databases, volumes, jobs, secrets |

The right platform depends on what you're building and what role the platform plays in your stack.

| If you need... | Choose | Why |
| --- | --- | --- |
| Hosting, CDN, and security in one platform | Vercel | CDN, WAF, DDoS, bot protection, and rate limiting included on every plan |
| Framework-aware caching (ISR, tag-based invalidation) | Vercel | ~300ms global invalidation derived from framework code |
| AI-powered applications (model routing, SDKs) | Vercel | AI Gateway, AI SDK, Sandbox with Active CPU pricing |
| Git-driven workflow with preview URLs | Vercel | Branch push to preview URL, Rolling Releases for gradual traffic shifting |
| Next.js with server components and ISR | Vercel | First-party App Router, Partial Prerendering, Skew Protection |
| GPU workloads (training, inference, notebooks) | Northflank | 18+ GPU types, auto-scaling from zero, multi-GPU training |
| Always-on containers with persistent connections | Northflank | WebSockets, gRPC, TCP, UDP natively with no execution timeouts |
| BYOC or customer VPC deployment | Northflank | Deploy on your own AWS, GCP, Azure or into customer VPCs |
| Full-stack preview environments | Northflank | Clone entire stack including databases, volumes, jobs, and secrets per branch |

Vercel fits teams building web applications, AI-powered products, or anything where hosting, CDN, security, and observability should work together from the first deploy. Northflank fits teams running GPU workloads, deploying into customer cloud environments, or needing always-on containers with full protocol and infrastructure control.


Vercel gives you global delivery, framework-aware caching, built-in security, and AI infrastructure that work together from the first deploy.

Sign up for Hobby for personal projects or start a 14-day Pro trial for production workloads.
