Vercel and Railway both help developers ship applications to production, but they optimize for different outcomes.
Vercel is a full-stack cloud platform where hosting, CDN, security, and compute are all built in. Push code and the platform derives the best possible infrastructure, from framework-aware builds and global delivery to security and AI. The pieces wire together automatically so teams can focus on product instead of operations.
Railway is a PaaS for running persistent services. Push code or Docker images and Railway runs them as always-on containers with no timeout ceiling. Teams choose and configure the surrounding infrastructure (CDN, WAF, managed databases, observability) from external providers to match their specific requirements.
The decision between Vercel and Railway often comes down to whether you need an integrated platform or building blocks. For web applications that need global delivery, framework-aware caching, and security out of the box, Vercel handles the infrastructure automatically. Railway makes sense when your workload specifically needs always-on servers, Docker containers, multi-service architectures on a visual canvas, or container-level control like SSH access and persistent volumes.
This guide compares Vercel and Railway across compute, delivery, security, developer workflow, and pricing to help you decide which platform fits your project.
- How Vercel and Railway compare
- Vercel platform deep dive
- Railway-specific capabilities
- What's included vs what you assemble
- When to choose Vercel or Railway
- Get started with Vercel
Vercel includes hosting, CDN, security, and compute as one platform. You deploy to Vercel and everything works together automatically. Railway runs your code and provides building blocks. You configure the surrounding infrastructure for CDN, application security, and observability from external providers.
Both platforms run application code, but the compute models serve different workload patterns. Vercel Fluid compute handles full application workloads with auto-scaling and active CPU pricing. Railway runs always-on servers with no timeout ceiling.
| Concern | Vercel | Railway |
|---|---|---|
| Runtimes | Node.js, Python, Go, Ruby, Rust, Bun | 11 via Railpack + any via Dockerfile |
| Memory | Up to 4 GB per instance (scales horizontally to 30,000+ instances) | Up to 32 GB per replica (Pro) |
| Timeout | Up to 800 seconds (~13 min) with Fluid compute | No service timeout; 15-min HTTP request max |
| Scaling | Auto to 30,000 concurrent (Hobby/Pro), 100,000+ (Enterprise) | Manual horizontal (up to 42 replicas) + auto vertical |
| Long-lived connections | Optimized for request-response; streaming SSR and waitUntil for background work | Persistent WebSocket/SSE (60s keep-alive timeout) |
| Container support | 37+ frameworks auto-detected; no Dockerfile needed | Dockerfile + private registries |
| Cold starts | Effectively eliminated with pre-warmed instances + bytecode caching | None (always-on) |
| Billing model | Active CPU time (I/O wait excluded) | Per-minute for allocated CPU + RAM |
Vercel Fluid compute keeps pre-warmed instances running on paid plans. Across the platform, 99.37% of all requests see zero cold starts. Bytecode caching (pre-compiling function code so subsequent starts skip the parsing step) reduces startup time for the remainder. Multiple invocations share a single instance with error isolation, meaning one broken request won't crash others. Active CPU pricing bills only during code execution, not during I/O wait. Time spent waiting for database queries, API responses, or AI model inference does not count toward compute costs.
Railway supports three compute primitives: Services (long-running processes), Cron Jobs (scheduled tasks with a minimum 5-minute frequency), and Functions (single-file TypeScript deployed from the canvas). Services have no timeout ceiling, though individual HTTP requests max out at 15 minutes. Persistent WebSocket and SSE connections are supported, with a 60-second keep-alive timeout for idle connections. Horizontal scaling supports up to 42 replicas on Pro with automatic vertical scaling, and a serverless auto-sleeping toggle puts idle services to sleep to reduce costs. Both platforms support cron jobs, with different mechanics: Vercel crons invoke a serverless function via HTTP on a schedule configured in vercel.json, while Railway crons spin up a service container on schedule and must exit when done.
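As noted above, Vercel cron jobs are declared in vercel.json and invoke a function over HTTP on schedule. A minimal sketch (the path and schedule here are hypothetical, not from the source):

```json
{
  "crons": [
    { "path": "/api/cleanup", "schedule": "0 5 * * *" }
  ]
}
```

The Railway equivalent is a service with a cron schedule attached: the container starts on schedule, does its work, and exits.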
Vercel serves traffic through a global CDN with 126+ PoPs across 51 countries. Railway serves from 4 bare-metal regions and recommends adding a CDN externally.
Vercel's CDN is framework-aware, reading your routing and rendering configuration at build time so dynamic content benefits from edge caching alongside static assets.
| Concern | Vercel | Railway |
|---|---|---|
| Edge network | 126+ PoPs, 51 countries | 4 bare-metal regions (US West, US East, EU West, Southeast Asia) |
| CDN caching | Framework-aware (ISR, SWR, Data Cache, Edge Cache) | External CDN recommended |
| Invalidation | ~300ms global via framework API (tag-based, up to 128 tags per response) | External CDN manages invalidation |
| Image optimization | Built-in (WebP/AVIF, edge-cached) | External service needed |
| Compression | Automatic Brotli + Gzip | Not documented |
| Edge key-value | Edge Config for low-latency edge reads (P99 within 15ms, often under 1ms) | Not offered |
ISR (Incremental Static Regeneration) caches rendered pages at the edge and regenerates them in the background when data changes, without requiring a full rebuild. A single revalidateTag() or revalidatePath() call propagates globally in ~300ms, with no separate CDN purge API to manage. Stale-while-revalidate serves cached content during background regeneration, and request collapsing groups concurrent requests for the same uncached content into one origin call.
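Request collapsing is easier to see in code. The sketch below is a toy model of the idea, not Vercel's implementation: concurrent requests for the same uncached key join a single in-flight origin render instead of each hitting the origin.

```typescript
// Toy model of CDN request collapsing. Not Vercel's actual code.
let originCalls = 0;
const inflight = new Map<string, Promise<string>>();

async function renderAtOrigin(key: string): Promise<string> {
  originCalls += 1;
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulated render time
  return `rendered:${key}`;
}

function collapsedFetch(key: string): Promise<string> {
  const pending = inflight.get(key);
  if (pending) return pending; // join the render already in flight
  const p = renderAtOrigin(key).finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}

// Ten concurrent requests for the same uncached page resolve
// with a single origin call.
const results = await Promise.all(
  Array.from({ length: 10 }, () => collapsedFetch("/pricing"))
);
console.log(results[0], "| origin calls:", originCalls);
```

Without collapsing, a cache miss under load becomes ten origin renders; with it, nine requests wait on the first.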
Railway is a compute platform, not an application delivery platform. It serves traffic from 4 bare-metal regions (US West/California, US East/Virginia, EU West/Amsterdam, Southeast Asia/Singapore) with BGP anycast (a network routing method that directs users to the nearest available server) for TLS termination at the edge. Railway runs your code, but the other components a production web application typically needs, like a CDN, image optimization, WAF, and application-level observability, are not included. Teams assemble those from external providers, which means managing separate configurations, billing relationships, and integration points alongside Railway.
Both platforms provide web application security, but they package it differently. Vercel includes DDoS, WAF, and bot protection active on all plans with no configuration required. Railway provides network-level DDoS protection and recommends external services for application-layer security.
| Concern | Vercel | Railway |
|---|---|---|
| DDoS | Network, transport, and application-layer (L3/L4/L7) on all plans; blocked traffic not billed | Network and transport-layer (L3/L4) |
| WAF | Custom rules on all plans; OWASP managed rulesets on Enterprise | External provider recommended |
| Bot protection | Managed rulesets + BotID (invisible AI-powered challenge, free on all plans) | External provider recommended |
| Rate limiting | All plans; @vercel/firewall SDK for programmatic control | External provider recommended |
| TLS fingerprinting | JA3 and JA4 on all plans | Not documented |
| Compliance | SOC 2 Type 2, ISO 27001:2022, PCI DSS v4.0, GDPR; HIPAA BAA (Enterprise) | SOC 2 Type II, SOC 3, GDPR; HIPAA BAA (committed spend) |
Vercel's Firewall processes requests through a defined execution order: DDoS mitigation, then IP blocking, then custom rules, then managed rulesets. WAF changes propagate globally within 300ms with instant rollback. BotID is an invisible CAPTCHA that uses AI to distinguish bots from real users without visible challenges. Basic validation is free on all plans, and Deep Analysis ($1/1K calls on Pro/Enterprise) adds advanced signal analysis for sophisticated bots.
Railway provides network and transport-layer DDoS protection and compliance certifications (SOC 2 Type II, SOC 3, GDPR). For WAF, bot protection, and rate limiting, Railway recommends adding an external provider. HIPAA BAA requires a minimum monthly commitment of $1,000, meaning teams must agree to spend at least that amount per month on Railway services to access HIPAA compliance.
Vercel provides infrastructure for building AI-powered applications. Railway does not include AI application infrastructure.
| Concern | Vercel | Railway |
|---|---|---|
| Model routing | AI Gateway with 20+ providers, automatic failover, BYOK at zero markup | Not offered |
| Application SDK | AI SDK for text generation, streaming, structured data, tool calling | Not offered |
| Code sandboxing | Vercel Sandbox with Firecracker microVMs, millisecond startup | Not offered |
| AI billing advantage | Active CPU pricing excludes model inference wait time | Per-minute billing includes all time |
| Developer AI tools | Vercel Agent with Code Review and Investigation | Not offered |
Active CPU pricing applies to all compute workloads on Vercel. Any request that spends time waiting on external I/O, whether that's a database query, a third-party API call, a file upload to object storage, or an AI model response, is only billed for the fraction of time your code is actually executing. On Railway, per-minute billing charges for the full duration regardless of what the process is doing. AI Gateway routes requests through a single endpoint with configurable fallback chains when a provider is slow or down. AI SDK provides core primitives for text generation, streaming, structured data extraction, and tool calling, with agent orchestration via waitUntil for background processing.
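The billing difference is easiest to see with arithmetic. The numbers below are purely illustrative (a single request, no memory or rate assumptions); they show how I/O wait separates the two models for a typical AI-backed request.

```typescript
// Illustrative comparison of the two metering models for one request
// that takes 2,000 ms wall-clock but only 50 ms of actual CPU time
// (the rest is spent waiting on a model or database). Numbers are
// hypothetical; real bills also depend on memory, rates, and concurrency.
const wallClockMs = 2_000;
const activeCpuMs = 50;

// Active CPU model: only execution time is metered.
const meteredActiveMs = activeCpuMs;

// Allocated per-minute model: the allocated vCPU is billed for the
// full duration, regardless of what the process is doing.
const meteredAllocatedMs = wallClockMs;

const ratio = meteredAllocatedMs / meteredActiveMs;
console.log(`active-CPU metering: ${meteredActiveMs} ms`);
console.log(`allocated metering: ${meteredAllocatedMs} ms`);
console.log(`difference for this I/O-heavy request: ${ratio}x`);
```

For CPU-bound workloads the gap narrows, since wall-clock time and active CPU time converge.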
The comparison tables above show where Vercel and Railway overlap. The sections below go deeper into how each capability works on Vercel.
Vercel reads framework patterns and provisions the best possible infrastructure automatically. Your code defines what it needs to run, and each commit becomes an immutable, production-ready environment. Everything is configurable when you need it, but the defaults mean most teams never have to touch infrastructure settings.
| Framework | What Vercel provisions |
|---|---|
| Next.js | Server components, ISR, image optimization, streaming |
| Nuxt | Server-side rendering, auto-imports, Nitro server engine |
| SvelteKit | Server-side rendering with automatic adapter selection |
| Remix | Server-side rendering with nested routing |
| Astro | Static generation with dynamic islands support |
| FastAPI, Flask, Django | Python runtime with ASGI/WSGI support |
| Express, Hono, NestJS | Node.js runtime with automatic routing |
No configuration files or adapters required. Vercel Agent closes the feedback loop with Code Review that scans PRs for bugs, security issues, and performance problems, and Investigation that traces error alerts to root cause across logs, code, and deployments.
Fluid compute is a hybrid model that combines serverless elasticity with server-like performance. Pre-warmed instances on paid plans eliminate cold starts for 99%+ of requests, while bytecode caching reduces startup time for the remainder. Multiple invocations share a single instance with error isolation, auto-scaling to 30,000 concurrent on Pro or 100,000+ on Enterprise.
Active CPU pricing bills only during code execution, not I/O wait time. A function that queries a database, calls a third-party API, or waits for an AI model response is only billed for the milliseconds your code runs, not the seconds spent waiting for a response. The waitUntil API allows background work (logging, cache warming, webhook delivery) to continue after the response is sent.
| Resource | Limit |
|---|---|
| Memory | Hobby: 2 GB / 1 vCPU, Pro/Enterprise: up to 4 GB / 2 vCPU |
| Timeout | Up to 800s (Pro/Enterprise) |
| Max payload | 4.5 MB request/response body |
| Bundle size | 250 MB uncompressed (500 MB for Python) |
Build machine tiers range from Standard (4 vCPU, 8 GB) through Enhanced (8 vCPU, 16 GB) to Turbo (30 vCPU, 60 GB). Framework detection provisions compute automatically, so Next.js gets server components, ISR, image optimization, and streaming while SvelteKit and Astro deploy with SSR out of the box.
The Vercel CDN spans 126+ PoPs across 51 countries with 20 compute-capable regions. Caching is framework-derived, so ISR rebuilds pages on demand with ~300ms global invalidation using up to 128 cache tags per response.
Layered cache control headers (Vercel-CDN-Cache-Control, CDN-Cache-Control, Cache-Control) let you set different TTLs for the Vercel CDN, downstream CDNs, and browsers. Stale-while-revalidate and stale-if-error provide cache resilience, and request collapsing groups concurrent requests into one backend call. Built-in image optimization converts to WebP and AVIF formats with edge caching.
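The three layered headers can be set per route. A sketch in vercel.json (the route and TTL values are hypothetical): Vercel-CDN-Cache-Control applies only to Vercel's CDN, CDN-Cache-Control to downstream CDNs, and Cache-Control to browsers.

```json
{
  "headers": [
    {
      "source": "/api/report",
      "headers": [
        { "key": "Vercel-CDN-Cache-Control", "value": "max-age=3600" },
        { "key": "CDN-Cache-Control", "value": "max-age=60" },
        { "key": "Cache-Control", "value": "max-age=10, stale-while-revalidate=60" }
      ]
    }
  ]
}
```

Here the Vercel CDN holds the response for an hour, any intermediary CDN for a minute, and browsers for ten seconds with background revalidation.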
ISR works with Next.js, SvelteKit, Nuxt, and Astro.
Every request passes through platform-wide protections before reaching your application, with no configuration required.
DDoS mitigation operates at the network, transport, and application layers (L3, L4, and L7), and only legitimate traffic is metered. The Vercel Firewall executes in a defined order: DDoS mitigation, then IP blocking, then custom rules, then managed rulesets. WAF changes propagate globally within 300ms with instant rollback.
WAF custom rules are available on all plans. Bot Protection Managed Ruleset challenges non-browser traffic, and the AI Bots Managed Ruleset lets you log or deny AI crawlers. BotID is an invisible CAPTCHA that uses AI to distinguish bots from real users without visible challenges. Basic validation is free on all plans, and Deep Analysis ($1/1K calls on Pro/Enterprise) adds advanced signal analysis for sophisticated bots.
Rate limiting is available on all plans with the @vercel/firewall SDK for programmatic control.
Secure Compute provides dedicated VPC, static egress IPs, and VPC peering for workloads that need network isolation.
Vercel maintains compliance certifications including SOC 2 Type 2, ISO 27001:2022, PCI DSS v4.0, GDPR, and EU-U.S. Data Privacy Framework. HIPAA BAA is available on Enterprise.
Building AI applications requires accessing multiple models, handling provider outages, and managing costs. Vercel provides infrastructure across all three areas.
AI Gateway routes requests to 20+ providers (OpenAI, Anthropic, Google, xAI, Groq, and more) through a single endpoint with configurable fallback chains when a provider is slow or down. Bring your own API keys with no added fees. Automatic prompt caching (exact-match) reduces redundant API calls.
AI SDK provides core primitives for text generation, streaming, structured data extraction, and tool calling. Agent orchestration works with waitUntil for background processing after the response is sent.
Vercel Sandbox lets AI agents and user-generated code run safely in isolated microVMs (the same technology AWS Lambda uses) with millisecond startup. Teams use it for code playgrounds, AI-powered builders, and executing agent output in a controlled environment.
Vercel Agent provides AI-powered developer tools. Code Review scans PRs for bugs, security issues, and performance problems and generates validated patches you can merge. Investigation traces error alerts to root cause across logs, code, and deployments.
Every branch push generates a unique preview deployment URL with protection options including password, Vercel Authentication, and Trusted IPs.
Rolling Releases provide gradual traffic shifting with dashboard metrics comparing canary vs current. Instant Rollback reassigns domains without rebuilding.
Collaboration tools extend beyond code:
- Free unlimited Viewer seats on Pro/Enterprise, so designers, PMs, and reviewers don't consume paid licenses
- Vercel Toolbar with Layout Shift Tool, Interaction Timing, Accessibility Audit, and in-browser Feature Flag management
- Comments on preview deployments with issue tracker integration (Linear, Jira, GitHub)
- Edit Mode with 8 CMS integrations for visual content editing
- Draft Mode for previewing unpublished CMS content
- OIDC Federation for credential-free connections to AWS, GCP, and Azure
Vercel focuses on what the user experiences. Speed Insights tracks Core Web Vitals (FCP, LCP, INP, CLS) with element attribution on all plans. Web Analytics is privacy-first with no cookies and a daily-reset hash, with custom events on Pro. Session Tracing via the Vercel Toolbar visualizes request flows in the dashboard.
Log Drains export to external endpoints at $0.50/GB on Pro/Enterprise. OpenTelemetry support includes Datadog, New Relic, and Dash0 integrations. Real-time usage dashboards show function invocations, error rates, and duration metrics.
Runtime log retention on Vercel is Hobby: 1 hour, Pro: 1 day. Observability Plus extends retention to 30 days ($10/mo + $1.20/1M events on Pro). Railway's default log retention is longer without add-ons (Hobby: 7 days, Pro: 30 days, Enterprise: up to 90 days with committed spend).
Vercel pricing includes hosting, CDN, security, image optimization, analytics, and framework infrastructure in a single bill with transparent per-resource costs.
| Plan | Price | Includes |
|---|---|---|
| Hobby | $0/month | 100 GB Fast Data Transfer, 100 GB-Hrs Function Execution, and more. Non-commercial only |
| Pro | $20/month per seat | $20 usage credit included. Unlimited free Viewer seats. 14-day free trial |
| Enterprise | Custom | Contractual SLAs, multi-region compute, dedicated support |
Active CPU pricing excludes time spent waiting on databases, APIs, or AI model responses. Spend Management sends notifications at 50%, 75%, and 100% thresholds. Regional pricing is published for all 20 regions so you can choose regions based on cost and latency tradeoffs.
Railway offers Free ($0/month, $1 usage credit), Hobby ($5/month), and Pro ($20/month per workspace) tiers. None have seat-based pricing, which benefits larger teams where multiple developers need deployment access. Compute is billed per minute at $20/vCPU/month and $10/GB RAM/month, with subscription fees counting toward usage credits. Egress is $0.05/GB.
Railway's per-unit compute pricing is straightforward, but it reflects a narrower scope. Railway provides compute across 4 regions. CDN, WAF, bot protection, image optimization, and application-level observability are not included. Teams running production web applications on Railway typically need external providers for those capabilities, and the cost of assembling and managing them adds to the total spend beyond what Railway bills directly.
Railway offers capabilities that fall outside Vercel's focus areas. If your project relies on any of these, Railway is worth evaluating alongside Vercel.
Railway runs services as persistent processes with no execution time ceiling. Individual HTTP requests max at 15 minutes, but services themselves run indefinitely. This makes Railway well-suited for WebSocket servers, real-time streaming, background workers, and workloads that need hours of uninterrupted execution. Services can allocate up to 32 GB RAM per replica on Pro.
Many workloads that seem to require always-on servers, like API backends, webhook processors, and queue consumers, can run on Vercel's Fluid compute with 800-second timeouts, auto-scaling to 30,000+ concurrent executions, and Active CPU pricing that only bills during code execution. The waitUntil API handles background work after the response is sent. Always-on servers become necessary when workloads genuinely need persistent processes, like long-running WebSocket connections or multi-hour batch jobs.
Railway supports Dockerfile builds, private container registries (Pro plan), and deployment from Docker Hub, GHCR, GitLab, and other registries. Teams get SSH access to running containers via railway ssh, TCP proxy for non-HTTP protocols (databases, game servers, custom protocols), and persistent volumes with configurable mount paths.
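A Dockerfile like the sketch below is enough for Railway to build and run a container directly (the Node.js base image and server.js entrypoint are hypothetical, standing in for any containerized service):

```dockerfile
# Minimal sketch of a container Railway can build and run as-is.
FROM node:20-slim
WORKDIR /app

# Install production dependencies first to keep layer caching effective.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Railway injects a PORT environment variable; the app should listen on it.
EXPOSE 3000
CMD ["node", "server.js"]
```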
Railway's real-time collaborative canvas lets teams visualize and manage all services, databases, workers, and cron jobs in a single project. Services connect over encrypted private networking at {service}.railway.internal with zero configuration. For architectures with multiple interconnected services (frontend + API + workers + databases), the canvas shows how components connect.
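In practice, private networking means a service references its siblings by internal hostname. A hypothetical example (service name and credentials are illustrative) of an API service reaching a Postgres service without leaving Railway's network:

```bash
# Hypothetical service variable: traffic to *.railway.internal stays on
# the encrypted private network and incurs no public egress.
DATABASE_URL=postgresql://postgres:${PGPASSWORD}@postgres.railway.internal:5432/railway
```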
Vercel is designed around the framework-defined model where a single project deploys frontend, API routes, and serverless functions together. For separate backend services or databases, Vercel connects through Marketplace integrations with unified billing.
Railway provides one-click database deployment for Postgres, MySQL, Redis, and MongoDB as container-based services. S3-compatible storage buckets are available at $0.015/GB-month with free egress. Encrypted private networking connects databases to application services with zero configuration. Railway does not provide database-specific SLAs, and their documentation recommends external managed databases for mission-critical data.
Vercel provides database access through Marketplace integrations with managed providers (AWS, Neon, Supabase, Upstash, MongoDB Atlas, and others). The difference is managed vs unmanaged: Vercel's marketplace databases come with provider-backed SLAs, automatic backups, and connection credentials auto-injected into environment variables. Railway's databases are self-managed containers where teams handle backups, upgrades, and failover themselves.
This table shows what each platform includes out of the box versus what requires external providers.
| Feature | Vercel | Railway |
|---|---|---|
| Application hosting | Included | Included |
| CDN | Built-in, framework-aware (126+ PoPs) | External CDN recommended |
| Image optimization | Included on all plans | External service needed |
| WAF | Custom rules on all plans; OWASP managed rulesets on Enterprise | External provider recommended |
| DDoS protection | Network, transport, and application-layer (L3/L4/L7) included, blocked traffic not billed | Network and transport-layer (L3/L4) included |
| Bot protection | Managed rulesets + BotID included | External provider recommended |
| Rate limiting | All plans; SDK for programmatic control | External provider recommended |
| Observability | Speed Insights, Web Analytics, Log Drains, OTel | Resource metrics + log explorer. APM requires self-hosted stack |
| AI infrastructure | AI Gateway, AI SDK, Sandbox | Not offered |
| Databases | Managed providers (AWS, Neon, Supabase, Upstash, MongoDB Atlas) via Marketplace with provider SLAs | Unmanaged containers (Postgres, MySQL, Redis, MongoDB) with no database SLA |
| Private networking | Secure Compute with VPC peering (Enterprise) | Encrypted private networking with zero-config service discovery (all plans) |
| Docker support | No (37+ frameworks auto-detected) | Yes (Dockerfile + private registries + SSH) |
| Preview environments | Per-branch URL, zero additional cost | Full environment copy at full resource cost |
| Rollback | Instant Rollback with no rebuild, no retention limit | Limited by image retention (Free: 24h, Hobby: 72h, Pro: 5 days) |
The right platform depends on what you're building and what role the platform plays in your stack.
| If you need... | Choose | Why |
|---|---|---|
| Global web performance | Vercel | 126+ PoPs, ISR with ~300ms invalidation, framework-aware CDN |
| AI-powered applications | Vercel | AI Gateway (20+ providers), AI SDK, Sandbox, and Active CPU pricing excludes model inference wait time |
| Secure-by-default deployment | Vercel | WAF + DDoS (L3/L4/L7) + bot protection + rate limiting on every plan |
| Zero-config framework deployment | Vercel | 37+ frameworks auto-detected, automatic build optimization |
| Git-driven workflow with previews | Vercel | Every branch push generates a preview URL, with Rolling Releases for gradual traffic shifting |
| Always-on backend services | Railway | No timeout ceiling, persistent WebSocket/SSE, Docker support |
| Persistent server workloads | Railway | Always-on containers with Docker, SSH, TCP proxy, persistent volumes |
| Multi-service architectures | Railway | Visual canvas with databases + workers + crons in one project |
| Container-level control | Railway | SSH access, Docker images, TCP proxy, persistent volumes |
Teams building web applications, AI workloads, or projects where hosting, CDN, security, and observability should work together without manual assembly will find Vercel a natural fit. Teams that need always-on servers, Docker containers, or multi-service architectures with container-level control will find the same in Railway.
If your project needs global delivery, framework-aware caching, built-in security, and AI infrastructure working together without manual assembly, Vercel handles that automatically from the first deploy.
Sign up for Hobby for personal projects or start a 14-day Pro trial for production workloads.