
Debugging Slow Vercel Functions

Last updated February 12, 2026

Use this guide to diagnose and fix slow Vercel Functions. You'll identify which functions are slow, measure where the time is spent, and verify that your optimization works before shipping to production.

This guide requires a linked Vercel project. Run vercel link in your project directory if you haven't already.

Use this block when you already know what you're doing and want the full command sequence. Use the steps below for context and checks.

terminal
# 1. Identify slow functions from production logs
vercel logs --environment production --source serverless --since 1h --json \
  | jq 'select(.statusCode != null) | {path: .path, statusCode: .statusCode}'
 
# 2. Measure endpoint timing (server processing = time in your function)
vercel httpstat /api/slow-endpoint
 
# 3. Check function configuration (memory, region, max duration)
vercel inspect <deployment-url>
 
# IF server processing time is high, check external API latency:
vercel logs --environment production --query "timeout" --since 1h --expand
vercel logs --environment production --query "ECONNREFUSED" --since 1h --expand
 
# IF first request is slow but subsequent requests are fast, check cold starts:
vercel httpstat /api/slow-endpoint    # first request (potentially cold)
vercel httpstat /api/slow-endpoint    # second request (warm)
vercel httpstat /api/slow-endpoint    # third request (warm)
 
# 4. After optimizing, deploy a preview and measure
vercel deploy
vercel httpstat /api/slow-endpoint --deployment <preview-url>
vercel curl /api/slow-endpoint --deployment <preview-url>
 
# 5. Ship to production and monitor
vercel deploy --prod
vercel logs --environment production --source serverless --since 5m

Start by checking your production logs for requests to your function routes. The --json flag emits structured log entries you can filter with jq; this example extracts the path and status code for each request, which helps you spot routes that return errors or gateway timeouts:

terminal
vercel logs --environment production --source serverless --since 1h --json \
  | jq 'select(.statusCode != null) | {path: .path, statusCode: .statusCode}'

To see the full log output with expanded messages:

terminal
vercel logs --environment production --source serverless --since 1h --expand

The --source serverless filter limits results to Vercel Functions, excluding static assets and edge requests.
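
If you already know which route is slow, the same JSON output can be narrowed to that path with jq. This assumes the route under investigation is /api/slow-endpoint, as in the rest of this guide:

terminal
vercel logs --environment production --source serverless --since 1h --json \
  | jq 'select(.path == "/api/slow-endpoint")'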

For a visual breakdown of slow routes, check the Vercel Functions tab in Observability on the Vercel Dashboard. Sort by duration to find your slowest routes.

Use vercel httpstat to get a detailed timing breakdown for a specific endpoint. This shows DNS lookup, TCP connection, TLS handshake, server processing, and content transfer times:

terminal
vercel httpstat /api/slow-endpoint

The server processing time is the portion spent inside your function. If this is the largest number, the issue is in your function code or its dependencies (database queries, external API calls).
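
To see where that server processing time goes, you can log timing from inside the handler and read it back with vercel logs. The sketch below assumes a Next.js App Router route handler and uses a placeholder upstream URL; point the fetch at whatever query or API call you suspect:

app/api/slow-endpoint/route.ts
export async function GET() {
  const start = performance.now();

  // Placeholder upstream call; replace with your real database query or fetch.
  const res = await fetch('https://api.example.com/orders');
  const data = await res.json();

  // This log line appears in `vercel logs`, so you can attribute server
  // processing time to the upstream call versus the rest of the handler.
  console.log(`upstream fetch took ${Math.round(performance.now() - start)}ms`);

  return Response.json(data);
}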

To test against a specific deployment:

terminal
vercel httpstat /api/slow-endpoint --deployment <deployment-url>

Inspect the current deployment to see what configuration your functions are running with:

terminal
vercel inspect <deployment-url>

Key things to check:

  • Memory: Functions with too little memory get CPU-throttled. If your function does heavy computation, increasing memory from the default 1024 MB can reduce execution time
  • Region: If your function is far from your data source, every database query adds latency. Check that your function region matches your database region
  • Max duration: If your function is hitting the maximum duration limit, it may be getting terminated before completing

See configuring functions for how to adjust these settings.
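
As a rough sketch, all three settings can be declared in vercel.json. The values below are illustrative, the api/slow-endpoint.ts glob assumes a standalone function file, and the memory values and duration limits available depend on your plan, so check the configuring functions docs before copying this:

vercel.json
{
  "regions": ["iad1"],
  "functions": {
    "api/slow-endpoint.ts": {
      "memory": 1769,
      "maxDuration": 30
    }
  }
}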

Slow functions are often caused by slow external API calls rather than slow function code. Check for timeout-related errors:

terminal
vercel logs --environment production --query "timeout" --since 1h --expand
vercel logs --environment production --query "ECONNREFUSED" --since 1h --expand

If you find timeout or connection errors, the issue is likely with an upstream dependency rather than your function itself.

The External APIs tab in Observability shows latency for all external API calls made by your functions. Sort by P75 latency to find the slowest upstream services.
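
If an upstream service turns out to be the bottleneck, one common mitigation is to give each call an explicit time budget so a hanging dependency fails fast instead of consuming your function's full max duration. A minimal sketch, assuming Node.js 18+ (for built-in fetch and AbortSignal.timeout) and a hypothetical helper name:

lib/fetch-with-timeout.ts
// Hypothetical helper: abort an upstream call that exceeds its time budget.
export async function fetchWithTimeout(url: string, ms: number): Promise<unknown> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(ms) });
    return await res.json();
  } catch (err) {
    // A timeout or abort error here means the upstream exceeded its budget;
    // this log line makes the slow dependency visible in `vercel logs`.
    console.error(`upstream call to ${url} did not complete within ${ms}ms`, err);
    throw err;
  }
}

Calling fetchWithTimeout('https://api.example.com/orders', 5000), for example, caps that dependency at five seconds.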

Cold starts happen when a new function instance needs to be initialized. Look for patterns where the first request after a period of inactivity is slow, but subsequent requests are fast.

Run multiple requests to the same endpoint and compare timing:

terminal
vercel httpstat /api/slow-endpoint
vercel httpstat /api/slow-endpoint
vercel httpstat /api/slow-endpoint

If the first request is significantly slower than the following ones, cold starts are the issue. Common fixes include:

  • Reducing the function bundle size by removing unused dependencies
  • Moving expensive initialization outside the request handler (sketched after this list)
  • Increasing the memory allocation (which also increases CPU)
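
Moving initialization out of the handler looks like the sketch below. createDbClient and ./db are hypothetical stand-ins for whatever SDK your project actually uses; the point is where the client gets created:

app/api/slow-endpoint/route.ts
// Module scope runs once per function instance (at cold start), so the
// client is created once and reused by every warm invocation.
// `createDbClient` and './db' are placeholders for your real database SDK.
import { createDbClient } from './db';

const db = createDbClient(process.env.DATABASE_URL);

export async function GET() {
  // The handler only does per-request work; no client setup on the hot path.
  const rows = await db.query('SELECT id, name FROM products LIMIT 10');
  return Response.json(rows);
}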

Based on what you found, apply the fix. Common optimizations for slow functions include:

  • Adding caching for database queries or external API responses
  • Moving the function region closer to the data source
  • Increasing function memory to reduce CPU throttling
  • Reducing bundle size to speed up cold starts
  • Adding connection pooling for database connections
  • Parallelizing independent async operations with Promise.all (see the sketch after this list)
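
For the last item, running independent calls concurrently means the handler waits for the slower of the two instead of the sum of both. A minimal sketch with placeholder upstream calls; swap in your real queries or fetches:

app/api/slow-endpoint/route.ts
// Placeholder upstream calls used only to illustrate the pattern.
const getUser = (id: string) =>
  fetch(`https://api.example.com/users/${id}`).then((r) => r.json());
const getOrders = (id: string) =>
  fetch(`https://api.example.com/users/${id}/orders`).then((r) => r.json());

export async function GET(request: Request) {
  const id = new URL(request.url).searchParams.get('id') ?? 'unknown';

  // The two lookups don't depend on each other, so start them together.
  const [user, orders] = await Promise.all([getUser(id), getOrders(id)]);

  return Response.json({ user, orders });
}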

Deploy the optimized code as a preview:

terminal
vercel deploy

Run the same timing analysis against the preview to compare before and after:

terminal
vercel httpstat /api/slow-endpoint --deployment <preview-url>

Also verify that the function still returns correct responses:

terminal
vercel curl /api/slow-endpoint --deployment <preview-url>

Once you've confirmed the improvement, deploy to production:

terminal
vercel deploy --prod

Monitor the production logs after deploying to confirm the latency improvement holds under real traffic:

terminal
vercel logs --environment production --source serverless --since 5m
