When hosting your online business on a platform, it's essential that your app remains functional under high load. While the Vercel platform handles billions of requests each week, we do understand that you may wish to perform a load test. Read on to learn about Vercel's policies for planning and executing load tests.
Load testing is the process of verifying that your site can handle a high number of visitors when live, by simulating many concurrent requests against it.
The Vercel platform was built to provide teams with peace of mind when launching their websites. As a Serverless and CDN solution, Vercel is highly scalable out of the box and doesn't require human intervention to adjust server instances or load balancers. See the Serverless Functions documentation to learn more.
As an Enterprise customer, one of your most important perks is access to our Solutions team. The Solutions team are Next.js experts who can help with architecture choices and with identifying possible points of failure. You can reach out to the team before performing any load tests. It is important to verify that your upstream providers can handle the load and that your functions are as performant as possible.
The bottleneck of your application is likely to be one of the APIs, databases, CMSs, or data stores used in server-side code, so load testing those resources directly can give you a good picture of the state of your deployment.
For comparison, PayPal can process around 1000 transactions per second during the holidays. If your Serverless Functions are returning in 200ms, your application is able to handle up to 5000 requests per second in a single region. To calculate this number, take the following equation:

Maximum requests per second = Concurrency limit ÷ Average response time (in seconds)

For example: 1000 ÷ 0.2s = 5000 requests per second.
The formula above represents the maximum number of requests per second for Serverless Functions in a single Vercel region. Note that the default concurrency limit is 1000 and can be expanded if you reach out to your Customer Success Manager (CSM).
The average response time of your functions is generally what limits the throughput of your application, which is why it is vital to test upstream providers and ensure they reply to calls as fast as possible.
From the equation, if the average response time goes up while the concurrency limit stays constant, the maximum number of requests per second decreases proportionally.
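The relationship can be sketched in a few lines of code; the function name is ours, and the concurrency limit and response times are the illustrative values used above:

```typescript
// Maximum requests per second a single region can sustain, given the
// concurrency limit and the average function response time:
// maxRps = concurrencyLimit / averageResponseTime (in seconds)
function maxRequestsPerSecond(concurrencyLimit: number, avgResponseMs: number): number {
  return concurrencyLimit / (avgResponseMs / 1000);
}

// 1000 concurrent functions, 200ms average response time
console.log(maxRequestsPerSecond(1000, 200)); // 5000
// Slower functions reduce throughput proportionally: 250ms average
console.log(maxRequestsPerSecond(1000, 250)); // 4000
```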
As an example, take a static landing page that calls an authentication API route, /api/auth, a single time. The authorization function is a little slow, returning, on average, in 250ms. If no increases were requested in concurrency limits and the function was deployed to a single region, you would have the following result:

1000 ÷ 0.25s = 4000 requests per second

This means the website could support approximately 4000 new visitors each second if those visits are triggering Serverless Functions. If 5000 visitors access the SSR page in the same second, it is possible that a thousand of them will see a failure from /api/auth.
There are a number of ways to expand the capacity of your Serverless Functions:
- Caching: If the data returned by the API or page can be cached and is public, consider adding a cache-control header to the response. Please refer to the documentation on cacheable responses
- Deploying to multiple regions: Enterprise customers can deploy to multiple regions. However, you should always consider using regions as close as possible to your data source. Check our article on How to choose the best region for your deployment. If you still want to deploy to multiple regions, check out the regions configuration documentation
- Increase in concurrency limit: If you have an Enterprise plan with Vercel, you can reach out to your Customer Success Manager to request an increase in the Serverless Concurrency Limit. By default, Vercel has a limit of 1000 concurrent functions in a single region per team
- Reduce Serverless Function response time: It is always best to optimize the response time of your Serverless Functions. You should always insert timeouts for DB queries or API calls, and deploy functions geographically close to your data sources
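To make the caching and timeout tips above concrete, here is a minimal sketch; the helper names, the upstream URL, and the cache durations are illustrative assumptions, not part of Vercel's API:

```typescript
// Fetch JSON from a hypothetical upstream service, aborting the request
// if it takes longer than `timeoutMs` — a slow upstream should fail fast
// rather than hold a function invocation open.
async function fetchJsonWithTimeout(url: string, timeoutMs: number): Promise<unknown> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    return await response.json();
  } finally {
    clearTimeout(timer);
  }
}

// Build a Cache-Control header for a public, cacheable response: cache at
// the edge for `maxAgeSeconds`, then serve stale while revalidating, so
// repeat requests don't invoke the function at all.
function cacheControlHeader(maxAgeSeconds: number, staleSeconds: number): string {
  return `s-maxage=${maxAgeSeconds}, stale-while-revalidate=${staleSeconds}`;
}

// In an API route you might combine them like:
//   const data = await fetchJsonWithTimeout("https://api.example.com/data", 2000);
//   res.setHeader("Cache-Control", cacheControlHeader(60, 300));
```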
Having read the above, if you still want to proceed with load testing on Vercel you should begin by contacting your CSM, who will coordinate the tests with your team, the Vercel Infrastructure team, and the Solutions team to ensure everyone is aligned.
If you have received approval from either Vercel Support or your CSM to perform a load test you will be asked to provide the following information:
- The start and end time/date
- Estimated maximum number of requests per second
- The target hostnames
- Geographical source (e.g. AWS/GCP region)
- Source IPs (must be < 1000)
- Will the test be distributed geographically, or localised?
Once the above information is received, it will be reviewed by the Infrastructure team, who will then approve the request via Vercel Support or request further information if required.
Load tests can cause a spike in usage, both on Serverless Function invocations and on third-party services used by those functions. This can result in unwanted additional usage and billing if not handled correctly.
It is recommended to create a testing environment if possible, or to add logic that mocks third-party responses.
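One lightweight way to mock third-party responses is to gate the real call behind an environment variable; the variable name, endpoint, and function below are hypothetical:

```typescript
// Hypothetical mock toggle for load tests: when LOAD_TEST_MOCKS=1, skip
// the real third-party call and return a canned response, so the load
// test doesn't generate usage (or charges) on the upstream provider.
async function getPaymentStatus(orderId: string): Promise<{ status: string }> {
  if (process.env.LOAD_TEST_MOCKS === "1") {
    return { status: "mocked-ok" }; // canned response, no upstream usage
  }
  const res = await fetch(`https://payments.example.com/orders/${orderId}`);
  return res.json();
}
```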
Finally, check your contract and warn stakeholders that additional usage will occur depending on the size of the test.
Log Drains must be installed
All requests hitting Vercel will result in entries in the Log Drains provider of your choice. The rows stored in the provider can give you an idea of your Serverless Functions' P75 response time, all status codes returned, and any error messages.
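As a rough sketch of the kind of analysis you might run over drained logs, the snippet below computes a P75 from a list of function durations; extracting those durations from your provider's log format is left out, and the function name is ours:

```typescript
// Compute the P75 (75th percentile) of function durations in milliseconds,
// e.g. values extracted from drained log entries, using the nearest-rank
// method: the smallest sample at or above which 75% of samples fall.
function p75(durationsMs: number[]): number {
  if (durationsMs.length === 0) throw new Error("no samples");
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[rank];
}

console.log(p75([100, 200, 300, 400])); // 300
```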