Vercel makes it easy to develop, preview, and ship your frontend and APIs. But what about your data workloads? This guide explains the different possible workloads you can use with Vercel, the different compute options we provide, and best practices for architecting a scalable application with low latency access to your data.
There are many different types of data workloads you might want to consume from your frontend, including but not limited to:
- Configuration Data
- Key / Value
- Blob Storage
- Relational Data
- Content Management
Choosing the correct workload + provider often depends on your demands for latency, durability, and consistency. You might even prefer a specific way of querying data (e.g. MongoDB vs. SQL). There isn’t a one-size-fits-all data solution, which is why Vercel embraces the diversity of choice between different workloads and providers.
Vercel provides three main compute primitives to connect to your data:

- Serverless Functions
- Edge Functions
- Edge Middleware
Each compute primitive makes different tradeoffs on the runtime, default geographic location, pricing, and limitations. For example, Edge Middleware runs in every Edge Network region by default. Using data workloads that are regional (and not globally replicated) would be a better fit for either Serverless or Edge Functions.
Given your data workload, how can you create a well-architected application using Vercel’s compute primitives?
Ensuring your compute (functions) boot quickly to execute code depends on both the runtime (Node.js or Edge) as well as the size of user code in the function.
Edge Functions make tradeoffs that enable instant cold boots, extreme scalability, and cost-effectiveness at scale. However, not every data workload can run on this compute runtime: packages that use native Node.js APIs or attempt to read from the file system are not supported.
Edge Functions help you automatically adhere to well-architected best practices for connecting to your data workload. If you need to use Serverless Functions (full Node.js runtime), learn more about connection pooling and explore HTTP API data solutions.
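When you do use Serverless Functions with a traditional database driver, one common pattern is caching the client in module scope so warm invocations reuse the existing connection instead of opening a new one per request. A minimal sketch, where `createClient` is a hypothetical stand-in for your actual driver's factory:

```typescript
// Hypothetical stand-in for a real database driver's factory
function createClient(url: string) {
  return {
    url,
    query: async (_sql: string): Promise<unknown[]> => [],
  };
}

// Module scope survives between invocations on a warm Serverless Function
// instance, so caching the client here avoids opening a new database
// connection on every request.
let client: ReturnType<typeof createClient> | undefined;

export function getClient() {
  if (!client) {
    client = createClient(process.env.DATABASE_URL ?? '');
  }
  return client;
}
```

Pairing this with a connection pooler (or an HTTP-based data API) keeps the total number of open connections bounded as your functions scale out.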
Each Vercel compute primitive has different location defaults:
- Serverless Functions: Single region by default, opt in to global
- Edge Functions: Global by default, opt in to single or multi-region
- Edge Middleware: Global by default (only option)
Keeping compute close to your data workload is critical for low-latency responses. While running compute close to your visitors is important, your overall latency will likely be higher if that placement requires longer network roundtrips back to your data location.
If your workload can support using Edge Functions, but your data is not geographically distributed or replicated, consider placing your Edge Function in the same region as your data. This gives you the low-latency benefits of the slim runtime while also preventing long network roundtrips to retrieve data. Learn more about this approach.
We are also building a globally distributed, edge-first configuration data store. This gives you near-instant reads of your configuration data from all Vercel compute primitives. Learn more about the private beta of Edge Config.
Vercel’s Edge Network provides caching in every region globally. To achieve the fastest response times, make sure data fetched from your data store is properly cached at the Edge.
Incremental Static Regeneration automates setting caching headers correctly and globally storing generated assets for you. This ensures the highest availability for your traffic and prevents accidental misconfiguration of `cache-control` headers.

You can manually configure `cache-control` headers when using Edge Functions and Serverless Functions to cache the response data in every Edge region. Edge Middleware runs before the Edge Network cache layer and cannot use `cache-control` headers.
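As a sketch, a function response with a manually configured `cache-control` header might look like the following; the `s-maxage` and `stale-while-revalidate` values are illustrative, not recommendations:

```typescript
// A sketch of caching a function response on Vercel's Edge Network.
// `s-maxage` controls the shared edge cache; the browser cache is unaffected.
export async function GET(): Promise<Response> {
  const body = JSON.stringify({ generatedAt: new Date().toISOString() });
  return new Response(body, {
    headers: {
      'content-type': 'application/json',
      // Cache at the edge for 60s, then serve stale for up to 30s
      // while revalidating in the background
      'cache-control': 's-maxage=60, stale-while-revalidate=30',
    },
  });
}
```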
Explore how to use a variety of data platforms including popular databases, ORMs, and more, with our templates marketplace.
- AWS Aurora
- AWS DynamoDB
- AWS S3
- Azure Blob
- Redis (Upstash)
Use an Edge Function located in the `iad1` (US East) region. This ensures your compute is located close to your data and has low latency.
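With the Next.js App Router, pinning an Edge Function to a region can be sketched like this (the route path is a hypothetical example; `preferredRegion` is the relevant route segment config option):

```typescript
// app/api/data/route.ts — a sketch assuming the Next.js App Router on Vercel
export const runtime = 'edge';          // run as an Edge Function
export const preferredRegion = 'iad1';  // pin compute next to the database

export async function GET(): Promise<Response> {
  // In a real route you would query your regional database here
  return Response.json({ region: preferredRegion });
}
```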
The runtime for Edge Functions is a subset of the available Node.js APIs. This means Node.js APIs like reading or writing to the file system (`fs`) are not available. If a library depends on Node.js APIs (and is not compatible with Edge Functions), you can either:
- Explore an alternate package that is compatible with edge compute. For example, if you are looking to handle JWTs or encryption/decryption, you can use [jose](https://github.com/panva/jose), which is compliant with the edge runtime.
- Use a Serverless Function, which uses the full Node.js runtime.
If you need to read data on every request, globally, you need a solution that has the lowest possible latency. Examples might be reading A/B testing configurations or urgent redirects.
We recommend using Vercel Edge Config for ultra-low latency in every Edge region. Most lookups return in 5 ms or less, and 99% of reads return in under 15 ms. This makes it a perfect solution for storing configuration data globally while keeping reads fast.
If you need more than configuration data, such as Redis, we recommend exploring edge-compatible solutions like Upstash, which can be globally replicated to different regions.
Using a traditional relational database in multiple regions is possible, but potentially costly and difficult to maintain. You might instead keep your relational database regional and replicate only critical data to edge regions, running most workloads in a single region with regionally located compute.
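One way to sketch that split: send all writes to the single primary and route reads to a nearby replica, keyed by the deployment's region. The region IDs and connection URLs below are hypothetical placeholders:

```typescript
// A sketch of read/write routing for a regional primary with read replicas.
// Region IDs and connection URLs are hypothetical placeholders.
const PRIMARY_URL = 'https://db-us-east.example.com';

const READ_REPLICAS: Record<string, string> = {
  iad1: 'https://db-us-east.example.com',
  fra1: 'https://db-eu-central.example.com',
};

export function databaseUrl(region: string, isWrite: boolean): string {
  // Writes always go to the primary; reads prefer an in-region replica,
  // falling back to the primary when no replica exists nearby.
  if (isWrite) return PRIMARY_URL;
  return READ_REPLICAS[region] ?? PRIMARY_URL;
}
```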