Performance Optimization
The ski-alerts app works. But "works" and "fast" are different things. This lesson covers the patterns already in your codebase that make it fast, and adds caching headers to your API routes so repeated requests don't hit the weather API unnecessarily.
Outcome
Add Cache-Control headers to API endpoints and understand the parallel fetching patterns already in the app.
Fast Track
- Add `Cache-Control` headers to the evaluate endpoint
- Understand the `Promise.all` pattern in `fetchAllConditions`
- Know when to cache and when not to
What's Already Fast
The ski-alerts app has two performance patterns built in. Let's look at them.
Parallel Data Fetching
The `fetchAllConditions` function in `src/lib/services/weather.ts` fetches weather for all 5 resorts in parallel:

```typescript
export async function fetchAllConditions(resorts: Resort[]): Promise<ResortConditions[]> {
  const results = await Promise.all(
    resorts.map(async (resort) => {
      const weather = await fetchWeather(resort);
      return { resort, weather };
    })
  );
  return results;
}
```

Without `Promise.all`, fetching 5 resorts sequentially takes ~2.5 seconds (5 x 500ms each). With `Promise.all`, all 5 requests happen concurrently and the total time is ~500ms, the duration of the slowest single request.
```
Sequential: resort1 (500ms) → resort2 (500ms) → resort3 (500ms) → ... = ~2.5s

Parallel:   resort1 (500ms)
            resort2 (500ms)
            resort3 (500ms)   = ~500ms total
            resort4 (500ms)
            resort5 (500ms)
```
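You can verify this timing claim in isolation. The sketch below uses hypothetical resort IDs and a fake 100ms delay in place of the real weather API, but the `Promise.all` shape is the same as in `fetchAllConditions`:

```typescript
// Standalone sketch: five fake 100ms "weather fetches" run via Promise.all.
// Total elapsed time tracks the slowest single call (~100ms), not the sum (~500ms).
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fakeFetchWeather(resortId: string): Promise<string> {
  await delay(100); // stand-in for one ~100ms network request
  return `conditions for ${resortId}`;
}

const resortIds = ['alpine', 'birch', 'cedar', 'fir', 'spruce']; // hypothetical IDs

const start = Date.now();
// .map() starts all five calls immediately; Promise.all waits for the last to settle
const results = await Promise.all(resortIds.map((id) => fakeFetchWeather(id)));
const elapsedMs = Date.now() - start;

console.log(`${results.length} resorts fetched in ~${elapsedMs}ms`);
```

Swapping `Promise.all` for a sequential `for...of` loop over `await fakeFetchWeather(id)` would push the elapsed time to roughly 500ms.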
ISR on the Dashboard
From lesson 4.1, the dashboard uses ISR with a 5-minute expiration. Most visitors get an instant cached response instead of waiting for weather API calls.
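For reference, an ISR setup with `@sveltejs/adapter-vercel` typically looks like the sketch below. The exact values come from lesson 4.1; this is a reminder of the shape, not new configuration to add:

```typescript
// src/routes/+page.server.ts (sketch — a 5-minute ISR expiration as described above)
import type { Config } from '@sveltejs/adapter-vercel';

export const config: Config = {
  isr: {
    // Vercel regenerates the page at most once every 300 seconds;
    // visitors in between get the cached HTML instantly
    expiration: 300
  }
};
```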
Hands-on exercise 4.3
Let's add caching headers to the evaluate endpoint and review the app's performance patterns:
Requirements:
- Add `Cache-Control` headers to `GET` responses from the evaluate endpoint
- Ensure `POST` endpoints are never cached (they have side effects)
- Review and understand the parallel fetching pattern in the weather service
Implementation hints:
- Use `Cache-Control: public, s-maxage=60, stale-while-revalidate=300` for the evaluate endpoint. This caches for 1 minute on the CDN and serves stale for up to 5 minutes while revalidating
- `s-maxage` controls CDN/edge caching; `max-age` controls browser caching
- `stale-while-revalidate` serves the cached version while fetching a fresh one in the background
- POST requests should never be cached, but you don't need to set headers because browsers and CDNs don't cache POST by default
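To make those two directives concrete, here is a small sketch that classifies a cached response by its age in seconds. This is not Vercel's actual cache implementation, just the semantics of `s-maxage` and `stale-while-revalidate`:

```typescript
// Sketch of the directive semantics for `s-maxage=60, stale-while-revalidate=300`.
type CacheOutcome = 'fresh' | 'stale-while-revalidate' | 'expired';

function cacheOutcome(ageSeconds: number, sMaxage = 60, swr = 300): CacheOutcome {
  if (ageSeconds <= sMaxage) return 'fresh'; // served from cache, no origin hit
  if (ageSeconds <= sMaxage + swr) return 'stale-while-revalidate'; // served stale, refreshed in background
  return 'expired'; // origin must be hit before responding
}

console.log(cacheOutcome(30));  // 'fresh'
console.log(cacheOutcome(120)); // 'stale-while-revalidate'
console.log(cacheOutcome(400)); // 'expired'
```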
What NOT to cache:
- `/api/chat`: streaming responses are unique per user
- `/api/workflow`: side effects (evaluating and marking alerts)
- POST `/api/evaluate`: the current implementation is POST, so it's not cached by default
Try It
1. Add a GET handler to the evaluate endpoint for cacheable responses.

   You can add a simple GET endpoint that returns conditions for all resorts in `src/routes/api/evaluate/+server.ts`:

   ```typescript
   export const GET: RequestHandler = async () => {
     const conditions = await fetchAllConditions(resorts);
     return json(
       {
         resorts: conditions.map(({ resort, weather }) => ({
           id: resort.id,
           name: resort.name,
           conditions: weather.conditions,
           temperature: weather.temperature,
           snowfall: weather.snowfall24h
         })),
         fetchedAt: new Date().toISOString()
       },
       {
         headers: {
           'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300'
         }
       }
     );
   };
   ```
2. Deploy and test caching:

   ```bash
   curl -I https://your-app.vercel.app/api/evaluate
   ```

   Response headers on the first request:

   ```
   cache-control: public, s-maxage=60, stale-while-revalidate=300
   x-vercel-cache: MISS
   ```

   On the second request (served from the edge cache):

   ```
   x-vercel-cache: HIT
   ```
3. Measure the difference:
   - First request: ~500ms (weather API calls)
   - Cached request: ~5ms (edge cache hit)
Commit
```bash
git add -A
git commit -m "feat(perf): add caching headers to evaluate endpoint"
git push
```

Done-When
- GET `/api/evaluate` returns data with `Cache-Control` headers
- Subsequent requests show `x-vercel-cache: HIT`
- POST endpoints remain uncached
- You understand the parallel fetching pattern in `fetchAllConditions`
Solution
```typescript
import { json } from '@sveltejs/kit';
import { resorts, getResort } from '$lib/data/resorts';
import { fetchWeather, fetchAllConditions } from '$lib/services/weather';
import { evaluateCondition } from '$lib/services/alerts';
import type { Alert } from '$lib/schemas/alert';
import type { RequestHandler } from './$types';

interface EvaluationResult {
  alertId: string;
  resortId: string;
  resortName: string;
  triggered: boolean;
  condition: Alert['condition'];
  weather: {
    temperature: number;
    snowfall: number;
    conditions: string;
  };
}

// Cacheable GET endpoint for current conditions
export const GET: RequestHandler = async () => {
  const conditions = await fetchAllConditions(resorts);
  return json(
    {
      resorts: conditions.map(({ resort, weather }) => ({
        id: resort.id,
        name: resort.name,
        conditions: weather.conditions,
        temperature: weather.temperature,
        snowfall: weather.snowfall24h
      })),
      fetchedAt: new Date().toISOString()
    },
    {
      headers: {
        'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300'
      }
    }
  );
};

// POST handler for evaluating specific alerts (not cached)
export const POST: RequestHandler = async ({ request }) => {
  const requestId = crypto.randomUUID().slice(0, 8);
  const startTime = Date.now();
  const { alerts } = (await request.json()) as { alerts: Alert[] };
  if (!alerts || !Array.isArray(alerts)) {
    return json({ error: 'alerts array required' }, { status: 400 });
  }
  const results: EvaluationResult[] = [];
  const alertsByResort = new Map<string, Alert[]>();
  for (const alert of alerts) {
    const existing = alertsByResort.get(alert.resortId) || [];
    existing.push(alert);
    alertsByResort.set(alert.resortId, existing);
  }
  console.log(`[Evaluate] Started`, {
    requestId,
    alertCount: alerts.length,
    resorts: [...alertsByResort.keys()]
  });
  for (const [resortId, resortAlerts] of alertsByResort) {
    const resort = getResort(resortId);
    if (!resort) {
      console.warn(`[Evaluate] Resort not found: ${resortId}`, { requestId });
      continue;
    }
    try {
      const weather = await fetchWeather(resort);
      for (const alert of resortAlerts) {
        const triggered = evaluateCondition(alert.condition, weather);
        results.push({
          alertId: alert.id,
          resortId: alert.resortId,
          resortName: resort.name,
          triggered,
          condition: alert.condition,
          weather: {
            temperature: weather.temperature,
            snowfall: weather.snowfall24h,
            conditions: weather.conditions
          }
        });
      }
    } catch (error) {
      console.error(`[Evaluate] Weather fetch failed for ${resort.name}:`, {
        requestId,
        error: String(error)
      });
    }
  }
  console.log(`[Evaluate] Completed`, {
    requestId,
    duration: Date.now() - startTime,
    evaluated: results.length,
    triggered: results.filter((r) => r.triggered).length
  });
  return json({
    evaluated: results.length,
    triggered: results.filter((r) => r.triggered).length,
    results
  });
};
```

Troubleshooting
Check that you're returning the headers in the second argument to `json()`. The syntax is `json(data, { headers: { 'Cache-Control': '...' } })`. If you put the headers object inside the data, they won't be set on the HTTP response.

Make sure you exported a named `GET` constant, not a default export. SvelteKit expects `export const GET: RequestHandler = async () => { ... }`. Also verify the file is `+server.ts`, not `+page.server.ts`. Page server files don't support custom HTTP method handlers.
Cache-Control Cheat Sheet
| Header | Where it caches | Duration |
|---|---|---|
| `max-age=60` | Browser | 60 seconds |
| `s-maxage=60` | CDN/edge | 60 seconds |
| `stale-while-revalidate=300` | CDN serves stale while refreshing | Up to 5 minutes |
| `no-store` | Nowhere | Never cached |
| `private` | Browser only | Not on CDN |
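As a quick check on how these directives combine, here is a tiny hypothetical helper (`cacheControl` is not part of SvelteKit or the ski-alerts codebase) that composes them into a header value:

```typescript
// Hypothetical helper: build a Cache-Control header value from options.
interface CacheOptions {
  maxAge?: number;               // browser cache lifetime, seconds
  sMaxage?: number;              // CDN/edge cache lifetime, seconds
  staleWhileRevalidate?: number; // serve-stale window, seconds
  noStore?: boolean;             // opt out of caching entirely
  private?: boolean;             // browser only, never the CDN
}

function cacheControl(opts: CacheOptions): string {
  if (opts.noStore) return 'no-store';
  const parts: string[] = [opts.private ? 'private' : 'public'];
  if (opts.maxAge !== undefined) parts.push(`max-age=${opts.maxAge}`);
  if (opts.sMaxage !== undefined) parts.push(`s-maxage=${opts.sMaxage}`);
  if (opts.staleWhileRevalidate !== undefined) {
    parts.push(`stale-while-revalidate=${opts.staleWhileRevalidate}`);
  }
  return parts.join(', ');
}

console.log(cacheControl({ sMaxage: 60, staleWhileRevalidate: 300 }));
// public, s-maxage=60, stale-while-revalidate=300
console.log(cacheControl({ noStore: true })); // no-store
```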
For the ski-alerts app:
- Dashboard page: ISR handles caching (lesson 4.1)
- GET `/api/evaluate`: CDN-cached with `s-maxage=60`
- POST endpoints: Not cached (default for POST)
- Streaming endpoints: Not cacheable (unique per request)
Advanced: Vercel Speed Insights
For real user performance monitoring, add Vercel Speed Insights to track Core Web Vitals (LCP, INP, CLS) across your deployed app. See the Speed Insights docs for setup instructions.
The key metrics to watch for ski-alerts:
- LCP (Largest Contentful Paint): How fast the conditions dashboard renders. ISR keeps this fast
- INP (Interaction to Next Paint): How quickly the UI responds to interactions like sending a chat message. Streaming helps here
- CLS (Cumulative Layout Shift): Whether the page shifts as data loads. The fixed layout prevents this