Measure Before You Optimize

You can't optimize what you don't measure. Or you can, but you'll waste a week speeding up the thing that wasn't slow.

The pipeline works end-to-end now: validate → boot Sandbox → clone → collect → install → test → analyze → report. Some of those stages are fast and some are slow, and right now we have no idea which is which. Before we add snapshots in the next lesson, let's get the baseline.

Outcome

Add timing measurements around each stage in src/cli.ts (or the lifecycle, your call), and print a per-stage breakdown plus total runtime at the end of every review.

Fast Track

  1. Define a time(label, fn) helper that wraps an async function and logs its duration.
  2. Wrap each pipeline stage in the CLI with time.
  3. Track total runtime separately and print a summary at the end.

Hands-on exercise

Open src/cli.ts and add the helper plus the timing wrappers. The cleanest spot for the helper is at the top of the file:

import { Command } from 'commander';
import { runSandboxLifecycle } from './sandbox-lifecycle';
import { analyzeRepository } from './analyze';
import { parseTestFailures } from './test-runner';
 
function isValidGitHubRepoUrl(input: string): boolean {
  return /^https:\/\/github\.com\/[\w.-]+\/[\w.-]+\/?$/.test(input);
}
 
async function time<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const startedAt = Date.now();
  try {
    return await fn();
  } finally {
    const durationMs = Date.now() - startedAt;
    console.log(`  ⏱  ${label}: ${durationMs}ms`);
  }
}
 
const program = new Command();
 
program
  .name('repo-review')
  .description('Clone and review a GitHub repository in a Sandbox')
  .version('0.1.0');
 
program
  .command('review <repoUrl>')
  .description('Run a Sandbox review against a GitHub repository URL')
  .action(async (repoUrl: string) => {
    if (!isValidGitHubRepoUrl(repoUrl)) {
      console.error(`Invalid GitHub repository URL: ${repoUrl}`);
      console.error('Expected format: https://github.com/<owner>/<repo>');
      process.exitCode = 2;
      return;
    }
 
    console.log(`Reviewing ${repoUrl}...`);
    const totalStart = Date.now();
 
    try {
      const lifecycle = await time('sandbox lifecycle', () => runSandboxLifecycle(repoUrl));
      const aiReview = lifecycle.files.length === 0
        ? { overallRisk: 'low' as const, findings: [] }
        : await time('ai analysis', () => analyzeRepository(lifecycle.files));
 
      const testFindings = parseTestFailures(
        `${lifecycle.testResult.stdout}\n${lifecycle.testResult.stderr}`
      );
 
      const combined = {
        overallRisk: testFindings.length > 0 ? 'high' as const : aiReview.overallRisk,
        aiFindings: aiReview.findings,
        testFindings
      };
 
      console.log(`\nOverall risk: ${combined.overallRisk}`);
      console.log(`AI findings: ${combined.aiFindings.length}`);
      for (const finding of combined.aiFindings) {
        console.log(`  [${finding.severity}] ${finding.summary} (${finding.file})`);
      }
      console.log(`Test findings: ${combined.testFindings.length}`);
      for (const finding of combined.testFindings) {
        console.log(`  [${finding.severity}] ${finding.details}`);
      }
 
      console.log(`\nTotal: ${Date.now() - totalStart}ms`);
    } catch (error) {
      console.error('Review failed:', error instanceof Error ? error.message : error);
      process.exitCode = 1;
    }
  });
 
program.parse();

The time helper uses try/finally so we log the duration even when the wrapped function throws. Knowing "ai analysis failed after 18 seconds" is way more useful than just knowing it failed.

Right now the per-stage breakdown is coarse (just lifecycle + analysis). The lifecycle itself does several big things internally (boot, clone, install, test). If you want sub-stage timing inside the lifecycle, you can add the same time helper there. For now, the rough split is enough to see where the seconds go.
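If you do want that sub-stage split, the same helper transfers directly. Below is a minimal, self-contained sketch — the stage bodies are stand-in delays, not the real boot/clone/install/test calls, and the stage names are placeholders for whatever your lifecycle actually does:

```typescript
// Stand-in for real async work (boot, clone, install, test).
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Same helper as in src/cli.ts.
async function time<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const startedAt = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`  ⏱  ${label}: ${Date.now() - startedAt}ms`);
  }
}

async function runSandboxLifecycle(repoUrl: string) {
  // Each sub-stage gets its own line in the breakdown.
  await time('boot', () => sleep(30));    // stand-in for booting the Sandbox
  await time('clone', () => sleep(20));   // stand-in for cloning repoUrl
  await time('install', () => sleep(50)); // stand-in for pnpm install
  await time('test', () => sleep(10));    // stand-in for the test run
}

runSandboxLifecycle('https://github.com/vercel/examples');
```

The nesting composes cleanly: the outer `sandbox lifecycle` line in the CLI still reports the total, and the inner lines explain where it went.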

Troubleshooting: timing shows 0ms

If a stage prints 0ms, the wrapped function resolved before any real work happened — usually because a promise was created inside the callback but never returned or awaited. Check that the function actually awaits (or returns) the promise for the work it kicks off.
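The classic way to hit this is firing a promise without returning it. A self-contained sketch of the bug and the fix, using a short delay as a stand-in for real work:

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function time<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const startedAt = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`  ⏱  ${label}: ${Date.now() - startedAt}ms`);
  }
}

async function main() {
  // Bug: the callback starts the work but resolves immediately — logs ~0ms
  // while the sleep keeps running unmeasured in the background.
  await time('broken', async () => { void sleep(100); });

  // Fix: return the promise so time() actually waits for it.
  await time('fixed', () => sleep(100));
}

main();
```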

Note: Date.now is good enough

For benchmarking at the second/minute scale, Date.now() is fine. performance.now() gets you fractional milliseconds, which we don't need here. Don't overthink it.
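If you ever do want the extra precision, the swap is mechanical. A sketch of a performance.now() variant (`timePrecise` is a hypothetical name, not part of the lesson's code):

```typescript
import { performance } from 'node:perf_hooks';

// Same shape as time(), swapped onto the high-resolution monotonic clock.
// performance.now() is immune to system clock adjustments, which Date.now() is not.
async function timePrecise<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const startedAt = performance.now();
  try {
    return await fn();
  } finally {
    console.log(`  ⏱  ${label}: ${(performance.now() - startedAt).toFixed(1)}ms`);
  }
}

timePrecise('demo', async () => 21 * 2);
```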

Try It

pnpm review https://github.com/vercel/examples

Expected output:

Reviewing https://github.com/vercel/examples...
  ⏱  sandbox lifecycle: 24180ms
  ⏱  ai analysis: 7240ms

Overall risk: low
AI findings: 2
  ...
Test findings: 0

Total: 31420ms

Two takeaways from a real run.

First, the lifecycle dominates. Most of the 24 seconds is pnpm install, which we'll attack with snapshots next lesson.

Second, the AI analysis is fixed overhead. Smaller prompts will be faster, larger ones slower, but it doesn't scale with repo size the way install does.

Knowing this, the snapshot work in 5.2 is justified. Without these numbers it would have been a guess.

Commit

git add src/cli.ts
git commit -m "feat(cli): add per-stage timing and total runtime"

Done-When

  • time(label, fn) helper logs duration even when the wrapped function throws
  • Each major stage is wrapped in time
  • Total runtime prints at the end of every successful review
  • Timing also prints when a stage fails (so we know where time was spent before the error)
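The throw-logging bullets are easy to check in isolation. A self-contained sketch: a fake stage that fails after a short delay, where the finally block still prints the duration before the error reaches the caller:

```typescript
async function time<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const startedAt = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`  ⏱  ${label}: ${Date.now() - startedAt}ms`);
  }
}

// A stage that throws after ~30ms: the duration line prints first,
// then the rejection propagates to the caller's catch.
time('failing stage', async () => {
  await new Promise((resolve) => setTimeout(resolve, 30));
  throw new Error('boom');
}).catch((err) => {
  console.log(`caught: ${(err as Error).message}`);
});
```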

Solution

src/cli.ts (helper)
async function time<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const startedAt = Date.now();
  try {
    return await fn();
  } finally {
    const durationMs = Date.now() - startedAt;
    console.log(`  ⏱  ${label}: ${durationMs}ms`);
  }
}
