---
title: "Design the Review Schema"
description: "A schema is a contract the model has to keep. In this lesson, we define Zod schemas for a single finding (severity, category, file, summary, recommendation) and for the overall review (risk level + findings array) so the next lesson can demand exactly that shape from the model."
canonical_url: "https://vercel.com/academy/vercel-sandbox/design-the-review-schema"
md_url: "https://vercel.com/academy/vercel-sandbox/design-the-review-schema.md"
docset_id: "vercel-academy"
doc_version: "1.0"
last_updated: "2026-05-17T20:21:35.048Z"
content_type: "lesson"
course: "vercel-sandbox"
course_title: "Vercel Sandbox"
prerequisites: []
---

<agent-instructions>
Vercel Academy — structured learning, not reference docs.
Lessons are sequenced.
Adapt commands to the human's actual environment (OS, package manager, shell, editor) — detect from project context or ask, don't assume.
The lesson shows one path; if the human's project diverges, adapt concepts to their setup.
Preserve the learning goal over literal steps.
Quizzes are pedagogical — engage, don't spoil.
Quiz answers are included for your reference.
</agent-instructions>

# Design the Review Schema

The fix for vague AI output isn't a better prompt. It's a contract.

When we hand the model a Zod schema, we're saying "you can return whatever you want, as long as it has these fields, with these types, with these allowed values." The model still has full creative license, just inside a box we built. And the box is what makes the output useful, comparable, and easy to feed into the next thing.

## Outcome

Define two Zod schemas in `src/analyze.ts`: one for an individual `Finding`, and one for the overall `Review` that wraps an array of findings.

## Fast Track

1. Install `zod`.
2. Define `findingSchema` with enums for `severity` and `category`.
3. Define `reviewSchema` with an `overallRisk` enum and `findings` array.

## Hands-on exercise

`zod` is already in the starter's `package.json`. From scratch, you'd run:

```bash
pnpm add zod
```

Add the schemas to `src/analyze.ts`. We're keeping `analyzeWithPromptV1` around (we'll delete the test caller in 3.4) and adding the schema definitions above it:

```ts
import { generateText } from 'ai';
import { z } from 'zod';

export const findingSchema = z.object({
  severity: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.enum(['security', 'quality', 'performance', 'reliability']),
  file: z.string(),
  summary: z.string(),
  recommendation: z.string()
});

export const reviewSchema = z.object({
  overallRisk: z.enum(['low', 'medium', 'high']),
  findings: z.array(findingSchema)
});

export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;

// existing analyzeWithPromptV1 stays below
export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });

  return result.text;
}
```

A few design choices worth flagging.

Severity has four levels because three felt too coarse and five felt like overthinking it. Critical → high → medium → low maps to how a human would skim the report.
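That skim order can be made concrete with a small helper. This is purely illustrative — `severityRank` and `sortBySeverity` are hypothetical names, not part of the lesson's `analyze.ts`:

```typescript
// Hypothetical helper: rank the four severity levels so findings can be
// sorted most-severe-first, matching how a human would skim the report.
type Severity = 'low' | 'medium' | 'high' | 'critical';

const severityRank: Record<Severity, number> = {
  critical: 0,
  high: 1,
  medium: 2,
  low: 3
};

function sortBySeverity<T extends { severity: Severity }>(findings: T[]): T[] {
  // Copy before sorting so the caller's array is left untouched.
  return [...findings].sort(
    (a, b) => severityRank[a.severity] - severityRank[b.severity]
  );
}
```

Because the enum doubles as an ordering, a report renderer can sort findings without any extra model output.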

Category has only four values on purpose. Limiting categories prevents the model from inventing new ones ("aesthetic", "philosophical") that don't help anyone triage. Adding a category later is easy; removing one is awkward.

`overallRisk` has three levels (low/medium/high), one fewer than per-finding severity. A single critical finding in an otherwise clean repo isn't a "critical" overall risk; it's a high one. The asymmetry is intentional.
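One way to encode that asymmetry — a sketch, not something the lesson asks you to write, and `deriveOverallRisk` is a hypothetical name — is a mapping that caps per-finding severity at `'high'`:

```typescript
// Illustrative sketch: derive an overallRisk from per-finding severities.
// Note the deliberate cap — 'critical' findings map to a 'high' overall
// risk, never a 'critical' one, matching the three-level overallRisk enum.
type Severity = 'low' | 'medium' | 'high' | 'critical';
type Risk = 'low' | 'medium' | 'high';

function deriveOverallRisk(severities: Severity[]): Risk {
  if (severities.includes('critical') || severities.includes('high')) {
    return 'high';
  }
  if (severities.includes('medium')) {
    return 'medium';
  }
  return 'low';
}
```

In the lessons that follow, the model fills in `overallRisk` itself; a helper like this is only how you might sanity-check its judgment.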

We're also exporting both schemas and the inferred TypeScript types. Both will be used in the next lesson.

**Warning: Troubleshooting: TS errors on z.infer**

`z.infer<typeof findingSchema>` needs the schema constant to be in scope where the type is defined. If TypeScript warns that a schema is declared but never used, you've probably defined it without exporting it or wiring it into a type — make sure both the schema and its inferred type are exported.

**Note: Troubleshooting: tempted to add more enums**

Resist. Every additional enum value is another thing the model has to learn when to use. Start narrow, add fields only when you've seen real review output that needed them.

## Try It

There's nothing runnable yet (`generateObject` comes in 3.3), so we'll just typecheck:

```bash
pnpm tsc --noEmit
```

Expected: no errors.

If you want to confirm the schemas parse correctly, drop this temporarily into the bottom of `src/analyze.ts` and run it:

```ts
const sample: Review = {
  overallRisk: 'medium',
  findings: [
    {
      severity: 'high',
      category: 'security',
      file: 'src/auth.ts',
      summary: 'Hardcoded password',
      recommendation: 'Use bcrypt and an env var'
    }
  ]
};

console.log(reviewSchema.parse(sample));
```

Expected output:

```txt
{
  overallRisk: 'medium',
  findings: [
    {
      severity: 'high',
      category: 'security',
      file: 'src/auth.ts',
      summary: 'Hardcoded password',
      recommendation: 'Use bcrypt and an env var'
    }
  ]
}
```

If the parse throws, the schema disagrees with the data. Try changing `severity: 'high'` to `severity: 'extreme'` to watch the validation fail.

Delete the temporary sample before moving on.

## Commit

```bash
git add src/analyze.ts
git commit -m "feat(analyze): define findings and review zod schemas"
```

## Done-When

- [ ] `zod` is available (already in the starter)
- [ ] `findingSchema` and `reviewSchema` are defined and exported
- [ ] `Finding` and `Review` types are exported via `z.infer`
- [ ] `pnpm tsc --noEmit` passes

## Solution

```ts title="src/analyze.ts"
import { generateText } from 'ai';
import { z } from 'zod';

export const findingSchema = z.object({
  severity: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.enum(['security', 'quality', 'performance', 'reliability']),
  file: z.string(),
  summary: z.string(),
  recommendation: z.string()
});

export const reviewSchema = z.object({
  overallRisk: z.enum(['low', 'medium', 'high']),
  findings: z.array(findingSchema)
});

export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;

export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });

  return result.text;
}
```


---

[Full course index](/academy/llms.txt) · [Sitemap](/academy/sitemap.md)
