Design the Review Schema
The fix for vague AI output isn't a better prompt. It's a contract.
When we hand the model a Zod schema, we're saying "you can return whatever you want, as long as it has these fields, with these types, with these allowed values." The model still has full creative license, just inside a box we built. And the box is what makes the output useful, comparable, and easy to feed into the next thing.
Outcome
Define two Zod schemas in src/analyze.ts: one for an individual Finding, and one for the overall Review that wraps an array of findings.
Fast Track

- Install `zod`.
- Define `findingSchema` with enums for `severity` and `category`.
- Define `reviewSchema` with an `overallRisk` enum and `findings` array.
Hands-on exercise
`zod` is already in the starter's package.json. From scratch, you'd run:

```shell
pnpm add zod
```

Add the schemas to src/analyze.ts. We're keeping `analyzeWithPromptV1` around (we'll delete the test caller in 3.4) and adding the schema definitions above it:
```typescript
import { generateText } from 'ai';
import { z } from 'zod';

export const findingSchema = z.object({
  severity: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.enum(['security', 'quality', 'performance', 'reliability']),
  file: z.string(),
  summary: z.string(),
  recommendation: z.string()
});

export const reviewSchema = z.object({
  overallRisk: z.enum(['low', 'medium', 'high']),
  findings: z.array(findingSchema)
});

export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;

// existing analyzeWithPromptV1 stays below
export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });
  return result.text;
}
```

A few design choices worth flagging.
Severity has four levels because three felt too coarse and five felt like overthinking it. Critical → high → medium → low maps to how a human would skim the report.
Category has only four values on purpose. Limiting categories prevents the model from inventing new ones ("aesthetic", "philosophical") that don't help anyone triage. Adding a category later is easy; removing one is awkward.
overallRisk has three levels (low/medium/high), one fewer than per-finding severity. A single critical finding in an otherwise clean repo isn't a "critical" overall risk, it's a high one. The asymmetry is intentional.
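One way to make that asymmetry concrete is a severity-to-risk mapping. The helper below is hypothetical, not part of the lesson's code (in this project the model assigns overallRisk itself), but it shows the collapse from four severity levels to three risk levels:

```typescript
// Hypothetical helper: collapses per-finding severities into the
// three-level overallRisk. Note the asymmetry: both 'critical' and
// 'high' findings cap out at an overall risk of 'high'.
type Severity = 'low' | 'medium' | 'high' | 'critical';
type Risk = 'low' | 'medium' | 'high';

function deriveOverallRisk(severities: Severity[]): Risk {
  if (severities.some((s) => s === 'critical' || s === 'high')) return 'high';
  if (severities.some((s) => s === 'medium')) return 'medium';
  return 'low'; // no findings, or only low-severity ones
}
```

A rule like this is also a useful cross-check later: if the model reports `overallRisk: 'low'` alongside a critical finding, something is off.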
We're also exporting both schemas and the inferred TypeScript types. Both will be used in the next lesson.
`z.infer<typeof schema>` works whether or not the schema is exported, but the next lesson imports both the schemas and the types from this file, so export all four. If your editor warns that a schema is "declared but never used", exporting it clears the warning.
Tempted to add more enum values? Resist. Every additional value is another thing the model has to learn when to use. Start narrow, and add values or fields only when you've seen real review output that needed them.
Try It
There's nothing runnable yet (generateObject comes in 3.3), so we'll just typecheck:
```shell
pnpm tsc --noEmit
```

Expected: no errors.
If you want to confirm the schemas parse correctly, drop this temporarily into the bottom of src/analyze.ts and run it:
```typescript
const sample: Review = {
  overallRisk: 'medium',
  findings: [
    {
      severity: 'high',
      category: 'security',
      file: 'src/auth.ts',
      summary: 'Hardcoded password',
      recommendation: 'Use bcrypt and an env var'
    }
  ]
};

console.log(reviewSchema.parse(sample));
```

Expected output:
```
{
  overallRisk: 'medium',
  findings: [
    {
      severity: 'high',
      category: 'security',
      file: 'src/auth.ts',
      summary: 'Hardcoded password',
      recommendation: 'Use bcrypt and an env var'
    }
  ]
}
```

If the parse throws, the schema disagrees with the data. Try changing `severity: 'high'` to `severity: 'extreme'` to watch the validation fail.
Delete the temporary sample before moving on.
Commit
```shell
git add src/analyze.ts
git commit -m "feat(analyze): define findings and review zod schemas"
```

Done-When

- `zod` is available (already in the starter)
- `findingSchema` and `reviewSchema` are defined and exported
- `Finding` and `Review` types are exported via `z.infer`
- `pnpm tsc --noEmit` passes
Solution
```typescript
import { generateText } from 'ai';
import { z } from 'zod';

export const findingSchema = z.object({
  severity: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.enum(['security', 'quality', 'performance', 'reliability']),
  file: z.string(),
  summary: z.string(),
  recommendation: z.string()
});

export const reviewSchema = z.object({
  overallRisk: z.enum(['low', 'medium', 'high']),
  findings: z.array(findingSchema)
});

export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;

export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });
  return result.text;
}
```