Generate Structured Reviews
Same model. Same code under review. New output shape.
In 3.1 we got back a paragraph. In this lesson we get back a typed object with severity levels, file paths, and concrete recommendations. The only thing that changed is which function we called and what we handed it.
Outcome
Add analyzeRepository(files) to src/analyze.ts. It uses generateObject with the reviewSchema from 3.2, takes an array of { path, content } files, and returns a typed Review.
Fast Track
- Import generateObject from ai.
- Write analyzeRepository(files) that builds a prompt from the files and calls generateObject({ schema, prompt, model }).
- Return result.object (typed as Review).
Hands-on exercise
Open src/analyze.ts and add the new function. The schemas and analyzeWithPromptV1 stay where they are:
import { generateObject, generateText } from 'ai';
import { z } from 'zod';
export const findingSchema = z.object({
severity: z.enum(['low', 'medium', 'high', 'critical']),
category: z.enum(['security', 'quality', 'performance', 'reliability']),
file: z.string(),
summary: z.string(),
recommendation: z.string()
});
export const reviewSchema = z.object({
overallRisk: z.enum(['low', 'medium', 'high']),
findings: z.array(findingSchema)
});
export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;
export async function analyzeWithPromptV1(source: string): Promise<string> {
const result = await generateText({
model: 'openai/codex-5.3',
prompt: `Review this code and tell me what is wrong:\n\n${source}`
});
return result.text;
}
export async function analyzeRepository(
files: Array<{ path: string; content: string }>
): Promise<Review> {
const prompt = [
'You are a senior application security and code quality reviewer.',
'Return only findings that are directly supported by the provided source.',
'Prefer precise, actionable recommendations over generic advice.',
'If there are no findings, return an empty findings array.',
'',
...files.map((f) => `FILE: ${f.path}\n${f.content}`)
].join('\n');
const result = await generateObject({
model: 'openai/codex-5.3',
schema: reviewSchema,
prompt
});
return result.object;
}

Two things to point out.
The prompt is doing work the schema can't. The schema enforces the shape of the output, but it can't tell the model "be specific" or "don't make things up." That's what the system-style preamble is for: it tells the model who it is, what to return, and what not to return. Those few sentences noticeably raise the quality of the findings.
The file format inside the prompt (FILE: path, then the file content on the following lines) is intentionally plain. We're not using JSON or YAML or anything fancy. Models handle "FILE: x" headers well because that pattern shows up constantly in their training data.
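The prompt assembly can also be factored into a small standalone helper, which makes it easy to unit-test the format. A sketch — buildReviewPrompt is a name invented here, not part of the lesson's code:

```typescript
type RepoFile = { path: string; content: string };

// Hypothetical helper: joins the reviewer preamble and the plain
// "FILE: path" sections into the single prompt string passed to the model.
function buildReviewPrompt(files: RepoFile[]): string {
  const preamble = [
    'You are a senior application security and code quality reviewer.',
    'Return only findings that are directly supported by the provided source.',
    'Prefer precise, actionable recommendations over generic advice.',
    'If there are no findings, return an empty findings array.',
    '' // blank line separating instructions from the file sections
  ];
  const sections = files.map((f) => `FILE: ${f.path}\n${f.content}`);
  return [...preamble, ...sections].join('\n');
}
```

Inside analyzeRepository you would then write `prompt: buildReviewPrompt(files)` instead of building the array inline.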
To verify, replace the temporary main() test caller at the bottom of src/analyze.ts with one that calls the new function:
async function main() {
const files = [
{
path: 'src/auth.ts',
content: `
export function login(user: string, password: string) {
if (password === 'admin') return true;
return false;
}
`
}
];
const review = await analyzeRepository(files);
console.log(JSON.stringify(review, null, 2));
}
main();

If generateObject throws a schema validation error, the model returned something that didn't match the shape. Usually the fix is tightening the prompt ("only use the listed categories") or loosening the schema (allowing more enum values).
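One pragmatic mitigation is to retry once with a stricter instruction appended. A minimal sketch with a hypothetical wrapper — `generate` stands in for a call to generateObject, and none of these names come from the AI SDK:

```typescript
// Hypothetical retry wrapper: on a validation failure, append a reminder
// about the allowed enum values and try exactly one more time.
async function generateWithOneRetry<T>(
  generate: (prompt: string) => Promise<T>,
  prompt: string
): Promise<T> {
  try {
    return await generate(prompt);
  } catch {
    // Tighten the instructions and retry once; a second failure propagates.
    const stricter =
      prompt + '\nOnly use the listed severity and category values.';
    return await generate(stricter);
  }
}
```

Whether a retry is worth the extra tokens depends on how often your model misses the schema; for a one-off CLI tool, letting the error surface is also fine.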
An empty array is a valid response. If the code you're reviewing actually has no issues, findings: [] is what we want. Don't read it as a bug.
Try It
pnpm tsx src/analyze.ts

Expected output (specific findings will vary, but the shape is fixed):
{
"overallRisk": "high",
"findings": [
{
"severity": "critical",
"category": "security",
"file": "src/auth.ts",
"summary": "Hardcoded admin password in login function",
"recommendation": "Replace the hardcoded check with a lookup against a securely hashed password store (bcrypt or argon2) and load the comparison value from environment configuration."
},
{
"severity": "high",
"category": "quality",
"file": "src/auth.ts",
"summary": "Function returns boolean instead of a typed user record",
"recommendation": "Return a discriminated union like { ok: true, user } | { ok: false, reason } so callers can react to specific failure modes."
}
]
}

Put that next to the prose blob from 3.1 and the difference is obvious. Each finding has a severity you can sort by, a file you can jump to, and a recommendation specific enough to act on. The schema did that.
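Because severity is a closed enum, consumers can rank findings mechanically instead of parsing prose. A minimal sketch — the rank map and sortBySeverity are assumptions for illustration, not part of the lesson's code:

```typescript
type Severity = 'low' | 'medium' | 'high' | 'critical';
type Finding = { severity: Severity; file: string; summary: string };

// Map each severity to a sortable rank, most severe first.
const severityRank: Record<Severity, number> = {
  critical: 0,
  high: 1,
  medium: 2,
  low: 3
};

// Return a copy of the findings ordered most severe first;
// an empty findings array simply stays empty.
function sortBySeverity(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) => severityRank[a.severity] - severityRank[b.severity]
  );
}
```

The same trick works for filtering (only show high and critical in CI, say), which is exactly the kind of downstream logic a free-text review can't support.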
Commit
git add src/analyze.ts
git commit -m "feat(analyze): generate structured reviews with generateObject"

Done-When
- analyzeRepository(files) is exported from src/analyze.ts
- It uses generateObject with reviewSchema
- Output is a typed Review object
- Running against a known-bad file produces concrete, file-anchored findings
Solution
import { generateObject, generateText } from 'ai';
import { z } from 'zod';
export const findingSchema = z.object({
severity: z.enum(['low', 'medium', 'high', 'critical']),
category: z.enum(['security', 'quality', 'performance', 'reliability']),
file: z.string(),
summary: z.string(),
recommendation: z.string()
});
export const reviewSchema = z.object({
overallRisk: z.enum(['low', 'medium', 'high']),
findings: z.array(findingSchema)
});
export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;
export async function analyzeWithPromptV1(source: string): Promise<string> {
const result = await generateText({
model: 'openai/codex-5.3',
prompt: `Review this code and tell me what is wrong:\n\n${source}`
});
return result.text;
}
export async function analyzeRepository(
files: Array<{ path: string; content: string }>
): Promise<Review> {
const prompt = [
'You are a senior application security and code quality reviewer.',
'Return only findings that are directly supported by the provided source.',
'Prefer precise, actionable recommendations over generic advice.',
'If there are no findings, return an empty findings array.',
'',
...files.map((f) => `FILE: ${f.path}\n${f.content}`)
].join('\n');
const result = await generateObject({
model: 'openai/codex-5.3',
schema: reviewSchema,
prompt
});
return result.object;
}