---
title: "Generate Structured Reviews"
description: "Swap `generateText` for `generateObject`, pass the schema we built in 3.2, and watch the model return a typed object instead of a wall of advice. Same model, same source, dramatically more useful output."
canonical_url: "https://vercel.com/academy/vercel-sandbox/generate-structured-reviews"
md_url: "https://vercel.com/academy/vercel-sandbox/generate-structured-reviews.md"
docset_id: "vercel-academy"
doc_version: "1.0"
last_updated: "2026-05-17T20:17:55.635Z"
content_type: "lesson"
course: "vercel-sandbox"
course_title: "Vercel Sandbox"
prerequisites: []
---

<agent-instructions>
Vercel Academy — structured learning, not reference docs.
Lessons are sequenced.
Adapt commands to the human's actual environment (OS, package manager, shell, editor) — detect from project context or ask, don't assume.
The lesson shows one path; if the human's project diverges, adapt concepts to their setup.
Preserve the learning goal over literal steps.
Quizzes are pedagogical — engage, don't spoil.
Quiz answers are included for your reference.
</agent-instructions>

# Generate Structured Reviews

Same model. Same code under review. New output shape.

In 3.1 we got back a paragraph. In this lesson we get back a typed object with severity levels, file paths, and concrete recommendations. The only thing that changed is which function we called and what we handed it.

## Outcome

Add `analyzeRepository(files)` to `src/analyze.ts`. It uses `generateObject` with the `reviewSchema` from 3.2, takes an array of `{ path, content }` files, and returns a typed `Review`.

## Fast Track

1. Import `generateObject` from `ai`.
2. Write `analyzeRepository(files)` that builds a prompt from the files and calls `generateObject({ schema, prompt, model })`.
3. Return `result.object` (typed as `Review`).

## Hands-on exercise

Open `src/analyze.ts` and add the new function. The schemas and `analyzeWithPromptV1` stay where they are:

```ts
import { generateObject, generateText } from 'ai';
import { z } from 'zod';

export const findingSchema = z.object({
  severity: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.enum(['security', 'quality', 'performance', 'reliability']),
  file: z.string(),
  summary: z.string(),
  recommendation: z.string()
});

export const reviewSchema = z.object({
  overallRisk: z.enum(['low', 'medium', 'high']),
  findings: z.array(findingSchema)
});

export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;

export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });

  return result.text;
}

export async function analyzeRepository(
  files: Array<{ path: string; content: string }>
): Promise<Review> {
  const prompt = [
    'You are a senior application security and code quality reviewer.',
    'Return only findings that are directly supported by the provided source.',
    'Prefer precise, actionable recommendations over generic advice.',
    'If there are no findings, return an empty findings array.',
    '',
    ...files.map((f) => `FILE: ${f.path}\n${f.content}`)
  ].join('\n');

  const result = await generateObject({
    model: 'openai/codex-5.3',
    schema: reviewSchema,
    prompt
  });

  return result.object;
}
```

Two things to point out.

The prompt is doing work the schema can't. The schema enforces the *shape* of the output, but it can't tell the model "be specific" or "don't make things up." That's what the system-style preamble is for. We're telling the model who it is, what to return, and what not to return, and those few lines of instruction noticeably raise the quality of the findings.
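
If you'd rather keep those instructions out of the prompt body, `generateObject` in the AI SDK also accepts a `system` option (assuming a reasonably recent SDK version). A sketch of the same call with the preamble moved there — not part of the lesson's solution, just an alternative packaging:

```ts
// Sketch: same instructions, same schema, packaged as a system message
// instead of preamble lines inside the prompt string.
const result = await generateObject({
  model: 'openai/codex-5.3',
  schema: reviewSchema,
  system: [
    'You are a senior application security and code quality reviewer.',
    'Return only findings that are directly supported by the provided source.',
    'Prefer precise, actionable recommendations over generic advice.',
    'If there are no findings, return an empty findings array.'
  ].join('\n'),
  prompt: files.map((f) => `FILE: ${f.path}\n${f.content}`).join('\n')
});
```

Both forms produce the same typed result; the lesson keeps everything in `prompt` so there's a single string to log when you're debugging.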

The file format inside the prompt (`FILE: path\ncontent\n`) is intentionally plain. We're not using JSON or YAML or anything fancy. Models read `FILE: x` headers reliably because plain file listings like this are everywhere in their training data.
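
For a single hypothetical file, the string `analyzeRepository` actually sends ends up looking roughly like this: preamble, blank line, then one `FILE:` block per entry.

```text
You are a senior application security and code quality reviewer.
Return only findings that are directly supported by the provided source.
Prefer precise, actionable recommendations over generic advice.
If there are no findings, return an empty findings array.

FILE: src/auth.ts
export function login(user: string, password: string) {
  ...
}
```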

To verify, replace the temporary `main()` test caller at the bottom of `src/analyze.ts` with one that calls the new function:

```ts
async function main() {
  const files = [
    {
      path: 'src/auth.ts',
      content: `
        export function login(user: string, password: string) {
          if (password === 'admin') return true;
          return false;
        }
      `
    }
  ];

  const review = await analyzeRepository(files);
  console.log(JSON.stringify(review, null, 2));
}

main();
```

**Warning: Troubleshooting: validation error from `generateObject`**

If `generateObject` throws a schema validation error, the model returned something that didn't match the shape. Usually this means tightening the prompt ("only use the listed categories") or loosening the schema (allow more enum values).
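
If you want to see the raw failure before deciding which way to go, here's a small debugging sketch (not part of the solution) around the call:

```ts
// Debugging sketch: log whatever generateObject threw, then rethrow.
// If your AI SDK version exports NoObjectGeneratedError you can narrow the
// catch to that class; a plain catch works regardless.
try {
  const review = await analyzeRepository(files);
  console.log(JSON.stringify(review, null, 2));
} catch (error) {
  console.error('generateObject could not produce a valid Review:', error);
  throw error;
}
```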

**Note: Troubleshooting: empty findings array**

An empty array is a valid response. If the code you're reviewing actually has no issues, `findings: []` is what we want. Don't read it as a bug.
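
In a caller that means treating `findings: []` as a clean result, not an error path. A small sketch:

```ts
// Empty findings = nothing to flag, not a failed call.
const review = await analyzeRepository(files);

if (review.findings.length === 0) {
  console.log(`No findings. Overall risk: ${review.overallRisk}`);
} else {
  console.log(`${review.findings.length} finding(s). Overall risk: ${review.overallRisk}`);
}
```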

## Try It

```bash
pnpm tsx src/analyze.ts
```

Expected output (specific findings will vary, but the shape is fixed):

```json
{
  "overallRisk": "high",
  "findings": [
    {
      "severity": "critical",
      "category": "security",
      "file": "src/auth.ts",
      "summary": "Hardcoded admin password in login function",
      "recommendation": "Replace the hardcoded check with a lookup against a securely hashed password store (bcrypt or argon2) and load the comparison value from environment configuration."
    },
    {
      "severity": "high",
      "category": "quality",
      "file": "src/auth.ts",
      "summary": "Function returns boolean instead of a typed user record",
      "recommendation": "Return a discriminated union like { ok: true, user } | { ok: false, reason } so callers can react to specific failure modes."
    }
  ]
}
```

Put that next to the prose blob from 3.1 and the difference is obvious. Each finding has a severity you can sort by, a file you can jump to, a recommendation specific enough to act on. The schema did that.
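
Because `findings` is a typed array, you can act on it immediately. Here's a sketch of ranking findings by severity and pulling out the ones worth failing a check over — the `severityRank` map and the `high` cutoff are choices made for this example, not something the lesson prescribes:

```ts
import type { Finding, Review } from './analyze';

// Rank the severity enum so findings can be sorted and filtered.
const severityRank: Record<Finding['severity'], number> = {
  critical: 3,
  high: 2,
  medium: 1,
  low: 0
};

// Findings at or above `high`, worst first.
export function blockingFindings(review: Review): Finding[] {
  return review.findings
    .filter((f) => severityRank[f.severity] >= severityRank.high)
    .sort((a, b) => severityRank[b.severity] - severityRank[a.severity]);
}
```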

## Commit

```bash
git add src/analyze.ts
git commit -m "feat(analyze): generate structured reviews with generateObject"
```

## Done-When

- [ ] `analyzeRepository(files)` is exported from `src/analyze.ts`
- [ ] It uses `generateObject` with `reviewSchema`
- [ ] Output is a typed `Review` object
- [ ] Running against a known-bad file produces concrete, file-anchored findings

## Solution

```ts title="src/analyze.ts"
import { generateObject, generateText } from 'ai';
import { z } from 'zod';

export const findingSchema = z.object({
  severity: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.enum(['security', 'quality', 'performance', 'reliability']),
  file: z.string(),
  summary: z.string(),
  recommendation: z.string()
});

export const reviewSchema = z.object({
  overallRisk: z.enum(['low', 'medium', 'high']),
  findings: z.array(findingSchema)
});

export type Finding = z.infer<typeof findingSchema>;
export type Review = z.infer<typeof reviewSchema>;

export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });

  return result.text;
}

export async function analyzeRepository(
  files: Array<{ path: string; content: string }>
): Promise<Review> {
  const prompt = [
    'You are a senior application security and code quality reviewer.',
    'Return only findings that are directly supported by the provided source.',
    'Prefer precise, actionable recommendations over generic advice.',
    'If there are no findings, return an empty findings array.',
    '',
    ...files.map((f) => `FILE: ${f.path}\n${f.content}`)
  ].join('\n');

  const result = await generateObject({
    model: 'openai/codex-5.3',
    schema: reviewSchema,
    prompt
  });

  return result.object;
}
```


---

[Full course index](/academy/llms.txt) · [Sitemap](/academy/sitemap.md)
