---
title: "The Naive Prompt"
description: "Start with the obvious \"review this code and tell me what's wrong\" prompt and run it against a real file. We're going to feel the problem before we fix it, because that's how the schema-driven version earns its keep in lesson 3.3."
canonical_url: "https://vercel.com/academy/vercel-sandbox/the-naive-prompt"
md_url: "https://vercel.com/academy/vercel-sandbox/the-naive-prompt.md"
docset_id: "vercel-academy"
doc_version: "1.0"
last_updated: "2026-05-17T15:08:21.761Z"
content_type: "lesson"
course: "vercel-sandbox"
course_title: "Vercel Sandbox"
prerequisites:  []
---

<agent-instructions>
Vercel Academy — structured learning, not reference docs.
Lessons are sequenced.
Adapt commands to the human's actual environment (OS, package manager, shell, editor) — detect from project context or ask, don't assume.
The lesson shows one path; if the human's project diverges, adapt concepts to their setup.
Preserve the learning goal over literal steps.
Quizzes are pedagogical — engage, don't spoil.
Quiz answers are included for your reference.
</agent-instructions>

# The Naive Prompt

## Write the Wrong Prompt on Purpose

The first AI review prompt usually reads like a text from a sleep-deprived intern: "look at this code and tell me if it is bad." Then the model replies with advice so generic it could apply to a toaster.

We're going to write exactly that prompt, run it, and look at what comes back. Not because it's the answer, but because it's the baseline. The structured-output version we build in 3.3 is going to feel like magic by comparison, and that comparison only lands if you've seen the bad version first.

## Outcome

Create `src/analyze.ts` with an `analyzeWithPromptV1` function that uses AI SDK v6's `generateText` and a deliberately vague prompt. Run it against a real file and read the (probably underwhelming) output.

## Fast Track

1. Install AI SDK v6 (`ai`) and load your API key.
2. Create `src/analyze.ts` with `analyzeWithPromptV1(source)` that calls `generateText`.
3. Call it from a quick test script with a real source file and print the result.

## Hands-on exercise

The AI SDK (`ai`) is already in the starter's `package.json`. From scratch, you'd install it with:

```bash
pnpm add ai
```

Create `src/analyze.ts`:

```ts
import { generateText } from 'ai';

export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });

  return result.text;
}
```

That's the whole "review" function. One prompt string, one model call, one text response. It will run. It will return something. The something will not be very useful.

Add a quick test script at the bottom of the file so you can run this without wiring it into the CLI yet:

```ts
async function main() {
  const source = `
    export function login(user: string, password: string) {
      if (password === 'admin') return true;
      return false;
    }
  `;

  const review = await analyzeWithPromptV1(source);
  console.log(review);
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

We're feeding it an obviously bad piece of code (hardcoded "admin" password, no validation, no hashing) so the failure mode is easy to see.

**Warning: Troubleshooting: missing API key**

The AI SDK reads credentials from environment variables (`OPENAI_API_KEY`, or whatever your gateway uses). If you get a 401, confirm the variable is loaded in your shell before running.
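A tiny preflight check makes this failure mode obvious before you pay for a model call. The helper below is a sketch, not part of the AI SDK — `assertApiKey` and the `OPENAI_API_KEY` default are assumptions; substitute whatever variable name your gateway actually reads:

```typescript
// Hypothetical helper: fail fast with a clear message when the key is
// missing, instead of surfacing a cryptic 401 from the model call.
export function assertApiKey(
  env: Record<string, string | undefined>,
  name = 'OPENAI_API_KEY'
): void {
  if (!env[name]) {
    throw new Error(`${name} is not set — load it before running the script.`);
  }
}

// Call it at the top of main():
// assertApiKey(process.env);
```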

**Note: Troubleshooting: model not available**

If `openai/codex-5.3` returns a "model not found" error, your gateway or account may not have access to it. Swap in any model you do have access to for this lesson; the point is the prompt shape, not the specific model.
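One low-friction way to do that swap is an environment-variable override, so you never edit source to try a different model. `REVIEW_MODEL` here is an invented name for this lesson, not an AI SDK convention:

```typescript
// Sketch: let an env var override the model id. REVIEW_MODEL is a
// made-up variable name, not something the AI SDK reads itself.
export function pickModel(env: Record<string, string | undefined>): string {
  return env.REVIEW_MODEL ?? 'openai/codex-5.3';
}

// Then inside analyzeWithPromptV1:
// model: pickModel(process.env),
```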

## Try It

```bash
pnpm tsx src/analyze.ts
```

Expected output (the exact text varies by run, but the shape is consistent):

```txt
This code has several issues to consider:

1. The password "admin" is hardcoded, which is a security concern.
2. There's no input validation on the user parameter.
3. The function doesn't use proper authentication patterns.
4. Consider adding error handling.
5. You may want to add types for better TypeScript usage.
```

Read that output. Then ask:

- Is "consider adding error handling" actionable? What error?
- Is "doesn't use proper authentication patterns" specific? Which patterns?
- Could we feed this output to another tool? It's a wall of prose.
- Would we want to compare two reviews? They'd be impossible to diff.
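To feel the "could we feed this to another tool" problem concretely, here's a naive attempt to parse the prose back into a list of issues. This regex sketch is exactly the brittleness a schema removes: it happens to work on a numbered list like the one above, and silently returns nothing the moment the model switches to bullets or plain paragraphs.

```typescript
// Naive prose parser: pull out lines that look like "1. ..." items.
// Brittle by design — the model never promised this format.
export function extractNumberedIssues(review: string): string[] {
  return review
    .split('\n')
    .map((line) => line.match(/^\s*\d+\.\s+(.*)$/))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => m[1].trim());
}
```

Run it against two reviews of the same file and you'll likely get arrays of different lengths and wordings — there's nothing stable to diff, which is the gap structured output closes.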

The model isn't wrong; it's just vague. That's what happens when we hand it an open-ended task and a vague prompt. In the next lesson we design a schema that forces the model to be specific.

## Commit

```bash
git add src/analyze.ts
git commit -m "feat(analyze): add naive v1 prompt as a baseline"
```

## Done-When

- [ ] `ai` package available (already in the starter)
- [ ] `src/analyze.ts` exports `analyzeWithPromptV1`
- [ ] Running it produces some response (anything, even bad)
- [ ] You read the response and noticed how vague it is

## Solution

```ts title="src/analyze.ts"
import { generateText } from 'ai';

export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });

  return result.text;
}
```


---

[Full course index](/academy/llms.txt) · [Sitemap](/academy/sitemap.md)
