Write the Wrong Prompt on Purpose

The first AI review prompt usually reads like a text from a sleep-deprived intern: "look at this code and tell me if it is bad." The model then replies with advice so generic it could apply to a toaster.

We're going to write exactly that prompt, run it, and look at what comes back. Not because it's the answer, but because it's the baseline. The structured-output version we build in 3.3 is going to feel like magic by comparison, and that comparison only lands if you've seen the bad version first.

Outcome

Create src/analyze.ts with an analyzeWithPromptV1 function that uses AI SDK v6's generateText and a deliberately vague prompt. Run it against a real file and read the (probably underwhelming) output.

Fast Track

  1. Install AI SDK v6 (ai) and load your API key.
  2. Create src/analyze.ts with analyzeWithPromptV1(source) that calls generateText.
  3. Call it from a quick test script with a real source file and print the result.

Hands-on exercise

The AI SDK (ai) is already in the starter's package.json. From scratch, you'd install it with:

pnpm add ai

Create src/analyze.ts:

import { generateText } from 'ai';
 
export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });
 
  return result.text;
}

That's the whole "review" function. One prompt string, one model call, one text response. It will run. It will return something. The something will not be very useful.

Add a quick test script at the bottom of the file so you can run this without wiring it into the CLI yet:

async function main() {
  const source = `
    export function login(user: string, password: string) {
      if (password === 'admin') return true;
      return false;
    }
  `;
 
  const review = await analyzeWithPromptV1(source);
  console.log(review);
}
 
main();

We're feeding it an obviously bad piece of code (hardcoded "admin" password, no validation, no hashing) so the failure mode is easy to see.

Troubleshooting: missing API key

The AI SDK reads credentials from environment variables (OPENAI_API_KEY, or whatever your gateway uses). If you get a 401, confirm the variable is loaded in your shell before running.
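If you want to fail fast with a readable message instead of a cryptic 401, you can add a small guard before the first model call. This is a sketch, not part of the lesson's required code; requireApiKey is a name we're inventing here:

```typescript
// Hypothetical guard: check the env var up front so a missing key
// produces a clear error instead of a 401 from the provider.
export function requireApiKey(env: NodeJS.ProcessEnv = process.env): string {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error(
      'OPENAI_API_KEY is not set. Export it before running, e.g. export OPENAI_API_KEY=...'
    );
  }
  return key;
}
```

Call it once at the top of main() and the failure mode becomes obvious at a glance.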

Troubleshooting: model not available

If openai/codex-5.3 returns a "model not found" error, your gateway or account may not have access. Swap to any model you do have access to for this lesson; the point isn't the specific model, it's the prompt shape.
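One way to make the swap painless is to read the model id from an environment variable with a fallback, so you never edit source to change models. A minimal sketch; ANALYZE_MODEL and resolveModel are names we're making up for this example:

```typescript
// Hypothetical helper: let an env var override the default model id
// so switching models doesn't require a code change.
export function resolveModel(env: NodeJS.ProcessEnv = process.env): string {
  return env.ANALYZE_MODEL ?? 'openai/codex-5.3';
}
```

Then pass resolveModel() as the model option instead of the hardcoded string.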

Try It

pnpm tsx src/analyze.ts

Expected output (the exact text varies by run, but the shape is consistent):

This code has several issues to consider:

1. The password "admin" is hardcoded, which is a security concern.
2. There's no input validation on the user parameter.
3. The function doesn't use proper authentication patterns.
4. Consider adding error handling.
5. You may want to add types for better TypeScript usage.

Read that output. Then ask:

  • Is "consider adding error handling" actionable? What error?
  • Is "doesn't use proper authentication patterns" specific? Which patterns?
  • Could we feed this output to another tool? It's a wall of prose.
  • Would we want to compare two reviews? They'd be impossible to diff.
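To see concretely why prose output is hard to build on, here's a sketch of what "feed this output to another tool" would look like today: a regex that guesses where the numbered findings are. extractFindings is hypothetical, and the regex only works while the model happens to format its answer as a numbered list:

```typescript
// Sketch: naive attempt to machine-read the prose review by pulling out
// numbered list items. This silently returns nothing the moment the model
// switches to bullets, paragraphs, or a different numbering style.
export function extractFindings(review: string): string[] {
  return review
    .split('\n')
    .map((line) => line.match(/^\s*\d+\.\s+(.*)$/)?.[1])
    .filter((finding): finding is string => Boolean(finding));
}
```

That fragility, not the quality of any single run, is the real argument for structured output.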

The model isn't wrong; it's just vague. That's what happens when we hand it an open-ended task with no required structure. In the next lesson we design a schema that forces the model to be specific.

Commit

git add src/analyze.ts
git commit -m "feat(analyze): add naive v1 prompt as a baseline"

Done-When

  • ai package available (already in the starter)
  • src/analyze.ts exports analyzeWithPromptV1
  • Running it produces some response (anything, even bad)
  • You read the response and noticed how vague it is

Solution

src/analyze.ts
import { generateText } from 'ai';
 
export async function analyzeWithPromptV1(source: string): Promise<string> {
  const result = await generateText({
    model: 'openai/codex-5.3',
    prompt: `Review this code and tell me what is wrong:\n\n${source}`
  });
 
  return result.text;
}
