What Makes Docs Agent-Friendly

Think about how you use API docs. You land on a page, scan the headings, find the endpoint you need, glance at the params, and maybe copy a curl command. If the param names are slightly wrong in the example, you figure it out. If an error case isn't documented, you experiment. You fill in gaps from experience.

Agents don't do any of that.

Agents are highly literal and very trusting of everything you write in the docs. If the docs say the parameter is called course_slug but the API actually expects courseSlug, the agent will send course_slug and get a 400. If an error case isn't documented, the agent has no recovery strategy. If the examples use placeholder data like "string" instead of real values, the agent might send the literal string "string" as a parameter.

This isn't a flaw in the agent. It's a flaw in the docs.

Outcome

Understand the seven patterns that make API documentation agent-friendly and recognize that you won't write these docs by hand.

Fast Track

  1. Learn the seven patterns that separate agent-friendly docs from human-only docs
  2. See concrete before/after examples for each pattern
  3. Understand why you'll automate doc generation instead of maintaining docs manually

Endpoint signatures in code blocks

Agents parse code blocks reliably. Prose descriptions of URLs are error-prone. This works:

GET /api/feedback

This doesn't (for agents):

Send a GET request to the feedback endpoint to retrieve all entries.

Both are clear to a human. Only the first is unambiguous to an agent.

Every endpoint in your docs should start with a code block containing the HTTP method and path. No extra words, no surrounding explanation inside the block. The code block is the source of truth.
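To see what the code-block form buys a parser, here's a minimal sketch (hypothetical helper, not part of the course code) that extracts the method and path with a single regex. The prose version gives the same parser nothing to anchor on:

```typescript
// Hypothetical sketch: extracting an endpoint signature from a docs code block.
// A bare "METHOD /path" line parses with one regex; a prose sentence does not.
type Signature = { method: string; path: string };

function parseSignature(block: string): Signature | null {
  const match = block.trim().match(/^(GET|POST|PUT|PATCH|DELETE)\s+(\/\S+)$/);
  return match ? { method: match[1], path: match[2] } : null;
}

parseSignature("GET /api/feedback");
// → { method: "GET", path: "/api/feedback" }
parseSignature("Send a GET request to the feedback endpoint.");
// → null: prose fails the parse entirely
```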

Parameters as tables

Agents extract structured data from markdown tables easily. Bullet lists with mixed formatting are much harder to parse consistently.

| Parameter   | Type   | Required | Description           |
|-------------|--------|----------|-----------------------|
| courseSlug  | string | no       | Filter by course slug |

Compare that to:

  • courseSlug (optional) - a string that filters by course

A human reads both. An agent reliably extracts from the table.

Tables give agents a consistent shape to parse: column headers as keys, rows as entries. Every query parameter, every request body field gets a row.
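As a sketch of what that consistent shape means in practice (hypothetical code, not from the course), a markdown table splits cleanly into keyed records, headers becoming keys and rows becoming entries:

```typescript
// Hypothetical sketch: turning a markdown parameter table into structured records.
// Column headers become keys; each body row becomes one entry.
function parseTable(md: string): Record<string, string>[] {
  const rows = md
    .trim()
    .split("\n")
    .map((line) => line.split("|").slice(1, -1).map((cell) => cell.trim()));
  const [headers, , ...body] = rows; // second row is the |---| separator
  return body.map((cells) =>
    Object.fromEntries(headers.map((h, i) => [h, cells[i]]))
  );
}

parseTable(`
| Parameter  | Type   | Required | Description           |
|------------|--------|----------|-----------------------|
| courseSlug | string | no       | Filter by course slug |
`);
// → [{ Parameter: "courseSlug", Type: "string", Required: "no",
//      Description: "Filter by course slug" }]
```

There is no equivalent one-pass extraction for a bullet list with ad-hoc phrasing, which is the whole argument for tables.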

Curl examples with real values

Every example request should use values that could actually work. Not "string" or "example" or "YOUR_VALUE_HERE". Real data from your seed file.

curl -X POST "http://localhost:3000/api/feedback" \
  -H "Content-Type: application/json" \
  -d '{
    "courseSlug": "bread-baking",
    "lessonSlug": "scoring-dough",
    "rating": 5,
    "comment": "The lame technique demo was incredibly helpful.",
    "author": "Alex Turner"
  }'

An agent will copy these values as a template. If your example uses "YOUR_COURSE_HERE", the agent might literally send that string. Real values show the format, the casing, the expected data types, all at once.

Placeholder values are landmines

Agents treat example values as templates. If your curl example uses "example-slug", an agent might send that exact string to your API. Use values from your actual seed data so the examples work when copied verbatim.

Complete response bodies

Show the full JSON response for every endpoint. No `...`, no "and so on." Truncated examples teach agents to generate truncated requests.

{
  "id": "fb-001",
  "courseSlug": "knife-skills",
  "lessonSlug": "the-claw-grip",
  "rating": 5,
  "comment": "Finally understand why my onion cuts were uneven. The claw grip changed everything.",
  "author": "Priya Sharma",
  "createdAt": "2026-03-01T10:30:00Z"
}

Every field, every value, every time. The response example is how an agent learns the shape of your data.

Exhaustive error documentation

Every error response gets its own block with the status code, the condition that triggers it, and the exact response body.

**Error response (400), missing fields:**

```json
{
  "error": "Missing required fields: courseSlug, lessonSlug, rating, comment, author"
}
```

**Error response (400), invalid rating:**

```json
{
  "error": "Rating must be a number between 1 and 5"
}
```

The label format matters. **Error response (STATUS), DESCRIPTION:** gives agents three things at a glance: it's an error, the status code, and what triggers it. Agents need to know every possible error shape so they can handle failures programmatically.
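As an illustration of why exhaustive error docs matter (hypothetical agent-side code, assuming the two 400 bodies documented above), each documented error shape becomes a recovery branch:

```typescript
// Hypothetical sketch: an agent can only branch on errors it has seen documented.
// Each documented 400 body above maps to a recovery strategy.
type ApiError = { error: string };

function recoveryHint(status: number, body: ApiError): string {
  if (status === 400 && body.error.startsWith("Missing required fields")) {
    return "add the listed fields and retry";
  }
  if (status === 400 && body.error.startsWith("Rating must be")) {
    return "clamp rating to an integer between 1 and 5 and retry";
  }
  return "undocumented error: no recovery strategy";
}
```

Anything you leave out of the docs falls through to the last branch, which is exactly the failure mode this pattern prevents.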

A schema section

Parameter tables tell agents what an endpoint accepts. A schema section tells them the shape of every data type in the system.

## Schema
 
### Feedback
 
| Field       | Type   | Description                              |
|-------------|--------|------------------------------------------|
| id          | string | Unique identifier (e.g. "fb-001")        |
| courseSlug  | string | Slug of the course                       |
| lessonSlug  | string | Slug of the lesson                       |
| rating      | number | Integer from 1 to 5                      |
| comment     | string | Feedback text                            |
| author      | string | Name of the person                       |
| createdAt   | string | ISO 8601 timestamp                       |

The schema section is a contract. The parameter tables in each endpoint say "these are the fields you can send." The schema section says "this is what every field means, everywhere in the API." Together, they give an agent everything it needs to construct valid requests.

Include format hints ("ISO 8601 timestamp"), value constraints ("Integer from 1 to 5"), and example values where helpful. The agent doesn't read your TypeScript types. It reads the docs.
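For comparison, the schema table above corresponds to a type like this (a sketch; the course's actual types may differ). The point of the schema section is that the markdown table has to carry everything this file knows:

```typescript
// Hypothetical sketch: the type the Feedback schema table describes.
// Agents never see this file; the docs table must encode the same facts.
interface Feedback {
  id: string;         // unique identifier, e.g. "fb-001"
  courseSlug: string; // slug of the course
  lessonSlug: string; // slug of the lesson
  rating: number;     // integer from 1 to 5
  comment: string;    // feedback text
  author: string;     // name of the person
  createdAt: string;  // ISO 8601 timestamp
}
```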

Workflow examples

Everything so far tells an agent how to call a single endpoint. But real tasks rarely involve just one call.

Say an agent needs to find the worst-performing lessons in a course. It needs to check the summary, filter the low ratings, and then pull the details. That's three endpoints in a specific order. If you don't spell it out, the agent has to figure out the sequence on its own. Sometimes it will. Sometimes it will try to do everything in one request and fail.

Workflow examples show agents how endpoints chain together to accomplish a task:

## Workflows
 
### Investigate low-rated feedback for a course
 
1. `GET /api/feedback/summary?courseSlug=knife-skills` — check the average rating and total entries
2. `GET /api/feedback?courseSlug=knife-skills&minRating=1` — pull all entries (minRating sets the floor, so 1 returns everything)
3. `GET /api/feedback/fb-003` — get the full details on a specific entry
 
### Submit and verify new feedback
 
1. `POST /api/feedback` — submit the feedback entry with all required fields
2. `GET /api/feedback/:id` — fetch the newly created entry using the `id` from the POST response
3. `GET /api/feedback/summary?courseSlug=bread-baking` — check updated stats for the course

Each step names the endpoint, the key parameters, and why you're making that call. The numbered sequence removes all ambiguity about what comes first.

This is the pattern agents are worst at inferring and best at following. Endpoint docs answer "how do I call this?" Workflows answer "how do I accomplish this?"
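The first workflow above, sketched as the request plan an agent would derive from it (hypothetical code; assumes the base URL and endpoints from the earlier examples):

```typescript
// Hypothetical sketch: the ordered request sequence an agent builds from the
// "investigate low-rated feedback" workflow. Each step is a concrete URL.
function planInvestigation(courseSlug: string, entryId: string): string[] {
  const base = "http://localhost:3000/api";
  return [
    `${base}/feedback/summary?courseSlug=${courseSlug}`,      // step 1: check stats
    `${base}/feedback?courseSlug=${courseSlug}&minRating=1`,  // step 2: pull all entries
    `${base}/feedback/${entryId}`,                            // step 3: drill into one entry
  ];
}

planInvestigation("knife-skills", "fb-003");
```

Without the numbered workflow in the docs, the agent has to invent this sequence itself; with it, the plan is a direct transcription.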

Workflows are task-oriented

Think about what someone would actually want to do with your API, not just what each endpoint does in isolation. The workflow section bridges the gap between "here are the endpoints" and "here's how to get things done."

The good news

Agent-friendly docs aren't worse for humans. They're better. Structured, example-heavy, explicit documentation helps everyone. You're not choosing between audiences. You're raising the bar.

Better for humans too

Every pattern we're optimizing for agents (consistent formatting, realistic examples, exhaustive error docs) is something human developers have wanted all along. Agent-friendly docs are good docs with more discipline.

You won't write these by hand

Seven patterns across every endpoint, with realistic values, complete responses, exhaustive error cases, and multi-step workflows. That's a lot of markdown to write and maintain. And the moment your API changes, the docs drift.

Here's where this course is headed: in Section 3, you'll build a skill that generates all of this automatically from your code. The patterns you learned in this lesson become the template. The skill becomes the engine. You'll never hand-write API docs again.

But before you build, you need to see what good looks like. That's next.

Try It

No code changes in this lesson. Review the seven patterns above. You'll apply them when building the docs endpoint and again when the skill generates docs automatically.

Commit

No code changes to commit.

Done-When

  • You can name the seven patterns that make docs agent-friendly: endpoint signatures, parameter tables, curl examples, complete responses, error documentation, schema section, workflow examples
  • You understand why real values matter more than placeholders in examples
  • You know why exhaustive error documentation is critical for agents
  • You understand that these docs will be generated by a skill, not written by hand

Solution

No code solution for this lesson. The patterns here become the template for the docs you'll implement in 2.2 and the skill you'll build in Section 3.