AI agents — tools like ChatGPT, Claude, Copilot, and Cursor — are becoming a primary way people discover and consume web content.
Agent readability is a set of best practices that make your website accessible to AI agents and assistants. It covers three areas:
- Discovery — Can agents find your pages? (`llms.txt`, sitemaps, `robots.txt`)
- Structure — Can agents parse your pages? (meta tags, headings, structured data, markdown mirrors)
- Context — Can agents understand your content? (skill files, content negotiation, code documentation)
This guide describes what to implement, why it matters, and how to verify each requirement.
- Serve an `llms.txt` file at your site root
- Allow AI bots in `robots.txt`
- Publish a `sitemap.xml` with `<lastmod>` dates
- Publish a `sitemap.md` with headings and links
- Create an `AGENTS.md` with install, config, and usage sections
- Ensure all pages are discoverable from at least one source
- Return HTTP 200 with 0–1 redirects
- Set correct `Content-Type` headers
- Do not set restrictive `x-robots-tag` values
- Include a `<link rel="canonical">` tag
- Add a meta description (50+ characters), `og:title`, `og:description`, and `html lang`
- Add Schema.org / JSON-LD structured data
- Use 3+ section headings (h1–h3) per page
- Maintain a text-to-HTML ratio above 15%
- Include a glossary or terminology link
- Provide markdown mirrors for HTML pages
- Add `<link rel="alternate" type="text/markdown">` to HTML pages
- Return a `Link` header with `rel="canonical"` from markdown endpoints
- Support `Accept: text/markdown` content negotiation
- Include a `## Sitemap` section in markdown pages
- Fence all code blocks with language identifiers
- Link to OpenAPI/Swagger schemas from API reference pages
Your agent readability score measures how well your site meets these requirements:
Only checks with a pass status count toward the numerator. Checks that fail, warn, or error do not.
The total is the sum of all site-wide checks plus all per-page checks across every discovered page. Because per-page checks run on every page, sites with many pages have a larger denominator — a single failing check matters less on a large site, but a systemic issue (such as missing canonical links on every page) compounds significantly.
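The scoring arithmetic described above can be sketched as follows. This is a hypothetical illustration: the check names and the shape of the input are made up, not the API of any real tool.

```python
def readability_score(checks):
    """Compute an agent readability score as a percentage.

    `checks` maps a check identifier to its status string. Only "pass"
    counts toward the numerator; "fail", "warn", and "error" count only
    in the denominator.
    """
    total = len(checks)
    if total == 0:
        return 0.0
    passed = sum(1 for status in checks.values() if status == "pass")
    return round(100 * passed / total, 1)

# Site-wide checks plus per-page checks across every discovered page
checks = {
    "site:llms.txt": "pass",
    "site:robots.txt": "pass",
    "/docs/quickstart:canonical": "fail",
    "/docs/quickstart:meta": "pass",
}
print(readability_score(checks))  # 75.0
```

Note how a single failing per-page check dilutes as the page count grows, while a check that fails on every page keeps the same proportional weight.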
| Score | Rating | Meaning |
|---|---|---|
| 90–100 | Excellent | Highly optimized for AI agents. All critical checks pass. |
| 70–89 | Good | Meets most requirements. Address warnings to improve. |
| 50–69 | Fair | Has gaps. Review failed checks and implement fixes. |
| 0–49 | Needs Improvement | Significant work needed across multiple areas. |
These requirements apply once per site, at the root level.
What: Serve an llms.txt file that lists your documentation pages. This is the primary entry point for AI agents discovering your content.
Why: AI agents look for llms.txt as a machine-readable index of your site's content, similar to how search engines use sitemap.xml. Without it, agents must crawl your site to find pages, which is slower and less reliable.
Requirements:
- Serve the file at one of: `/llms.txt`, `/.well-known/llms.txt`, or `/docs/llms.txt`
- Alternatively, serve `llms-full.txt` at the same paths
- Use `text/plain` as the `Content-Type`
- The file must not be empty
- Listed URLs should use `.md` or `.mdx` extensions, not `.html`
Example:
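A minimal `llms.txt` meeting the requirements above might look like this (the project name and URLs are placeholders):

```markdown
# My Product

> Developer documentation for My Product, a hypothetical example project.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Install and make a first API call
- [API Reference](https://example.com/docs/api.md): Endpoints, parameters, and errors

## Optional

- [Changelog](https://example.com/changelog.md): Release history
```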
How to verify:
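One way to check from the command line (replace `example.com` with your domain):

```shell
# Should return HTTP 200, Content-Type: text/plain, and a non-empty body
curl -sI https://example.com/llms.txt | grep -iE '^HTTP|content-type'
curl -s https://example.com/llms.txt | head
```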
What: Ensure your robots.txt does not block known AI bots.
Why: AI agents respect robots.txt directives. If you block them, your content will not be indexed or cited by AI assistants.
Requirements:
- Do not block `GPTBot`, `ClaudeBot`, `CCBot`, or `Google-Extended`
- Do not disallow `/llms.txt`
- Having no `robots.txt` at all triggers a warning — it is better to explicitly allow access
Example:
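A `robots.txt` that explicitly allows the bots listed above could look like this (the sitemap URL is a placeholder):

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: CCBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```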
How to verify:
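A quick manual check (replace `example.com` with your domain):

```shell
# Confirm no Disallow rules target AI bots or /llms.txt
curl -s https://example.com/robots.txt | grep -iE 'user-agent|disallow'
```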
What: Publish both a sitemap.xml and a sitemap.md to help agents understand your site structure.
Why: XML sitemaps are the standard for search engine crawlers. Markdown sitemaps give AI agents a structured, readable overview of your documentation hierarchy. Publishing both maximizes discoverability.
Requirements for sitemap.xml:
- Serve a valid XML sitemap with `<urlset>` or `<sitemapindex>` containing `<loc>` entries
- Include `<lastmod>` dates so agents know which pages have changed
Requirements for sitemap.md:
- Serve at one of: `/sitemap.md`, `/docs/sitemap.md`, or `/.well-known/sitemap.md`
- Include headings and links that reflect your site's structure
Example sitemap.md:
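A sketch of such a file (section names and paths are placeholders):

```markdown
# Sitemap

## Getting Started

- [Quickstart](/docs/quickstart.md)
- [Installation](/docs/installation.md)

## Guides

- [Configuration](/docs/configuration.md)
- [Deployment](/docs/deployment.md)
```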
How to verify:
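Check both files from the command line (replace `example.com` with your domain):

```shell
# XML sitemap should contain <loc> and <lastmod> entries
curl -s https://example.com/sitemap.xml | grep -cE '<loc>|<lastmod>'
# Markdown sitemap should exist and contain headings and links
curl -s https://example.com/sitemap.md | grep -cE '^#|\]\('
```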
What: Create an AGENTS.md file that gives coding agents direct context about your product — how to install it, configure it, and use it.
Why: Coding agents like Copilot, Claude Code, and Cursor use skill files to understand how to work with your product. A well-written skill file means agents can generate correct code for your users without guessing.
Requirements:
- Serve the file at one of: `/AGENTS.md`, `/agents.md`, `/.well-known/agents.md`, `/docs/AGENTS.md`, `/llms-full.txt`, `/CLAUDE.md`, `/.cursor/rules`, or `/.cursorrules`
- Include at least 2 of the following sections: installation instructions, configuration details, usage examples or code blocks
Example AGENTS.md:
Create a `my-product.config.ts` file in your project root:
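A minimal sketch of such a config file (the file name comes from the example above; the option names below are hypothetical, for illustration only):

```typescript
// my-product.config.ts — hypothetical options, not a real product's API
export default {
  apiKey: process.env.MY_PRODUCT_API_KEY,
  baseUrl: "https://api.example.com",
  timeoutMs: 10_000,
};
```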
What: Ensure every page on your site is reachable from at least one discovery source.
Why: Pages that are not linked from sitemap.xml, llms.txt, sitemap.md, or other pages cannot be found by agents. Orphaned pages are invisible to AI.
Requirements:
- Every page should appear in at least one of: `sitemap.xml`, `llms.txt`, `sitemap.md`, or be reachable via links from other discoverable pages
How to verify: Cross-reference your page count against the URLs listed in your sitemaps and llms.txt. Any page not present in any source is undiscoverable to agents.
These requirements apply to every page on your site.
What: Ensure pages return clean HTTP responses that agents can process without issues.
Why: Agents follow redirects and inspect headers to decide whether to index a page. Broken responses, long redirect chains, or restrictive headers cause agents to skip your content.
Requirements:
- Return HTTP 200 for all live pages
- Limit redirect chains to 0–1 hops (2+ redirects cause failures)
- Set the correct `Content-Type` header:
  - HTML pages: `text/html; charset=UTF-8`
  - Markdown pages: `text/plain; charset=UTF-8`
- Do not include `noindex`, `noai`, or `noimageai` in the `x-robots-tag` response header
How to verify:
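Inspect the response headers directly (replace the URL with one of your pages):

```shell
# -L follows redirects; count the HTTP status lines to spot redirect chains
curl -sIL https://example.com/docs/quickstart | grep -iE '^HTTP|content-type|x-robots-tag'
```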
What: Include proper metadata, structured data, and heading hierarchy so agents can understand each page's content and context.
Why: Meta tags tell agents what a page is about before they read the full content. Schema.org structured data provides machine-readable context like authorship, dates, and breadcrumbs. Headings create a scannable structure that agents use to extract sections relevant to a user's query. A high text-to-HTML ratio ensures the page contains real content rather than framework boilerplate.
Requirements:
Include `<link rel="canonical" href="...">` on every page to tell agents which URL is authoritative.
Include all of the following:
- `<meta name="description" content="...">` (at least 50 characters)
- `<meta property="og:title" content="...">`
- `<meta property="og:description" content="...">`
- `lang` attribute on the `<html>` element
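Together with the canonical link, a page's head might contain (values are placeholders):

```html
<head>
  <link rel="canonical" href="https://example.com/docs/quickstart">
  <meta name="description" content="Install My Product, configure authentication, and make your first API call in five minutes.">
  <meta property="og:title" content="Quickstart">
  <meta property="og:description" content="Install, configure, and make your first API call.">
</head>
```

The `lang` attribute goes on the `<html>` element itself, e.g. `<html lang="en">`.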
Schema.org / JSON-LD
Include a `<script type="application/ld+json">` block with at minimum: title, description, canonical URL, `dateModified`, and `BreadcrumbList`.
Example JSON-LD:
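A sketch covering the minimum fields listed above (types and property names are standard Schema.org vocabulary; the values are placeholders):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "TechArticle",
      "headline": "Quickstart",
      "description": "Install and configure My Product, and make your first API call.",
      "url": "https://example.com/docs/quickstart",
      "dateModified": "2025-01-15"
    },
    {
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Docs", "item": "https://example.com/docs" },
        { "@type": "ListItem", "position": 2, "name": "Quickstart", "item": "https://example.com/docs/quickstart" }
      ]
    }
  ]
}
```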
Use 3 or more headings (h1–h3) per page to create a clear structure. Well-structured pages produce better embeddings and allow agents to extract specific sections.
Maintain a text-to-HTML ratio above 15%. Pages dominated by JavaScript bundles, framework boilerplate, or empty wrappers are harder for agents to parse.
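One rough way to measure this ratio, sketched with Python's standard library (the markup sample is illustrative; real pages should be fetched and measured the same way):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping the contents of script and style tags."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def text_to_html_ratio(html: str) -> float:
    """Length of visible text divided by length of the raw HTML."""
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.parts).strip()
    return len(text) / len(html) if html else 0.0

sample = "<p>Hello, agents</p>"
print(round(text_to_html_ratio(sample), 3))  # 0.65
```

A page dominated by script bundles will score far below the 15% threshold with this kind of measurement.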
Include a link to a glossary or terminology page. This helps agents resolve ambiguous terms in your content.
How to verify:
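A quick spot check against a live page (replace the URL with one of yours):

```shell
# Look for canonical link, meta description, and JSON-LD in the served HTML
curl -s https://example.com/docs/quickstart | grep -oE '<link rel="canonical"[^>]*>|<meta name="description"[^>]*>|application/ld\+json'
```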
What: Provide markdown versions of your HTML pages and support content negotiation so agents can request the format they prefer.
Why: AI agents work natively with markdown. Raw HTML requires parsing and stripping away navigation, headers, footers, and scripts. A markdown mirror gives agents clean, structured content they can process directly — resulting in more accurate citations and better answers.
Requirements:
Markdown mirrors — For every HTML page, provide a corresponding `.md` or `.mdx` version. Include frontmatter metadata:
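For example (the frontmatter field names here are illustrative; use whatever metadata your tooling emits):

```markdown
---
title: Quickstart
description: Install and configure My Product.
canonical: https://example.com/docs/quickstart
---

# Quickstart
```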
Alternate link in HTML — Add a `<link>` tag pointing to the markdown version:
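For example (URL is a placeholder):

```html
<link rel="alternate" type="text/markdown" href="https://example.com/docs/quickstart.md">
```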
Canonical link in markdown responses — When serving markdown files, include a `Link` HTTP header:
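The header points back to the canonical HTML page, for example:

```text
Link: <https://example.com/docs/quickstart>; rel="canonical"
```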
Content negotiation — Return markdown when the client sends an `Accept: text/markdown` header:
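A sketch of the request/response exchange (paths and headers are placeholders):

```text
GET /docs/quickstart HTTP/1.1
Accept: text/markdown

HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Link: <https://example.com/docs/quickstart>; rel="canonical"
```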
Sitemap section — Include a `## Sitemap` heading in each markdown page with a link to `/sitemap.md`:
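For example:

```markdown
## Sitemap

- [Full site overview](/sitemap.md)
```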
How to verify:
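Exercise both the negotiation and the header from the command line (replace the URL with one of your pages):

```shell
# Request the markdown mirror via content negotiation and check the headers
curl -sI -H 'Accept: text/markdown' https://example.com/docs/quickstart | grep -iE 'content-type|^link'
```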
What: Fence all code blocks with language identifiers and link to machine-readable API schemas.
Why: Language-tagged code blocks let agents generate syntactically correct examples. API schema links (OpenAPI, Swagger) give agents the full contract of your API, enabling them to write integration code without guessing endpoints or parameters.
Requirements:
Every `<pre><code>` block should have a `language-*` or `lang-*` class:
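For example:

```html
<pre><code class="language-python">print("hello")</code></pre>
```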
In markdown, always specify the language after the opening fence:
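For example, tag a Python snippet like this:

````markdown
```python
print("hello")
```
````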
For technical documentation sites, an AI-powered analysis can evaluate your content against documentation best practices:
- Voice and tone — Is the writing clear, direct, and consistent?
- Formatting — Are lists, tables, and code examples used effectively?
- Examples — Does the documentation include runnable code samples?
- Units and precision — Are measurements, limits, and thresholds clearly stated?
- Pricing clarity — If applicable, is pricing information unambiguous?
This analysis is opt-in and requires an AI model to evaluate your content. It is most useful for sites with extensive technical documentation where writing quality directly impacts whether agents can extract accurate information.
- llms.txt specification — The standard for providing LLM-readable content
- Schema.org documentation — Structured data vocabulary for the web
- OpenAPI specification — Machine-readable API documentation standard
- robots.txt specification — The standard for web crawler access control