Vercel Agent Code Review

Last updated October 28, 2025

Vercel Agent Code Review is available in Beta on Enterprise and Pro plans

AI Code Review is part of Vercel Agent, a suite of AI-powered development tools. When you open a pull request, it automatically analyzes your changes using multi-step reasoning to catch security vulnerabilities, logic errors, and performance issues.

It generates patches and runs them in secure sandboxes with your real builds, tests, and linters to validate fixes before suggesting them. Only validated suggestions that pass these checks appear in your PR, allowing you to apply specific code changes with one click.

To enable code reviews for your repositories, navigate to the Agent tab of the dashboard.

  1. Click Enable to turn on Vercel Agent.
  2. Under Repositories, choose which repositories to review:
    • All repositories (default)
    • Public only
    • Private only
  3. Under Review Draft PRs, select whether to:
    • Skip draft PRs (default)
    • Review draft PRs
  4. Optionally, configure Auto-Recharge to keep your balance topped up automatically:
    • Set the threshold for When Balance Falls Below
    • Set the amount for Recharge To Target Balance
    • Optionally, add a Monthly Spending Limit
  5. Click Save to confirm your settings.
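
The Auto-Recharge behavior in step 4 can be sketched as a simple rule. The function and type names below are illustrative only, not Vercel's API; only the behavior (top up to the target when the balance falls below the threshold, bounded by an optional monthly limit) comes from the settings above:

```typescript
// Hypothetical sketch of the auto-recharge rule; not Vercel's actual code.
interface AutoRecharge {
  threshold: number;      // "When Balance Falls Below"
  target: number;         // "Recharge To Target Balance"
  monthlyLimit?: number;  // optional "Monthly Spending Limit"
}

// Returns how much to top up, in USD, respecting the optional monthly cap.
function topUpAmount(
  balance: number,
  spentThisMonth: number,
  cfg: AutoRecharge
): number {
  if (balance >= cfg.threshold) return 0;  // balance is healthy, do nothing
  let amount = cfg.target - balance;       // recharge back up to the target
  if (cfg.monthlyLimit !== undefined) {
    // Never spend past the monthly limit.
    amount = Math.min(amount, cfg.monthlyLimit - spentThisMonth);
  }
  return Math.max(amount, 0);
}
```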

Once you've set up Code Review, it will automatically review pull requests in repositories connected to your Vercel projects.

Code Review runs automatically when:

  • A pull request is created
  • A batch of commits is pushed to an open PR
  • A draft PR is created, if you've enabled draft reviews in your settings
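
The trigger rules above amount to a small predicate. This is an illustrative sketch, not Vercel's implementation; the event names are made up for the example:

```typescript
// Hypothetical event names covering the three triggers listed above.
type PrEvent = "opened" | "commits_pushed" | "draft_opened";

// reviewDrafts mirrors the "Review Draft PRs" setting.
function shouldReview(event: PrEvent, reviewDrafts: boolean): boolean {
  switch (event) {
    case "opened":
    case "commits_pushed":
      return true;          // open PRs are always reviewed
    case "draft_opened":
      return reviewDrafts;  // drafts only when enabled in settings
  }
}
```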

When triggered, Code Review analyzes all human-readable files in your codebase, including:

  • Source code files (JavaScript, TypeScript, Python, etc.)
  • Test files
  • Configuration files (YAML files, etc.)
  • Documentation (markdown files, README files)
  • Comments within code
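
A filter over those categories might look like the sketch below. The extension list is illustrative and deliberately incomplete; it is not the set Code Review actually uses:

```typescript
// Hypothetical, non-exhaustive list of human-readable file extensions.
const REVIEWABLE_EXTENSIONS = [".js", ".ts", ".py", ".yaml", ".yml", ".md"];

function isReviewable(path: string): boolean {
  // Binary assets (images, archives, etc.) fall through and return false.
  return REVIEWABLE_EXTENSIONS.some((ext) => path.endsWith(ext));
}
```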

The AI uses your entire codebase as context to understand how your changes fit into the larger system.

Code Review then generates patches, runs them in secure sandboxes, and executes your real builds, tests, and linters. Only validated suggestions that pass these checks appear in your PR.

Code Review automatically detects and applies coding guidelines from your repository. When guidelines are found, they're used during review to ensure feedback aligns with your project's conventions.

Code Review looks for these files in priority order (highest to lowest):

  • OpenAI Codex / universal standard
  • Claude Code instructions
  • GitHub Copilot
  • Cursor rules
  • Cursor (legacy)
  • Windsurf
  • Windsurf (directory)
  • Cline
  • GitHub Copilot workspace
  • Roo Code
  • JetBrains AI Assistant
  • Aider
  • Generic rules
  • Generic agent file

When multiple guideline files exist in the same directory, the highest-priority file is used.

  • Hierarchical: Guidelines from parent directories are inherited. A guideline file at the repository root applies to all files, while one in a subdirectory adds additional context for that directory.
  • Scoped: Guidelines only affect files within their directory subtree. A guideline in one directory won't apply to files in a sibling directory.
  • Nested references: Guidelines can reference other files, for example via relative markdown links. Referenced files are automatically included as context.
  • Size limit: Guidelines are capped at 50 KB total.
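
The hierarchy and scoping rules above can be sketched as a walk from the repository root down to the file's directory. The directory paths and the guideline map here are hypothetical; only the behavior (parents inherited, siblings excluded, 50 KB total cap) comes from the text:

```typescript
const SIZE_LIMIT = 50 * 1024; // 50 KB total cap on combined guidelines

// guidelines maps a directory path ("." for the repo root) to the
// contents of its guideline file. This shape is illustrative only.
function guidelinesFor(
  file: string,
  guidelines: Map<string, string>
): string {
  const dirs = file.split("/").slice(0, -1); // drop the file name
  let combined = "";
  // Walk from the root down, so parent guidelines come first.
  for (let i = 0; i <= dirs.length; i++) {
    const dir = dirs.slice(0, i).join("/") || ".";
    const g = guidelines.get(dir);
    if (g) combined += g + "\n";
  }
  return combined.slice(0, SIZE_LIMIT); // enforce the total size limit
}
```

Note that a sibling directory never appears on the root-to-file path, so its guidelines are naturally excluded.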

Guidelines should focus on project-specific conventions that help the reviewer understand your codebase:

  • Code style preferences not enforced by linters
  • Architecture patterns and design decisions
  • Common pitfalls specific to your project
  • Testing requirements and patterns
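
A minimal guideline file covering those four areas might look like the following. The contents are entirely illustrative; adapt them to your own project's conventions:

```markdown
# Project guidelines

## Code style
- Prefer named exports; reserve default exports for framework entry points.

## Architecture
- All database access goes through the repository layer in `lib/db/`.

## Common pitfalls
- Timestamps are stored as UTC strings; always parse with an explicit timezone.

## Testing
- Every API route needs an integration test alongside its handler.
```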

Guidelines are treated as context, not instructions. The reviewer's core behavior (identifying bugs, security issues, and performance problems) takes precedence over any conflicting guideline content.

Check out Managing Reviews for details on how to customize which repositories get reviewed and monitor your review metrics and spending.

Code Review uses a credit-based system. Each review costs a fixed $0.30 USD plus token costs billed at the Agent's underlying AI provider's rate, with no additional markup. The token cost varies based on how complex your changes are and how much code the AI needs to analyze.
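
As a worked example of that cost formula: the $0.30 base fee is from this page, but the token rate below is made up for illustration; real rates depend on the underlying provider and model.

```typescript
const BASE_FEE = 0.30; // fixed per-review fee in USD (from the docs)

// ratePerMillion is a hypothetical provider rate in USD per 1M tokens.
function reviewCost(tokens: number, ratePerMillion: number): number {
  return BASE_FEE + (tokens / 1_000_000) * ratePerMillion;
}

// A review consuming 200k tokens at $3 per 1M tokens:
// 0.30 + 0.2 * 3 = $0.90
```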

Pro teams can redeem a $100 USD promotional credit when enabling Agent. You can purchase credits and enable Auto-Recharge in the Agent tab of your dashboard. For complete pricing details, credit management, and cost tracking information, see Vercel Agent Pricing.

Code Review doesn't store or train on your data. It only uses LLMs from providers on our subprocessor list, and we have agreements in place that don't allow them to train on your data.
