v0-1.0-md
v0-1.0-md is Vercel's medium-sized composite model in the v0 1.0 family. It has a 128K-token context window and a 32K-token maximum output. Pricing is $3.00 per million input tokens and $15.00 per million output tokens.
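At those rates, per-request cost is easy to estimate. The helper below is a hypothetical sketch derived only from the prices listed above:

```typescript
// Per-million-token prices for v0-1.0-md, from the listing above.
const INPUT_USD_PER_M = 3.0
const OUTPUT_USD_PER_M = 15.0

// Estimate the cost of one request from its token counts.
function requestCostUsd(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_USD_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_USD_PER_M
  )
}

// A worst-case request: full 128K context in, 32K maximum output.
console.log(requestCostUsd(128_000, 32_000)) // ~0.86 USD
```

Output tokens dominate the bill at a 5x higher rate, so capping output length is the main cost lever.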
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'vercel/v0-1.0-md',
  prompt: 'Why is the sky blue?',
})
```

What To Consider When Choosing a Provider
Zero Data Retention
AI Gateway supports Zero Data Retention for this model via direct gateway requests (BYOK is not included). To configure this, check the documentation.
Authentication
AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
The v0 1.5 generation uses Anthropic Sonnet 4 as its base model and reports higher measured error-free generation rates. Decide whether v0-1.0-md's stable behavior matters more for your use case than the newer generation's improvements.
When to Use v0-1.0-md
Best For
Web application scaffolding:
The composite architecture generates React and Next.js code that matches current framework APIs
Rapid prototyping:
Produces deployable code rather than code that needs heavy manual framework fixes
Production code generation:
The AutoFix layer cuts post-generation debugging
Teams on established v0 1.0 workflows:
Depend on consistent behavior from the Sonnet 3.7 base model
Full-stack development:
UI components, API routes, and data-connected features inside the v0 platform
Consider Alternatives When
Highest error-free rate:
v0-1.5-md lists a 93.87% error-free rate versus the 1.0 generation's lower baseline
Non-web development tasks:
v0 models target web frameworks, not general-purpose reasoning
General-purpose coding:
A frontier model like Claude or GPT fits backend, systems, or non-web work better
Newer framework coverage:
The 1.5 generation's retrieval layer tracks newer framework releases
Conclusion
v0-1.0-md applies retrieval, generation, and auto-correction in one pipeline instead of relying on a single model's static training data. For teams on v0 1.0 workflows, it still delivers framework-specific code generation. For new projects, the v0 1.5 generation reports higher error-free rates on the same architecture.
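That retrieve, generate, auto-correct flow can be sketched as a staged pipeline. The stage names below mirror the layers described on this page, but the bodies are placeholder logic, not Vercel's actual internals:

```typescript
// Illustrative three-stage pipeline: each stage name mirrors a layer of
// the v0 architecture, but the bodies are stand-in logic.
type Stage = (input: string) => string

// Layer 1: retrieval augments the prompt with framework documentation.
const retrieve: Stage = (prompt) => `${prompt} [+ retrieved framework docs]`

// Layer 2: the base model (Sonnet 3.7 in v0 1.0) generates code.
const generate: Stage = (augmented) => `/* generated for: ${augmented} */`

// Layer 3: AutoFix corrects framework-specific errors before returning.
const autofix: Stage = (code) => code.replace('generated', 'generated+fixed')

// Run the stages in order over a prompt.
const pipeline = (prompt: string): string =>
  [retrieve, generate, autofix].reduce((out, stage) => stage(out), prompt)

console.log(pipeline('Build a Next.js page'))
```

The point of the composite design is that each layer can change independently: retrieval tracks framework releases while the base model and AutoFix stay fixed.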
FAQ
What is v0-1.0-md's architecture?
Three layers: retrieval-augmented generation for framework documentation, Anthropic Sonnet 3.7 for code generation, and a custom AutoFix model that corrects errors before you see the output.
How does v0-1.0-md differ from v0-1.5-md?
v0-1.5-md uses Anthropic Sonnet 4 and lists a 93.87% error-free generation rate. v0-1.0-md uses Sonnet 3.7. The composite layout is the same; the newer base model and retrieval updates change output quality.
What is vercel-autofixer-01?
vercel-autofixer-01 is a custom model that runs 10 to 40 times faster than models like gpt-4o-mini and gemini-2.5-flash and fixes framework-specific errors in generated code. It's the last step in the pipeline before the response returns.
Can v0-1.0-md handle coding tasks outside web development?
It focuses on web development with React, Next.js, and related stacks. For coding outside that scope, use a general-purpose frontier model.
How do I access v0-1.0-md through AI Gateway?
Add your API key in your AI Gateway project settings and send requests with the model identifier vercel/v0-1.0-md. AI Gateway routes requests and handles failover across the vercel provider.
Can I import an existing project from GitHub?
Yes. The v0 platform supports GitHub import, with environment variables and configuration pulled from Vercel. That applies to both the 1.0 and 1.5 generations.
Which frameworks does v0-1.0-md support?
It targets React and Next.js, with retrieval updated as those frameworks change. It also covers adjacent web pieces (CSS, TypeScript, API routes) in full-stack flows.
How is pricing determined?
Rates are listed on this page. They reflect the providers that route through AI Gateway and shift when providers update their pricing.