Morph V3 Fast

Morph V3 Fast applies code edit suggestions from frontier models to your source files at high throughput. It supports an 81.9K-token input context window and up to 16.4K output tokens. On AI Gateway, it costs $0.80 per million input tokens and $1.20 per million output tokens.

index.ts
import { streamText } from 'ai'

const result = streamText({
  model: 'morph/morph-v3-fast',
  prompt: 'Why is the sky blue?',
})

// Stream the response to stdout as tokens arrive
for await (const textPart of result.textStream) {
  process.stdout.write(textPart)
}
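At those rates, per-request cost is simple arithmetic. A minimal sketch (the token counts below are hypothetical):

```typescript
// Cost estimate at Morph V3 Fast's AI Gateway rates:
// $0.80 per million input tokens, $1.20 per million output tokens.
const INPUT_RATE = 0.8 / 1_000_000
const OUTPUT_RATE = 1.2 / 1_000_000

function estimateCost(inputTokens: number, outputTokens: number): number {
  return inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE
}

// Hypothetical merge: a 20K-token file plus edit snippet in, 8K tokens out.
console.log(estimateCost(20_000, 8_000).toFixed(4)) // → "0.0256"
```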

Frequently Asked Questions

  • What goes in, what comes out?

    Send the original file in <code> tags, an edit snippet in <update> tags with // ... existing code ... markers, and an optional <instruction>; you get back the merged source file. The API follows the OpenAI Chat Completions format.
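    A sketch of that request shape, assuming the Chat Completions format described above (the file contents and instruction are toy examples):

    ```typescript
    // Toy original file and lazy edit snippet.
    const originalFile = `function add(a: number, b: number) {
      return a + b
    }`

    const editSnippet = `// ... existing code ...
    function add(a: number, b: number) {
      return a + b // TODO: handle overflow
    }`

    // The user message wraps each piece in the documented tags.
    const userMessage = [
      '<instruction>Add a TODO comment about overflow</instruction>',
      `<code>${originalFile}</code>`,
      `<update>${editSnippet}</update>`,
    ].join('\n')

    // Standard Chat Completions request body.
    const body = {
      model: 'morph/morph-v3-fast',
      messages: [{ role: 'user', content: userMessage }],
    }
    ```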

  • How does deletion work?

    v3 Fast treats omitted sections as removal. Leave a section out of the edit snippet and don't add a // ... existing code ... marker there.
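    A toy illustration of deletion by omission (the function names are hypothetical):

    ```typescript
    // Original file contains three functions.
    const original = `function keepMe() {}
    function deleteMe() {}
    function alsoKeep() {}`

    // The edit snippet skips deleteMe() entirely and puts no
    // '// ... existing code ...' marker in its place, so the merge
    // removes that function.
    const editSnippet = `function keepMe() {}
    function alsoKeep() {}`
    ```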

  • What planning models pair well?

    Any model that emits lazy edit snippets. The marker pattern isn't proprietary, and mainstream code-generation models already produce it.

  • Will I hit the context limit?

    Unlikely for typical files. The window is 81.9K tokens, so even large single files rarely approach it.
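    A rough pre-flight check against the window, assuming the common ~4 characters per token heuristic (not Morph's actual tokenizer):

    ```typescript
    const CONTEXT_LIMIT = 81_900 // Morph V3 Fast input window, in tokens

    // Crude estimate: ~4 characters per token for typical source code.
    function roughTokenCount(text: string): number {
      return Math.ceil(text.length / 4)
    }

    function fitsInContext(file: string, snippet: string): boolean {
      return roughTokenCount(file) + roughTokenCount(snippet) < CONTEXT_LIMIT
    }

    // A 200 KB file (~50K tokens) plus a small snippet still fits.
    console.log(fitsInContext('x'.repeat(200_000), 'y'.repeat(4_000))) // → true
    ```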

  • How do I know an edit needs the heavier variant?

    Route when merges are wrong. Typical triggers include multi-scope refactors, edits inside heavily repeated patterns, and logic redistribution across functions. Keep v3 Fast as the default otherwise.
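    One way to encode that routing policy. The trigger heuristics are hypothetical, as is the 'morph/morph-v3-large' identifier used here for the heavier variant:

    ```typescript
    interface EditTask {
      scopesTouched: number      // distinct functions/classes the edit spans
      inRepeatedPattern: boolean // edit lands inside heavily repeated code
      movesLogicAcrossFunctions: boolean
    }

    // Default to v3 Fast; escalate only on the triggers listed above.
    function pickModel(task: EditTask): string {
      const needsHeavy =
        task.scopesTouched > 1 ||
        task.inRepeatedPattern ||
        task.movesLogicAcrossFunctions
      return needsHeavy ? 'morph/morph-v3-large' : 'morph/morph-v3-fast'
    }
    ```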

  • Where does the merge sit in the coding-agent pipeline?

    The merge step usually finishes before your planning model returns its next chunk. The bottleneck stays upstream, not in the merge.
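    That overlap can be made explicit in an agent loop: merge chunk N while the planner produces chunk N+1. A sketch with hypothetical stand-ins for both calls:

    ```typescript
    // Hypothetical stand-ins for the planning model and the Morph merge call.
    async function planNextEdit(step: number): Promise<string> {
      return `edit-${step}` // in a real agent this streams from the planner
    }
    async function applyMerge(edit: string): Promise<string> {
      return `merged:${edit}` // in a real agent this calls Morph V3 Fast
    }

    async function run(steps: number): Promise<string[]> {
      const merged: string[] = []
      let pendingMerge: Promise<string> | null = null
      for (let step = 0; step < steps; step++) {
        // Plan the next edit while the previous merge is still in flight.
        const [edit, prev] = await Promise.all([planNextEdit(step), pendingMerge])
        if (prev !== null) merged.push(prev)
        pendingMerge = applyMerge(edit)
      }
      if (pendingMerge) merged.push(await pendingMerge)
      return merged
    }

    run(3).then(out => console.log(out)) // logs the three merged results in order
    ```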

  • Where are list prices for this model?

    Current rates appear on this page. AI Gateway tracks live pricing across each provider that serves the model.