Morph V3 Large
Morph V3 Large applies code edit suggestions to source files with about 98% merge accuracy on complex multi-scope edits. It supports 81.9K tokens of input and 16.4K tokens of output. On AI Gateway, pricing is $0.90 per million input tokens and $1.90 per million output tokens.
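At those rates, per-request cost is easy to estimate. A minimal sketch, with illustrative token counts (the helper name and example figures are assumptions, not part of the API):

```typescript
// Estimate request cost from the listed AI Gateway rates.
const INPUT_PER_MTOK = 0.9   // USD per million input tokens
const OUTPUT_PER_MTOK = 1.9  // USD per million output tokens

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_PER_MTOK
  )
}

// e.g. a merge with 1,000 input tokens and 400 output tokens
const cost = estimateCostUSD(1000, 400) // a fraction of a cent
```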
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'morph/morph-v3-large',
  prompt: 'Why is the sky blue?',
})
```

Frequently Asked Questions
What makes a code edit "complex" enough to warrant v3 Large?
Edits that span multiple functions or scopes, sit near duplicated structures, move logic between blocks, or touch overlapping regions. Simpler merge strategies, including v3 Fast in some cases, fail more often on those patterns.
Can I use both v3 Fast and v3 Large in the same pipeline?
Yes. A common pattern defaults to v3 Fast for speed and falls back to v3 Large when Fast errors or the edit is flagged as complex. Morph's automodel automates that routing.
How does v3 Large compare to using a frontier model for full-file rewrites?
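A minimal sketch of that routing pattern, assuming a caller-supplied `merge` function and hypothetical complexity signals (this is not Morph's actual automodel heuristic):

```typescript
// Hypothetical signals for flagging an edit as structurally complex.
interface EditSignals {
  multiScope: boolean   // edit spans multiple functions or scopes
  overlapping: boolean  // edit touches duplicated or overlapping regions
}

// Pick the model up front based on complexity signals.
function chooseModel(signals: EditSignals): string {
  const complex = signals.multiScope || signals.overlapping
  return complex ? 'morph/morph-v3-large' : 'morph/morph-v3-fast'
}

// Fallback wrapper: try Fast first, retry with Large if Fast errors.
async function applyEdit(
  merge: (model: string) => Promise<string>,
  signals: EditSignals,
): Promise<string> {
  const model = chooseModel(signals)
  if (model === 'morph/morph-v3-large') return merge(model)
  try {
    return await merge('morph/morph-v3-fast')
  } catch {
    return merge('morph/morph-v3-large')
  }
}
```

The pre-routing check avoids paying for a Fast attempt that is likely to fail, while the catch block covers complexity the signals missed.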
v3 Large uses on the order of 700 to 1,400 tokens per edit, while a frontier model rewriting the whole file often needs 3,500 to 4,500 tokens and several seconds. For typical setups, v3 Large stays faster and cheaper on the merge task.
What is the input format?
Same as v3 Fast: the original file in `<code>` tags, an edit snippet in `<update>` tags using `// ... existing code ...` markers, and an optional `<instruction>` tag. The API follows the OpenAI Chat Completions format.
Does the context window of 81.9K tokens matter for code merging?
Rarely in practice. Even large single files stay well under 81.9K tokens. You won't hit this limit in typical merge operations.
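As a concrete illustration of the input format described above, a sketch that assembles the user message (the helper and sample file contents are illustrative; only the tag structure comes from the documented format):

```typescript
// Build the user message for a Morph v3 merge request.
// Tag structure (<instruction>, <code>, <update>) follows the documented
// input format; the helper itself is an illustrative sketch.
function buildMergePrompt(
  originalFile: string,
  editSnippet: string,
  instruction?: string,
): string {
  const parts = [
    instruction ? `<instruction>${instruction}</instruction>` : '',
    `<code>${originalFile}</code>`,
    `<update>${editSnippet}</update>`,
  ]
  return parts.filter(Boolean).join('\n')
}

const prompt = buildMergePrompt(
  'function add(a, b) {\n  return a + b\n}',
  '// ... existing code ...\nfunction add(a: number, b: number) {\n  return a + b\n}',
  'Add type annotations to add()',
)
```

The `// ... existing code ...` markers in the update snippet tell the model which parts of the original file to carry over unchanged.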
Is v3 Large overkill for most edits?
Yes. For routine single-scope work, v3 Fast is enough. v3 Large matters most on the smaller share of edits that are structurally hard and likely to break if merged wrong.
Where are the list prices for this model?
Current rates appear on this page. AI Gateway tracks live pricing across each provider that serves the model.