
Morph V3 Large

morph/morph-v3-large

Morph V3 Large applies code edit suggestions to source files with about 98% merge accuracy on complex multi-scope edits. It supports an 81.9K-token input context and up to 16.4K output tokens. On AI Gateway, pricing is $0.90 per million input tokens and $1.90 per million output tokens.

index.ts
import { streamText } from 'ai'

// Morph is a merge model: send the original file in <code> tags and the
// edit snippet in <update> tags, not a free-form question
const result = streamText({
  model: 'morph/morph-v3-large',
  prompt: '<code>const a = 1</code>\n<update>// ... existing code ...\nconst b = 2</update>',
})

// Stream the merged file as it is generated
for await (const chunk of result.textStream) process.stdout.write(chunk)

What To Consider When Choosing a Provider

  • Zero Data Retention

    AI Gateway does not currently support Zero Data Retention for this model. See the documentation for models that support ZDR.

  • Authentication

    AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.

Like v3 Fast, v3 Large is a single-purpose code merging model. Place it in the file-write layer of a coding agent architecture, not as a standalone assistant.
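A minimal sketch of that file-write layer, under assumed names (applyEdit, MergeFn are illustrative, not part of any API). The actual Morph call, such as generateText with morph/morph-v3-large, is injected as the merge function so the plumbing stays testable:

```typescript
import { readFile, writeFile } from 'node:fs/promises'

// The Morph call (e.g. generateText with 'morph/morph-v3-large') plugs in here.
type MergeFn = (original: string, snippet: string) => Promise<string>

// File-write layer of a coding agent: read the file, let the merge model
// fold the planner's edit snippet into it, then write the result back.
export async function applyEdit(path: string, snippet: string, merge: MergeFn) {
  const original = await readFile(path, 'utf8')
  const merged = await merge(original, snippet) // full merged file from Morph
  await writeFile(path, merged, 'utf8')
  return merged
}
```

Keeping the model call behind an interface also makes it trivial to swap v3 Large in for v3 Fast on hard edits.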

When to Use Morph V3 Large

Best For

  • Multi-scope refactors:

    Edits modify several functions or classes within a single file at once

  • Repetitive pattern edits:

    Similar method signatures, templated code, or generated boilerplate where merge ambiguity is high

  • High-cost merge failures:

    Pipelines where a failed merge triggers broken CI, rollback, or manual review

  • Fast variant fallback:

    Fallback for edit patterns where v3 Fast has produced incorrect output

Consider Alternatives When

  • Simple single-scope edits:

v3 Fast handles these at about twice the speed for lower cost

  • Hands-off routing:

    Morph's auto routing picks the model automatically

  • General-purpose model:

    You need a code generation or chat model rather than a merge tool

Conclusion

Morph V3 Large reduces bad merges on complex edits that break search-and-replace, trip faster models, or introduce subtle bugs. Use it when merge correctness has clear downstream cost. For simple edits, v3 Fast stays the cheaper default.

FAQ

What kinds of edits does v3 Large handle better than simpler merge strategies?

Edits that span multiple functions or scopes, sit near duplicated structures, move logic between blocks, or touch overlapping regions. Simpler merge strategies, including v3 Fast in some cases, fail more often on those patterns.

Can I combine v3 Fast and v3 Large in one pipeline?

Yes. A common pattern defaults to v3 Fast for speed and falls back to v3 Large when Fast errors or the edit is flagged complex. Morph's auto model automates that routing.
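That routing can be sketched as follows; mergeWithFallback and Merge are hypothetical names, and the per-model merge call is injected:

```typescript
// Hypothetical fallback router: try v3 Fast first, escalate to v3 Large
// when Fast errors or the edit is flagged complex up front.
type Merge = (model: string, original: string, snippet: string) => Promise<string>

export async function mergeWithFallback(
  merge: Merge,
  original: string,
  snippet: string,
  isComplex = false, // e.g. the edit spans multiple scopes
): Promise<string> {
  if (isComplex) return merge('morph/morph-v3-large', original, snippet)
  try {
    return await merge('morph/morph-v3-fast', original, snippet)
  } catch {
    // Fast failed: retry once with the Large variant
    return merge('morph/morph-v3-large', original, snippet)
  }
}
```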

How does v3 Large compare in cost to a frontier model rewriting the whole file?

v3 Large uses on the order of 700 to 1,400 tokens per edit. A frontier model that rewrites the whole file often needs 3,500 to 4,500 tokens and several seconds. For typical setups, v3 Large stays faster and cheaper on the merge task.
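The arithmetic at this page's rates ($0.90 and $1.90 per million input and output tokens); the 1,000-in / 400-out split is a hypothetical edit within the range above:

```typescript
// Back-of-envelope per-edit cost at this page's AI Gateway rates.
const INPUT_USD_PER_MTOK = 0.9
const OUTPUT_USD_PER_MTOK = 1.9

export function editCostUsd(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_USD_PER_MTOK + (outputTokens / 1e6) * OUTPUT_USD_PER_MTOK
}

// A hypothetical 1,400-token edit (1,000 in, 400 out) costs a fraction of a cent
const perEdit = editCostUsd(1000, 400) // 0.0009 + 0.00076 = 0.00166 USD
```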

What prompt format does v3 Large expect?

Same as v3 Fast: an original file in <code> tags, an edit snippet in <update> tags using // ... existing code ... markers, and an optional <instruction> tag. The API follows the OpenAI Chat Completions format.
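A sketch of assembling that prompt shape; buildMergePrompt is an illustrative helper, not part of Morph's API, and placing the instruction first is this sketch's assumption:

```typescript
// Assemble the documented prompt shape: optional <instruction>, the
// original file in <code>, and the edit snippet in <update>.
export function buildMergePrompt(
  original: string,
  update: string,
  instruction?: string,
): string {
  const parts = [`<code>${original}</code>`, `<update>${update}</update>`]
  if (instruction) parts.unshift(`<instruction>${instruction}</instruction>`)
  return parts.join('\n')
}
```

The resulting string goes in as the user message of a standard Chat Completions request.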

Will I hit the 81.9K-token input limit?

Rarely in practice. Even large single files stay well under 81.9K tokens, so typical merge operations never approach the limit.

Is v3 Fast enough for most edits?

Yes. For routine single-scope work, v3 Fast is enough. v3 Large matters most on the smaller share of edits that are structurally hard and likely to break if merged wrong.

Where can I find current pricing?

Current rates appear on this page. AI Gateway tracks live pricing across each provider that serves the model.