You send the original file, an edit snippet with `// ... existing code ...` markers, and an optional instruction. You get back the merged file.
Live throughput metrics appear on this page. Each edit uses on the order of 700 to 1,400 tokens, compared to 3,500 to 4,500 for a frontier model rewriting the whole file. The context window is 81.9K tokens; max output is 16.4K tokens.
Routine edits merge reliably: parameter additions, function body swaps, line insertions, and deletions. Harder cases, like logic redistribution across scopes or edits buried in duplicated structures, belong to the heavier variant or Morph's auto router.
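A routine case like a parameter addition looks like this as a lazy edit snippet: only the changed function is spelled out, and every untouched region collapses into a marker line. A minimal sketch (the `greet` function and its contents are invented for illustration):

```typescript
// A lazy edit snippet for a parameter addition. Only the rewritten
// function appears; `// ... existing code ...` stands in for the rest
// of the file, which the apply model carries over unchanged.
const editSnippet = [
  "// ... existing code ...",
  "function greet(name: string, excited = false): string {",
  '  return "Hello, " + name + (excited ? "!!" : "!");',
  "}",
  "// ... existing code ...",
].join("\n");

console.log(editSnippet);
```

The snippet is what the planning model emits; v3 Fast merges it against the original file.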
```
Planning model --> edit snippet --> v3 Fast --> merged file --> disk
```
Any planning model that outputs lazy edit snippets works. Many tools use the same `// ... existing code ...` pattern. Drop v3 Fast into the file-write step. Your planning model stays the bottleneck, not the merge. Product details and benchmarks appear on https://morphllm.com/.
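The file-write step reduces to one chat-completion call. A sketch of the integration, with the caveat that the endpoint URL, the model id `morph-v3-fast`, and the `<instruction>`/`<code>`/`<update>` message format are assumptions drawn from Morph's public docs; verify against current documentation before relying on them:

```typescript
// Build the single user message Morph's OpenAI-compatible apply
// endpoint expects (format assumed from Morph's docs).
function buildApplyMessage(
  instruction: string,
  originalFile: string,
  editSnippet: string,
): string {
  return (
    `<instruction>${instruction}</instruction>\n` +
    `<code>${originalFile}</code>\n` +
    `<update>${editSnippet}</update>`
  );
}

// The call itself is a standard chat completion (network, so only
// sketched here):
//
// const res = await fetch("https://api.morphllm.com/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.MORPH_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify({
//     model: "morph-v3-fast",
//     messages: [
//       { role: "user", content: buildApplyMessage(instruction, file, snippet) },
//     ],
//   }),
// });
// const merged = (await res.json()).choices[0].message.content;
// // ...write `merged` to disk in place of the original file.

const msg = buildApplyMessage(
  "Add an excited flag",
  "function greet() {}",
  "// ... existing code ...",
);
console.log(msg);
```

Because the interface is OpenAI-compatible, any existing chat-completions client can be pointed at it by swapping the base URL and model name.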