Mercury Coder Small Beta
Mercury Coder Small Beta is Inception's compact diffusion coding model. It scores 90.0 on HumanEval and 84.8 on fill-in-the-middle (FIM) benchmarks.
```typescript
import { streamText } from 'ai'

const result = streamText({
  model: 'inception/mercury-coder-small',
  prompt: 'Why is the sky blue?',
})
```
Frequently Asked Questions
How does Mercury Coder Small Beta's diffusion approach differ from standard code models?
It generates a full draft, then refines all token positions in parallel over iterative passes. Standard code models generate tokens left to right, one at a time. This parallel approach enables higher throughput on the same hardware.
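The difference in generation strategy can be sketched with a toy simulation. This is not Inception's actual algorithm: it simply contrasts emitting one token per step against resolving half of the remaining masked positions in each parallel pass, which is why the diffusion-style loop finishes in fewer passes.

```typescript
type Token = string
const MASK = '<mask>'

// Hypothetical target sequence the model is converging toward.
const target: Token[] = ['function', 'add(a,', 'b)', '{', 'return', 'a+b', '}']

// Autoregressive baseline: one token emitted per step, left to right.
function autoregressiveSteps(seq: Token[]): number {
  const out: Token[] = []
  let steps = 0
  while (out.length < seq.length) {
    out.push(seq[out.length]) // exactly one new token per step
    steps++
  }
  return steps
}

// Diffusion-style sketch: start from a fully masked draft and refine ALL
// positions in parallel each pass; here each pass resolves half of the
// remaining masks (a stand-in for iterative denoising).
function diffusionPasses(seq: Token[]): number {
  let draft: Token[] = seq.map(() => MASK)
  let passes = 0
  while (draft.includes(MASK)) {
    const masked = draft
      .map((t, i) => (t === MASK ? i : -1))
      .filter((i) => i >= 0)
    const resolveNow = new Set(masked.slice(0, Math.ceil(masked.length / 2)))
    draft = draft.map((t, i) => (resolveNow.has(i) ? seq[i] : t)) // parallel update
    passes++
  }
  return passes
}
```

With a 7-token target, the autoregressive loop takes 7 steps while the halving schedule converges in 3 passes, which is the intuition behind the throughput claim.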
What is Mercury Coder Small Beta's fill-in-the-middle score?
84.8 on FIM benchmarks. FIM measures how well a model generates code that fits between an existing prefix and suffix, which maps to editor autocomplete.
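An editor-autocomplete FIM request is built from the code before and after the cursor. The sketch below is a hypothetical request shape: the prefix/suffix field names and the helper are assumptions for illustration, not Inception's documented API.

```typescript
// Hypothetical FIM request shape (assumption; check Inception's API docs
// for the actual fill-in-the-middle request format).
interface FimRequest {
  model: string
  prefix: string // code before the cursor
  suffix: string // code after the cursor
}

// Illustrative helper: pack the editor's surrounding code into a request.
function buildFimRequest(before: string, after: string): FimRequest {
  return {
    model: 'inception/mercury-coder-small',
    prefix: before,
    suffix: after,
  }
}

const req = buildFimRequest(
  'function clamp(x: number, lo: number, hi: number) {\n  return ',
  ';\n}\n',
)
```

The model's job is then to produce the span that fits between `prefix` and `suffix`, which is exactly what FIM benchmarks measure.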
How does Mercury Coder Small Beta perform on HumanEval?
90.0 on HumanEval in Inception's published Mercury Coder tables.
What throughput does Mercury Coder Small Beta achieve?
Live throughput metrics appear on this page.
Is Mercury Coder Small Beta suitable for multi-language coding tasks?
Yes. It scores 76.2 on MultiPL-E, which covers multiple programming languages beyond Python; its strongest results are on Python-centric benchmarks.
How does Mercury Coder Small Beta relate to Mercury 2?
Mercury Coder Small Beta is a smaller, coding-focused model from an earlier generation of the Mercury diffusion family. Mercury 2 is a later, broader reasoning model with a larger context window and tunable reasoning depth.
Where are the benchmark numbers published?
Inception published HumanEval, FIM, MBPP, and MultiPL-E figures for Mercury Coder in its Mercury announcement. See https://platform.inceptionlabs.ai.
What does Mercury Coder Small Beta cost?
Pricing appears on this page and updates as providers adjust their rates. AI Gateway routes traffic through the configured provider.