FLUX.2 [flex]
FLUX.2 [flex] is Black Forest Labs' developer-tunable image generation model. It exposes direct control over inference steps and guidance scale, letting you trade typography accuracy and fine detail against generation speed within a single model.
```typescript
import { experimental_generateImage as generateImage } from 'ai';

const result = await generateImage({
  model: 'bfl/flux-2-flex',
  prompt: 'A red balloon on a wooden table.',
});
```
What To Consider When Choosing a Provider
Zero Data Retention
AI Gateway does not currently support Zero Data Retention for this model. See the documentation for models that support ZDR.
Authentication
AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
Because FLUX.2 [flex] exposes the steps parameter, you can run a quick low-step pass to validate a composition before committing to a full-quality render, which cuts iteration cost during prompt development.
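The draft-then-final workflow above can be sketched as a small request builder. Note the `providerOptions` shape (a `bfl` key carrying `steps`) is an assumption about how the gateway forwards the parameter; check your gateway's provider-options documentation for the exact form.

```typescript
// Hypothetical two-pass request builder for FLUX.2 [flex].
// The `bfl.steps` providerOptions shape is an assumption, not confirmed API.
type FlexRequest = {
  model: string;
  prompt: string;
  providerOptions: { bfl: { steps: number } };
};

function flexRequest(prompt: string, steps: number): FlexRequest {
  return {
    model: 'bfl/flux-2-flex',
    prompt,
    providerOptions: { bfl: { steps } },
  };
}

// Low-step draft to validate composition, then a full-quality render.
// Each object can be spread into experimental_generateImage(...).
const draft = flexRequest('A red balloon on a wooden table.', 6);
const final = flexRequest('A red balloon on a wooden table.', 50);
```

Inspect the draft output, adjust the prompt, and only then pay for the 50-step render.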
When to Use FLUX.2 [flex]
Best For
Configurable quality-speed tradeoff:
Single model that supports draft previews at 6 steps and final renders at 50 steps
Typography accuracy:
Applications like UI mockup generation or infographic creation where fine text legibility matters
Iterative prompt development:
Rapid low-step passes validate composition before committing to full-quality generation
Quality slider interfaces:
Developer tools that expose a quality control mapped to inference step count
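A quality-slider interface can be reduced to a linear mapping from a UI value onto the step range. This is a minimal sketch assuming the 6-to-50 step range mentioned above; tune the bounds for your application.

```typescript
// Map a quality slider in [0, 1] onto an inference step count.
// MIN_STEPS/MAX_STEPS reflect the draft (6) and final (50) figures
// cited in this page; treat them as tunable assumptions.
const MIN_STEPS = 6;
const MAX_STEPS = 50;

function qualityToSteps(quality: number): number {
  const q = Math.min(1, Math.max(0, quality)); // clamp out-of-range input
  return Math.round(MIN_STEPS + q * (MAX_STEPS - MIN_STEPS));
}

// qualityToSteps(0)   -> 6  (fast draft)
// qualityToSteps(0.5) -> 28 (balanced)
// qualityToSteps(1)   -> 50 (max quality)
```

The returned value can then be passed as the model's steps parameter on each request.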
Consider Alternatives When
Maximum automatic quality:
FLUX.2 Pro delivers this with tuned defaults, so you don't have to manage inference parameters yourself
Image inpainting:
FLUX.1 Fill Pro is purpose-built for masked region fill tasks
Real-time generation:
FLUX.2 Klein is optimized for sub-second interactive speed
Conclusion
FLUX.2 [flex] gives you direct control over inference steps and guidance scale. Choose it when your application needs to tune the quality-speed tradeoff at runtime rather than commit to a fixed output tier. Its strength in text rendering and fine detail makes it especially useful for structured visual content like UI mockups and infographics.
FAQ
How does the steps parameter affect output?
Flex exposes the steps parameter, which controls the number of diffusion inference steps. Fewer steps produce faster outputs with softer details; more steps yield sharper typography and finer image detail. This runtime tuning is Flex's core differentiator within the FLUX.2 family.
Does FLUX.2 [flex] support image editing?
Yes. All FLUX.2 variants, including Flex, support text-driven image editing with multiple reference images in a single model.
How many reference images can I use?
Up to 10 simultaneous reference images. Use them to keep character, product, and style aligned across generations.
What is the maximum output resolution?
Up to 4 megapixels. All FLUX.2 models share this resolution ceiling for both generation and editing.
When should I use Flex instead of FLUX.2 Pro?
Use Flex when you need explicit control over the steps parameter to tune quality vs. speed at request time. Use FLUX.2 Pro when you want maximum automatic image quality without managing inference parameters.
Is FLUX.2 [flex] purely an image generation model?
Yes. FLUX.2 [flex] is a dedicated image generation model (a rectified-flow transformer), not a multimodal large language model (LLM). It doesn't process conversation history and has no context window for chat. Use generateImage from the AI SDK to send requests.
How is FLUX.2 [flex] priced through AI Gateway?
Pricing appears on this page and updates as providers adjust their rates. AI Gateway routes traffic through the configured provider.