Seedance v1.0 Lite Image-to-Video

bytedance/seedance-v1.0-lite-i2v

Seedance v1.0 Lite Image-to-Video animates a still image into video. The source photograph anchors visual identity while a text prompt directs motion, camera work, and scene evolution. It's a cost-optimized path from static asset to moving content.

index.ts

```ts
import { experimental_generateVideo as generateVideo } from 'ai';

// Seedance Lite is an image-to-video model, so a real request also carries a
// source image input; see the AI Gateway model docs for the exact field.
const result = await generateVideo({
  model: 'bytedance/seedance-v1.0-lite-i2v',
  prompt: 'A serene mountain lake at sunrise.',
});
```

What To Consider When Choosing a Provider

  • Zero Data Retention

    AI Gateway does not currently support Zero Data Retention for this model. See the documentation for models that support ZDR.

  • Authentication

    AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
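In practice this means exporting a single gateway credential rather than one key per provider; the snippet below assumes the `AI_GATEWAY_API_KEY` environment variable name, so verify it against your deployment's documentation:

```shell
# Authenticate AI Gateway requests with one API key (no per-provider keys).
# Variable name assumed from AI Gateway conventions; verify for your setup.
export AI_GATEWAY_API_KEY="your-gateway-api-key"
```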

Before you scale an image-to-video pipeline to production, verify that your reference images meet the model's expected input format and resolution; higher-resolution sources usually preserve identity better in the output. Also compare provider rates against your expected generation volume.
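One way to enforce that input check is a small pre-flight guard before enqueueing jobs; the 720 px short-side threshold below is an illustrative assumption, not a documented model limit:

```typescript
// Pre-flight guard for an image-to-video pipeline: reject source images
// whose short side falls below a chosen threshold before submitting a job.
// The 720 px default is an assumption for illustration, not a model spec.
interface ImageMeta {
  width: number;
  height: number;
}

function meetsMinimumResolution(img: ImageMeta, minShortSide = 720): boolean {
  return Math.min(img.width, img.height) >= minShortSide;
}
```

Running a guard like this ahead of submission avoids paying for generations that are likely to lose subject identity.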

When to Use Seedance v1.0 Lite Image-to-Video

Best For

  • E-commerce product animation:

    Turn approved catalog photography into short video clips that show the product in motion

  • Social media content:

    Convert an existing brand image or illustration into a short-form video for platform distribution

  • Animation direction exploration:

    Test different prompts on a single source image to see how the same photograph can move in different ways

  • Marketing asset pipelines:

    Generate video variants from a locked hero image without commissioning new shoots or renders

Consider Alternatives When

  • No reference image:

    Use a text-to-video (T2V) model when the video should come entirely from a text description

  • Cinematic output:

    Seedance 1.0 Pro provides a higher motion-fidelity ceiling with advanced directorial controls

  • Video-based source:

    Reference-to-video models are designed for workflows where the source material is a video clip rather than a still image

Conclusion

Seedance v1.0 Lite Image-to-Video turns the still image you already have into the video you need. By anchoring on a source photograph rather than building from text alone, it preserves visual identity with less prompt engineering.

FAQ

What image formats does the model accept?

The model accepts standard raster image formats. Higher-resolution source images provide more visual detail for the model to preserve during animation, so use the highest-quality version of your source photograph.

How do the source image and the text prompt divide responsibilities?

The image defines what the scene looks like: colors, subjects, and composition. The text prompt tells the model what should happen, including how the subject moves, where the camera goes, and what changes over time. The image anchors visual identity while the prompt directs temporal evolution.
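That split can be made explicit when assembling a request; `buildI2VRequest` and its field names below are a hypothetical helper for illustration, not an SDK or gateway API:

```typescript
// Sketch: the image fixes appearance, the prompt scripts motion.
// buildI2VRequest and its field names are illustrative assumptions.
interface I2VRequest {
  model: string;
  imageUrl: string;
  prompt: string;
}

function buildI2VRequest(imageUrl: string, motionPrompt: string): I2VRequest {
  return {
    model: 'bytedance/seedance-v1.0-lite-i2v',
    imageUrl, // anchors colors, subjects, and composition
    prompt: motionPrompt, // directs movement, camera work, and scene change
  };
}
```

Keeping identity (image) and motion (prompt) in separate fields makes it easy to sweep many motion prompts over one locked source asset.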

Can I use illustrations or renders instead of photographs?

Yes. The model works with any raster image input. Photographs, digital illustrations, 3D renders, and graphic designs can all serve as source material. Output quality depends on how much motion-relevant detail the source provides.

What happens if the source image has a busy background?

The model will attempt to animate the entire scene, including background elements. When the subject is the focus, images with simpler or more uniform backgrounds tend to produce cleaner, more controlled motion in the output.

How long can generated clips be?

Clips range from two to 12 seconds at 24 fps. For most social and e-commerce uses, five- to eight-second clips provide enough time to showcase motion without losing viewer attention.
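Since output runs at 24 fps within a two- to 12-second window, any requested duration maps to a fixed frame budget; the clamping helper below is an illustrative sketch, not a gateway parameter:

```typescript
// Clip duration is bounded to 2-12 seconds and rendered at 24 fps,
// so a requested duration implies a fixed frame count after clamping.
// This helper is illustrative; it is not an SDK or gateway API.
const FPS = 24;
const MIN_SECONDS = 2;
const MAX_SECONDS = 12;

function frameCount(durationSeconds: number): number {
  const clamped = Math.min(MAX_SECONDS, Math.max(MIN_SECONDS, durationSeconds));
  return Math.round(clamped * FPS);
}
```

For example, a five-second social clip budgets 120 frames, while requests outside the window clamp to the nearest bound.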

How does Lite compare with Seedance 1.0 Pro I2V?

Lite prioritizes speed and cost efficiency for volume work and iteration. Pro I2V targets a higher motion-fidelity ceiling and suits hero or broadcast deliverables.