
Video Generation Quickstart

Last updated February 19, 2026

This quickstart walks you through generating your first video with AI Gateway. Supported models include Veo, Kling, Wan, and Grok Imagine Video.

Video generation requires AI SDK v6. Check your ai package version with npm list ai and upgrade if needed.
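
For example, to check and upgrade with npm (substitute your package manager of choice; ai@6 resolves to the latest 6.x release):

Terminal
npm list ai
npm install ai@6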

  1. Create a new directory and initialize a Node.js project:

    Terminal
    mkdir ai-video-demo
    cd ai-video-demo
    pnpm init
  2. Install AI SDK v6 and development dependencies:

    Terminal
    # npm
    npm install ai dotenv @types/node tsx typescript
    # yarn
    yarn add ai dotenv @types/node tsx typescript
    # pnpm
    pnpm add ai dotenv @types/node tsx typescript
    # bun
    bun add ai dotenv @types/node tsx typescript
  3. Go to the AI Gateway API Keys page in your Vercel dashboard and click Create key to generate a new API key.

    Create a .env.local file and save your API key:

    .env.local
    AI_GATEWAY_API_KEY=your_ai_gateway_api_key
  4. Create an index.ts file:

    index.ts
    import { experimental_generateVideo as generateVideo } from 'ai';
    import fs from 'node:fs';
    import { config } from 'dotenv';
     
    // dotenv reads only .env by default, so point it at .env.local
    config({ path: '.env.local' });
     
    async function main() {
      const result = await generateVideo({
        model: 'google/veo-3.1-generate-001',
        prompt: 'A serene mountain landscape at sunset with clouds drifting by',
        aspectRatio: '16:9',
        duration: 8,
      });
     
      // Save the generated video
      fs.writeFileSync('output.mp4', result.videos[0].uint8Array);
     
      console.log('Video saved to output.mp4');
    }
     
    main().catch(console.error);

    Run your script:

    Terminal
    pnpm tsx index.ts

    Video generation can take several minutes. If you hit timeout issues, see extending timeouts for Node.js; a client-side alternative is sketched after these steps.

    The generated video will be saved as output.mp4 in your project directory.
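
Some models can return more than one clip per call. As the index.ts example above indicates, result.videos is an array, so you can write each clip out in turn; a minimal fragment continuing index.ts:

// Save every video the model returned
result.videos.forEach((video, index) => {
  fs.writeFileSync(`output-${index}.mp4`, video.uint8Array);
});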
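
To cap the wait on the client side rather than extend it, you can try passing an abort signal. This is a sketch under an assumption: that experimental_generateVideo accepts the abortSignal option the way other AI SDK core functions such as generateText do.

const result = await generateVideo({
  model: 'google/veo-3.1-generate-001',
  prompt: 'A serene mountain landscape at sunset with clouds drifting by',
  // Assumption: abortSignal is supported here as in generateText.
  // Give up after 10 minutes instead of waiting indefinitely.
  abortSignal: AbortSignal.timeout(10 * 60 * 1000),
});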

Video models vary in their input formats and required parameters. Some accept buffers while others require URLs. Always check the Video Generation docs for model-specific requirements.

Image to Video

Transform a single image into a video by adding motion. The model animates the image, so the image itself becomes the content of the video.

image-to-video.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import fs from 'node:fs';
import { config } from 'dotenv';
 
config({ path: '.env.local' }); // dotenv reads only .env by default
 
const result = await generateVideo({
  model: 'alibaba/wan-v2.6-i2v',
  prompt: {
    image: 'https://example.com/your-image.png',
    text: 'The scene slowly comes to life with gentle movement',
  },
  duration: 5,
});
 
fs.writeFileSync('output.mp4', result.videos[0].uint8Array);

First and Last Frame

Generate a video that transitions between a starting and an ending image. The model interpolates the motion between them.

first-last-frame.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import fs from 'node:fs';
import { config } from 'dotenv';
 
config({ path: '.env.local' }); // dotenv reads only .env by default
 
const firstFrame = fs.readFileSync('start.png');
const lastFrame = fs.readFileSync('end.png');
 
const result = await generateVideo({
  model: 'klingai/kling-v2.6-i2v',
  prompt: {
    image: firstFrame,
    text: 'Smooth transition between the two scenes',
  },
  providerOptions: {
    klingai: {
      imageTail: lastFrame,
      mode: 'pro',
    },
  },
});
 
fs.writeFileSync('output.mp4', result.videos[0].uint8Array);

Reference to Video

Generate a new video scene featuring characters or content from reference media. References can be images or videos that show the model what your characters look like.

reference-to-video.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import fs from 'node:fs';
import { config } from 'dotenv';
 
config({ path: '.env.local' }); // dotenv reads only .env by default
 
const result = await generateVideo({
  model: 'alibaba/wan-v2.6-r2v',
  prompt: 'character1 and character2 have a friendly conversation in a cozy cafe',
  resolution: '1920x1080',
  duration: 4,
  providerOptions: {
    alibaba: {
      // References can be images or videos
      referenceUrls: [
        'https://example.com/cat.png',
        'https://example.com/dog.png',
      ],
      shotType: 'single',
    },
  },
});
 
fs.writeFileSync('output.mp4', result.videos[0].uint8Array);

URL-Based Inputs

Some video models require URLs instead of raw file data for image or video inputs. You can use Vercel Blob to host your media files.

  1. Go to the Vercel dashboard
  2. Select your project (or create one)
  3. Click Storage in the top navigation
  4. Click Create Database and select Blob
  5. Follow the prompts to create your blob store
  6. Copy the BLOB_READ_WRITE_TOKEN to your .env.local file
.env.local
AI_GATEWAY_API_KEY=your_ai_gateway_api_key
BLOB_READ_WRITE_TOKEN=your_blob_token

Install the Vercel Blob package:

Terminal
pnpm add @vercel/blob

Then upload your image to Blob and pass the returned URL to the model:

url-input.ts
import { experimental_generateVideo as generateVideo } from 'ai';
import { put } from '@vercel/blob';
import fs from 'node:fs';
import { config } from 'dotenv';
 
config({ path: '.env.local' }); // dotenv reads only .env by default
 
// Upload image to Vercel Blob
const imageBuffer = fs.readFileSync('input.png');
const { url: imageUrl } = await put('input.png', imageBuffer, {
  access: 'public',
});
 
const result = await generateVideo({
  model: 'klingai/kling-v2.6-i2v',
  prompt: {
    image: imageUrl, // Pass URL instead of buffer
    text: 'The scene slowly comes to life with gentle movement',
  },
  providerOptions: {
    klingai: {
      mode: 'std',
    },
  },
});
 
fs.writeFileSync('output.mp4', result.videos[0].uint8Array);
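
If the upload was only needed as model input, you can delete it once generation finishes. A short sketch continuing url-input.ts, using the del helper from @vercel/blob:

import { del } from '@vercel/blob';
 
// Remove the temporary upload now that the video has been saved
await del(imageUrl);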

See the Vercel Blob docs for more details on uploading and managing files.

For more details, see the Video Generation Capabilities docs.

