
Vercel Replicate Integration

Learn how to integrate Replicate with Vercel.

Replicate provides a platform for accessing and deploying a wide range of open-source artificial intelligence models. These models span various AI applications such as image and video processing, natural language processing, and audio synthesis. With the Vercel Replicate integration, you can incorporate these AI capabilities into your applications, enabling advanced functionalities and enhancing user experiences.

You can use the Vercel and Replicate integration to power a variety of AI applications, including:

  • Content generation: Use Replicate for generating text, images, and audio content in creative and marketing applications (see the sketch after this list)
  • Image and video processing: Use Replicate in applications for image enhancement, style transfer, or object detection
  • NLP and chatbots: Use Replicate's language processing models in chatbots and natural language interfaces
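
As a sketch of the content generation use case above, the example below generates an image on the server with the replicate Node.js client (installed later in this guide). The model identifier and version placeholder are assumptions rather than part of this guide: copy the exact owner/model:version string from the model's page on Replicate. The REPLICATE_API_KEY name mirrors the credential used in the route handler later in this guide.

    // generate-image.ts: a minimal sketch; model identifier and version are placeholders
    import Replicate from 'replicate';

    const replicate = new Replicate({
      // Credential name assumed to match the route handler later in this guide
      auth: process.env.REPLICATE_API_KEY || '',
    });

    async function generateImage(prompt: string) {
      // Replace '<version-hash>' with the version listed on the SDXL model page
      const output = await replicate.run('stability-ai/sdxl:<version-hash>', {
        input: { prompt },
      });
      return output; // for SDXL this is typically an array of image URLs
    }

    generateImage('A watercolor painting of a lighthouse at dawn')
      .then(console.log)
      .catch(console.error);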

Replicate models cover a broad spectrum of AI applications, ranging from image and video processing to natural language processing and audio synthesis. Models available through the integration include:

  • Incredibly Fast Whisper (Audio): whisper-large-v3, incredibly fast, powered by Hugging Face Transformers
  • bge-large-en-v1.5 (Chat): BAAI's bge-large-en-v1.5 for embedding text sequences
  • Blip-2 (Image): A large language model that answers questions about images
  • GFPGAN (Image): Practical face restoration algorithm for old photos or AI-generated faces
  • Insanely Fast Whisper (Audio): whisper-large-v3, insanely fast, powered by Hugging Face Transformers
  • Llama-2-70b-Chat (Chat): A 70 billion parameter language model from Meta, fine-tuned for chat completions
  • llama-3-70b-instruct (Chat): A 70 billion parameter language model from Meta, fine-tuned for chat completions
  • llama-3-8b-instruct (Chat): An 8 billion parameter language model from Meta, fine-tuned for chat completions
  • LLaVA v1.5, Large Language and Vision Assistant (Image): Visual instruction tuning towards large language and vision models with GPT-4 level capabilities
  • Mixtral-8x7B-Instruct-v0.1 (Chat): A pretrained generative Sparse Mixture of Experts tuned to be a helpful assistant
  • MusicGen (Audio): Generate music from a prompt or melody
  • SDXL (Image): A text-to-image generative AI model that creates beautiful images
  • XTTS v2 (Audio): Coqui XTTS-v2, multilingual text-to-speech voice cloning
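
The guide below streams output from a chat model with the prediction API; for comparison, the sketch here runs one of the listed models, Llama-2-70b-Chat, without streaming by creating a prediction and waiting for it to finish. The version hash is a placeholder and REPLICATE_API_KEY mirrors the credential name used later in this guide; check the model's page on Replicate for its exact version and input schema.

    // llama-prediction.ts: a hedged sketch using the prediction API without streaming
    import Replicate from 'replicate';

    const replicate = new Replicate({
      auth: process.env.REPLICATE_API_KEY || '', // credential name assumed
    });

    async function complete(prompt: string) {
      // Replace '<llama-2-70b-chat-version-hash>' with the version shown on the model page
      const prediction = await replicate.predictions.create({
        version: '<llama-2-70b-chat-version-hash>',
        input: { prompt },
      });

      // Poll until the prediction succeeds or fails, then return its output
      const completed = await replicate.wait(prediction);
      return completed.output;
    }

    complete('Explain what Replicate does in one sentence.')
      .then(console.log)
      .catch(console.error);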

The Vercel Replicate integration can be accessed through the AI tab on your Vercel dashboard.

To connect Replicate to your Vercel project, follow these steps:

  1. Navigate to the AI tab in your Vercel dashboard
  2. Select Replicate from the list of providers, and press Add
  3. Review the provider information, and press Add Provider
  4. You can now select which projects the provider will have access to. You can choose from All Projects or Specific Projects
    • If you select Specific Projects, you'll be prompted to select the projects you want to connect to the provider. The list will display projects associated with your scoped team
    • Multiple projects can be selected during this step
  5. Select the Connect to Project button
  6. You'll be redirected to the provider's website to complete the connection process
  7. Once the connection is complete, you'll be redirected back to the Vercel dashboard, and the provider integration dashboard page. From here you can manage your provider settings, view usage, and more
  8. Pull the environment variables into your project using the Vercel CLI:
    vercel env pull .env.development.local
  9. Install the provider's client library and the Vercel AI SDK:
    pnpm i replicate ai
  10. Connect your project using the code below (Next.js App Router shown):
    // app/api/chat/route.ts
    import { ReplicateStream, StreamingTextResponse } from 'ai';
    import Replicate from 'replicate';
    import { experimental_buildLlama2Prompt } from 'ai/prompts';

    const replicate = new Replicate({
      auth: process.env.REPLICATE_API_KEY || '',
    });

    export const runtime = 'edge';

    export async function POST(req: Request) {
      const { messages } = await req.json();

      const response = await replicate.predictions.create({
        stream: true,
        version: '2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1',
        input: {
          prompt: experimental_buildLlama2Prompt(messages),
        },
      });

      const stream = await ReplicateStream(response);
      return new StreamingTextResponse(stream);
    }
  11. Add the provider to your page using the code below (Next.js App Router shown; a standalone sketch for calling this route outside of React follows these steps):
    // app/chat/page.tsx
    'use client';

    import { useChat } from 'ai/react';

    export default function Chat() {
      const { messages, input, handleInputChange, handleSubmit } = useChat();
      return (
        <div>
          {messages.map((m) => (
            <div key={m.id}>
              {m.role === 'user' ? 'User: ' : 'AI: '}
              {m.content}
            </div>
          ))}
          <form onSubmit={handleSubmit}>
            <input
              value={input}
              placeholder="Say something..."
              onChange={handleInputChange}
            />
          </form>
        </div>
      );
    }
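
Once both files are in place, the chat UI streams model responses in the browser. If you want to exercise the /api/chat route outside of React, for example as a quick smoke test while your dev server is running, the sketch below posts a message and prints the streamed text. The local URL and the role/content message shape are assumptions based on what useChat sends by default.

    // scripts/chat-smoke-test.ts: a hedged sketch, not part of the official guide
    async function main() {
      // Assumes the app is running locally on port 3000 (e.g. `next dev` or `vercel dev`)
      const res = await fetch('http://localhost:3000/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          messages: [{ role: 'user', content: 'Tell me a short joke.' }],
        }),
      });

      if (!res.ok || !res.body) {
        throw new Error(`Request failed with status ${res.status}`);
      }

      // StreamingTextResponse streams plain text, so decode and print chunks as they arrive
      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      let done = false;
      while (!done) {
        const { value, done: finished } = await reader.read();
        done = finished;
        if (value) process.stdout.write(decoder.decode(value, { stream: true }));
      }
      process.stdout.write('\n');
    }

    main().catch(console.error);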

You can also deploy a template to Vercel that uses a pre-trained model from Replicate.

Last updated on May 2, 2024