Vercel Replicate Integration
Learn how to integrate Replicate with Vercel.

Replicate provides a platform for accessing and deploying a wide range of open-source artificial intelligence models. These models span various AI applications such as image and video processing, natural language processing, and audio synthesis. With the Vercel Replicate integration, you can incorporate these AI capabilities into your applications, enabling advanced functionality and better user experiences.
You can use the Vercel and Replicate integration to power a variety of AI applications, including:
- Content generation: Use Replicate for generating text, images, and audio content in creative and marketing applications
- Image and video processing: Use Replicate in applications for image enhancement, style transfer, or object detection
- NLP and chatbots: Use Replicate's language processing models in chatbots and natural language interfaces
Replicate models cover a broad spectrum of AI applications, ranging from image and video processing to natural language processing and audio synthesis. The models available through the integration include:

| Model | Type | Description |
| --- | --- | --- |
| Blip | Image | Generate image captions |
| Flux 1.1 Pro | Image | Faster, better FLUX Pro. Text-to-image model with excellent image quality, prompt adherence, and output diversity |
| Flux.1 Dev | Image | A 12 billion parameter rectified flow transformer capable of generating images from text descriptions |
| Flux.1 Pro | Image | State-of-the-art image generation with top-of-the-line prompt following, visual quality, image detail, and output diversity |
| Flux.1 Schnell | Image | The fastest image generation model, tailored for local development and personal use |
| Ideogram v2 | Image | An excellent image model with state-of-the-art inpainting, prompt comprehension, and text rendering |
| Ideogram v2 Turbo | Image | A fast image model with state-of-the-art inpainting, prompt comprehension, and text rendering |
| Incredibly Fast Whisper | Audio | whisper-large-v3, incredibly fast, powered by Hugging Face Transformers |
| Llama 3 70B Instruct | Chat | A 70 billion parameter language model from Meta, fine-tuned for chat completions |
| Llama 3 8B Instruct | Chat | An 8 billion parameter language model from Meta, fine-tuned for chat completions |
| Llama 3.1 405B Instruct | Chat | Meta's flagship 405 billion parameter language model, fine-tuned for chat completions |
| LLaVA 13B | Image | Visual instruction tuning towards large language and vision models with GPT-4 level capabilities |
| Moondream2 | Image | A small vision language model designed to run efficiently on edge devices |
| Recraft V3 | Image | Recraft V3 (code-named red_panda) is a text-to-image model that can render long text and generate images in a wide range of styles; ranked state of the art in the Artificial Analysis Text-to-Image Benchmark |
| Recraft V3 SVG | Image | Recraft V3 SVG (code-named red_panda) is a text-to-image model that generates high-quality SVG images, including logotypes and icons, in a wide range of styles |
| Sana | Image | A fast image model with wide artistic range and resolutions up to 4096x4096 |
| Stable Diffusion 3.5 Large | Image | A text-to-image model that generates high-resolution images with fine details; supports various artistic styles and produces diverse outputs from the same prompt, thanks to Query-Key Normalization |
| Stable Diffusion 3.5 Large Turbo | Image | A text-to-image model that generates high-resolution images with fine details and a focus on fewer inference steps |
| Stable Diffusion 3.5 Medium | Image | A 2.5 billion parameter image model with the improved MMDiT-X architecture |
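Once the integration is connected (the setup steps follow below), any of these models can be called with the Replicate JavaScript client. The following is a minimal sketch, run as an ES module, assuming REPLICATE_API_TOKEN is available in your environment and that Flux.1 Schnell is published under the black-forest-labs/flux-schnell identifier (verify the slug on replicate.com):

run-flux.ts
// run-flux.ts (hypothetical standalone script, separate from the Next.js app below)
import Replicate from 'replicate';

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

// 'black-forest-labs/flux-schnell' is the assumed identifier for the
// Flux.1 Schnell model listed above; confirm it on replicate.com.
const output = await replicate.run('black-forest-labs/flux-schnell', {
  input: { prompt: 'a watercolor painting of a lighthouse at dawn' },
});

// The exact shape of `output` depends on the model; image models typically
// return one or more image URLs or file objects.
console.log(output);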
The Vercel Replicate integration can be accessed through the AI tab on your Vercel dashboard.
To follow this guide, you'll need the following:
- An existing Vercel project
- The latest version of Vercel CLI, which you can install or update with:
terminal
pnpm i -g vercel@latest
- Navigate to the AI tab in your Vercel dashboard
- Select Replicate from the list of providers, and press Add
- Review the provider information, and press Add Provider
- You can now select which projects the provider will have access to. You can choose from All Projects or Specific Projects
- If you select Specific Projects, you'll be prompted to select the projects you want to connect to the provider. The list will display projects associated with your scoped team
- Multiple projects can be selected during this step
- Select the Connect to Project button
- You'll be redirected to the provider's website to complete the connection process
- Once the connection is complete, you'll be redirected back to the provider's integration dashboard page in Vercel. From here you can manage your provider settings, view usage, and more
- Pull the environment variables into your project using Vercel CLI
terminal
vercel env pull .env.development.local
- Install the provider's package:
terminal
pnpm i replicate
- Connect your project using the code below:
app/api/predictions/route.ts
// app/api/predictions/route.ts
import { NextResponse } from 'next/server';
import Replicate from 'replicate';

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

// In production and preview deployments (on Vercel), the VERCEL_URL environment variable is set.
// In development (on your local machine), the NGROK_HOST environment variable is set.
const WEBHOOK_HOST = process.env.VERCEL_URL
  ? `https://${process.env.VERCEL_URL}`
  : process.env.NGROK_HOST;

export async function POST(request: Request) {
  if (!process.env.REPLICATE_API_TOKEN) {
    throw new Error(
      'The REPLICATE_API_TOKEN environment variable is not set. See README.md for instructions on how to set it.',
    );
  }

  const { prompt } = await request.json();

  const options: Parameters<typeof replicate.predictions.create>[0] = {
    version: '8beff3369e81422112d93b89ca01426147de542cd4684c244b673b105188fe5f',
    input: { prompt },
  };

  if (WEBHOOK_HOST) {
    options.webhook = `${WEBHOOK_HOST}/api/webhooks`;
    options.webhook_events_filter = ['start', 'completed'];
  }

  // A prediction is the result you get when you run a model, including the input, output, and other details
  const prediction = await replicate.predictions.create(options);

  if (prediction?.error) {
    return NextResponse.json({ detail: prediction.error }, { status: 500 });
  }

  return NextResponse.json(prediction, { status: 201 });
}

app/api/predictions/[id]/route.ts
// app/api/predictions/[id]/route.ts
import { NextResponse } from 'next/server';
import Replicate from 'replicate';

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

// Poll for the prediction's status
export async function GET(
  request: Request,
  { params }: { params: { id: string } },
) {
  const { id } = params;
  const prediction = await replicate.predictions.get(id);

  if (prediction?.error) {
    return NextResponse.json({ detail: prediction.error }, { status: 500 });
  }

  return NextResponse.json(prediction);
}
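The route above registers a webhook at /api/webhooks, which is not shown in this example. The following is a minimal sketch of what that handler could look like, assuming you only want to receive Replicate's prediction updates and acknowledge them; in production you should also verify the webhook signature as described in Replicate's documentation:

app/api/webhooks/route.ts
// app/api/webhooks/route.ts (hypothetical handler for the webhook registered above)
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  // Replicate POSTs the full prediction object as JSON for each subscribed
  // event ('start' and 'completed' in the route above).
  const prediction = await request.json();

  console.log('Webhook received:', prediction.id, prediction.status);

  // Return a 2xx response promptly so Replicate does not retry the delivery.
  return NextResponse.json({ received: true });
}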
- Add the provider to your page using the code below:
app/chat/page.tsx
// app/chat/page.tsx
'use client';

import { useState, type FormEvent } from 'react';
import Image from 'next/image';

// Minimal shape of the Replicate prediction fields used by this page
type Prediction = {
  id: string;
  status: string;
  detail?: string;
  output?: string[];
};

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

export default function Home() {
  const [prediction, setPrediction] = useState<Prediction | null>(null);
  const [error, setError] = useState<string | null>(null);

  const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    const promptInput = e.currentTarget.elements.namedItem(
      'prompt',
    ) as HTMLInputElement;

    const response = await fetch('/api/predictions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        prompt: promptInput.value,
      }),
    });
    let prediction: Prediction = await response.json();
    if (response.status !== 201) {
      setError(prediction.detail ?? null);
      return;
    }
    setPrediction(prediction);

    // Poll the prediction endpoint until the model has finished
    while (
      prediction.status !== 'succeeded' &&
      prediction.status !== 'failed'
    ) {
      await sleep(200);
      const response = await fetch('/api/predictions/' + prediction.id);
      prediction = await response.json();
      if (response.status !== 200) {
        setError(prediction.detail ?? null);
        return;
      }
      console.log({ prediction: prediction });
      setPrediction(prediction);
    }
  };

  return (
    <div className="container max-w-2xl mx-auto p-5">
      <h1 className="py-6 text-center font-bold text-2xl">
        Dream something with{' '}
        <a href="https://replicate.com/stability-ai/sdxl?utm_source=project&utm_project=getting-started">
          SDXL
        </a>
      </h1>
      <form className="w-full flex" onSubmit={handleSubmit}>
        <input
          type="text"
          className="flex-grow"
          name="prompt"
          placeholder="Enter a prompt to display an image"
        />
        <button className="button" type="submit">
          Go!
        </button>
      </form>
      {error && <div>{error}</div>}
      {prediction && (
        <>
          {prediction.output && (
            <div className="image-wrapper mt-5">
              <Image
                src={prediction.output[prediction.output.length - 1]}
                alt="output"
                sizes="100vw"
                height={768}
                width={768}
              />
            </div>
          )}
          <p className="py-3 text-sm opacity-50">Status: {prediction.status}</p>
        </>
      )}
    </div>
  );
}
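The API route above pins a specific model version by its hash. If you would rather call one of the official models from the table by name, the Replicate client also accepts a model identifier in place of a version. The following is a minimal sketch; the model option and the black-forest-labs/flux-1.1-pro slug are assumptions to verify against the model's page on replicate.com:

app/api/predictions/route.ts (variant)
// Hypothetical variant of the POST handler above that references an official
// model by name instead of a pinned version hash.
import { NextResponse } from 'next/server';
import Replicate from 'replicate';

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const prediction = await replicate.predictions.create({
    // Assumed identifier for the Flux 1.1 Pro model listed above
    model: 'black-forest-labs/flux-1.1-pro',
    input: { prompt },
  });

  if (prediction?.error) {
    return NextResponse.json({ detail: prediction.error }, { status: 500 });
  }

  return NextResponse.json(prediction, { status: 201 });
}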
You can also deploy a template to Vercel that uses a pre-trained model from Replicate.