
Vercel Anyscale Integration

Learn how to integrate Anyscale with Vercel.

Anyscale Endpoints offers scalable computing solutions for handling complex, large-scale data processing tasks. It is particularly suited for distributed computing environments. The Vercel Anyscale integration enables you to leverage these capabilities in your applications, enhancing their scalability and computational efficiency.

Anyscale is temporarily unavailable in the Vercel Marketplace. Existing integrations will continue to work, but new integrations cannot be created at this time.

You can use the Vercel and Anyscale integration to power a variety of AI applications, including:

  • Distributed data processing: Use Anyscale for handling large-scale, distributed data processing tasks efficiently
  • Machine learning workflows: Use Anyscale for training and deploying complex machine learning models that require high computational power
  • Real-time analytics: Use Anyscale for real-time data analytics, ideal for applications needing quick data processing and insight extraction

Anyscale models provide scalable solutions for complex computational tasks, emphasizing efficiency in distributed computing and large-scale data processing across various industries.

CodeLlama-34b-Instruct

Type: Code

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the 34B parameter version, fine-tuned for instructions and designed for general code synthesis and understanding.

CodeLlama-70b-Instruct

Type: Code

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the 70B parameter version, fine-tuned for instructions and designed for general code synthesis and understanding.

Llama-2-13b-Chat

Type: Chat

Meta's Llama 2 13 billion parameter chat model.

Llama-2-70b-Chat

Type: Chat

Meta's Llama 2 70 billion parameter chat model.

Llama-2-7b-Chat

Type: Chat

Meta's Llama 2 7 billion parameter chat model.

Mistral-7b-Instruct-v0.1

Type: Chat

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.1 generative text model, trained on a variety of publicly available conversation datasets. This model supports function calling and JSON mode.
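Because Anyscale Endpoints exposes an OpenAI-compatible API, JSON mode can be requested with the standard `response_format` parameter from the chat-completions schema. The sketch below shows the shape of such a request body; the exact fields Anyscale honors are an assumption to verify against Anyscale's own documentation:

```typescript
// Sketch of a JSON-mode chat-completions request body for
// mistralai/Mistral-7B-Instruct-v0.1. Field names follow the OpenAI
// chat-completions schema (an assumption for Anyscale Endpoints).
const body = {
  model: 'mistralai/Mistral-7B-Instruct-v0.1',
  response_format: { type: 'json_object' }, // ask the model to emit valid JSON
  messages: [
    { role: 'system', content: 'Reply with a JSON object: {"city": string}' },
    { role: 'user', content: 'Where is the Eiffel Tower?' },
  ],
};

console.log(JSON.stringify(body, null, 2));
```

This body would be passed to `anyscale.chat.completions.create(...)` exactly like the non-JSON requests shown later in this guide.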

Mixtral-8x7B-Instruct-v0.1

Type: Chat

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.

NeuralHermes-2.5-Mistral-7B

Type: Chat

The mlabonne/NeuralHermes-2.5-Mistral-7B Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-Instruct-v0.1 generative text model, trained on a variety of publicly available conversation datasets.

zephyr-7b-beta

Type: Chat

Zephyr-7B-β is the second model in the series, and is a fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO).
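Note that the model IDs used in API calls differ from the display names above: they follow the Hugging Face `org/name` form. The Llama 2 ID below is the one used in this guide's route handler; the Mistral IDs follow the same convention and should be verified in the Anyscale console before use:

```typescript
// Model IDs for Anyscale's OpenAI-compatible API. The Llama 2 ID appears in
// this guide's route handler; the others are assumptions that follow the
// same Hugging Face naming convention -- verify them in the Anyscale console.
const MODELS = {
  llama2_70b_chat: 'meta-llama/Llama-2-70b-chat-hf',
  mistral_7b_instruct: 'mistralai/Mistral-7B-Instruct-v0.1',
  mixtral_8x7b_instruct: 'mistralai/Mixtral-8x7B-Instruct-v0.1',
};

console.log(Object.values(MODELS).join('\n'));
```

Swapping models in the route handler later in this guide is then a one-line change to the `model` field.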

The Vercel Anyscale Endpoints integration can be accessed through the AI tab on your Vercel dashboard.

To connect Anyscale Endpoints to your project, follow these steps:

  1. Navigate to the AI tab in your Vercel dashboard
  2. Select Anyscale Endpoints from the list of providers, and press Add
  3. Review the provider information, and press Add Provider
  4. You can now select which projects the provider will have access to. You can choose from All Projects or Specific Projects
    • If you select Specific Projects, you'll be prompted to select the projects you want to connect to the provider. The list will display projects associated with your scoped team
    • Multiple projects can be selected during this step
  5. Select the Connect to Project button
  6. You'll be redirected to the provider's website to complete the connection process
  7. Once the connection is complete, you'll be redirected back to the provider integration page of your Vercel dashboard. From here you can manage your provider settings, view usage, and more
  8. Pull the environment variables into your project using Vercel CLI:

     ```shell
     vercel env pull .env.development.local
     ```
  9. Install the provider's package:

     ```shell
     pnpm i openai ai
     ```
  10. Connect your project using the code below:

      app/api/chat/route.ts

      ```typescript
      import { OpenAIStream, StreamingTextResponse } from 'ai';
      import OpenAI from 'openai';

      // Anyscale Endpoints is OpenAI-compatible, so the OpenAI client
      // works with a custom baseURL
      const anyscale = new OpenAI({
        apiKey: process.env.OPENAI_API_KEY || '',
        baseURL: 'https://api.endpoints.anyscale.com/v1',
      });

      export async function POST(req: Request) {
        // Extract the `messages` from the body of the request
        const { messages } = await req.json();

        // Request the OpenAI-compatible API for the response based on the prompt
        const response = await anyscale.chat.completions.create({
          model: 'meta-llama/Llama-2-70b-chat-hf',
          stream: true,
          messages: messages,
        });

        // Convert the response into a friendly text-stream
        const stream = OpenAIStream(response);

        // Respond with the stream
        return new StreamingTextResponse(stream);
      }
      ```
  11. Add the provider to your page using the code below:

      app/chat/page.tsx

      ```tsx
      'use client';

      import { useChat } from 'ai/react';

      export default function Chat() {
        const { messages, input, handleInputChange, handleSubmit } = useChat();
        return (
          <div>
            {messages.map((m) => (
              <div key={m.id}>
                {m.role === 'user' ? 'User: ' : 'AI: '}
                {m.content}
              </div>
            ))}
            <form onSubmit={handleSubmit}>
              <input
                value={input}
                placeholder="Say something..."
                onChange={handleInputChange}
              />
            </form>
          </div>
        );
      }
      ```
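With the route handler and page in place, the API route can also be exercised outside the UI. The sketch below shows the shape of the JSON body that `useChat` sends to the route; the field names are an assumption based on the Vercel AI SDK's chat protocol, and the localhost URL assumes `next dev` on the default port:

```typescript
// Shape of the JSON body that `useChat` POSTs to the chat route (an
// assumption based on the Vercel AI SDK's chat protocol).
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

const payload: { messages: ChatMessage[] } = {
  messages: [{ role: 'user', content: 'Hello!' }],
};

// With `next dev` running, the route can be called directly, e.g.:
// await fetch('http://localhost:3000/api/chat', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload),
// });
// The response streams plain text chunks back as they are generated.

console.log(JSON.stringify(payload));
```

This is a convenient way to confirm the Anyscale credentials and model ID work before wiring up the front end.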
Last updated on June 19, 2024