
Fragments by E2B

This is an open-source version of apps like Anthropic's Claude Artifacts, Vercel v0, or GPT Engineer.

Powered by the E2B SDK.

→ Try on fragments.e2b.dev

Features

  • Based on Next.js 14 (App Router, Server Actions), shadcn/ui, TailwindCSS, Vercel AI SDK.
  • Uses the E2B SDK to securely execute code generated by AI.
  • Streaming in the UI.
  • Can install and use any package from npm or pip.
  • Supported stacks (add your own):
    • 🔸 Python interpreter
    • 🔸 Next.js
    • 🔸 Vue.js
    • 🔸 Streamlit
    • 🔸 Gradio
  • Supported LLM Providers (add your own):
    • 🔸 OpenAI
    • 🔸 Anthropic
    • 🔸 Google AI
    • 🔸 Mistral
    • 🔸 Groq
    • 🔸 Fireworks
    • 🔸 Together AI
    • 🔸 Ollama
  • Integrates with the Morph Apply model for token-efficient, accurate, and faster code editing.

Make sure to give us a star!

Get started

Prerequisites

  • git
  • A recent version of Node.js and the npm package manager
  • E2B API Key
  • LLM Provider API Key

1. Clone the repository

In your terminal:

git clone https://github.com/e2b-dev/fragments.git

2. Install the dependencies

Enter the repository:

cd fragments

Run the following to install the required dependencies:

npm i

3. Set the environment variables

Create a .env.local file and set the following:

# Get your API key here - https://e2b.dev/
E2B_API_KEY="your-e2b-api-key"
# OpenAI API Key
OPENAI_API_KEY=
# Other providers
ANTHROPIC_API_KEY=
GROQ_API_KEY=
FIREWORKS_API_KEY=
TOGETHER_API_KEY=
GOOGLE_AI_API_KEY=
GOOGLE_VERTEX_CREDENTIALS=
MISTRAL_API_KEY=
XAI_API_KEY=
### Optional env vars
# (on by default) Get your MORPH key here - https://morphllm.com/dashboard/api-keys
MORPH_API_KEY=
# Domain of the site
NEXT_PUBLIC_SITE_URL=
# Rate limit
RATE_LIMIT_MAX_REQUESTS=
RATE_LIMIT_WINDOW=
# Vercel/Upstash KV (short URLs, rate limiting)
KV_REST_API_URL=
KV_REST_API_TOKEN=
# Supabase (auth)
SUPABASE_URL=
SUPABASE_ANON_KEY=
# PostHog (analytics)
NEXT_PUBLIC_POSTHOG_KEY=
NEXT_PUBLIC_POSTHOG_HOST=
### Disabling functionality (when uncommented)
# Disable API key and base URL input in the chat
# NEXT_PUBLIC_NO_API_KEY_INPUT=
# NEXT_PUBLIC_NO_BASE_URL_INPUT=
# Hide local models from the list of available models
# NEXT_PUBLIC_HIDE_LOCAL_MODELS=
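The optional rate-limit variables above are plain strings; here is a hypothetical helper (not the app's actual code) showing how they might be read with fallbacks when unset:

```python
import os

def read_rate_limit(env=os.environ):
    """Read optional rate-limit settings, falling back to defaults
    when the variables are unset or empty."""
    max_requests = int(env.get("RATE_LIMIT_MAX_REQUESTS") or 10)
    window = int(env.get("RATE_LIMIT_WINDOW") or 60)
    return max_requests, window

print(read_rate_limit({}))  # defaults when nothing is set -> (10, 60)
print(read_rate_limit({"RATE_LIMIT_MAX_REQUESTS": "100",
                       "RATE_LIMIT_WINDOW": "30"}))  # -> (100, 30)
```

The default values (10 requests per 60 seconds) are illustrative only, not the app's real defaults.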

4. Start the development server

npm run dev

5. Build the web app

npm run build

Customize

Adding custom personas

  1. Make sure E2B CLI is installed and you're logged in.

  2. Add a new folder under sandbox-templates/

  3. Initialize a new template using E2B CLI:

    e2b template init

    This will create a new file called e2b.Dockerfile.

  4. Adjust the e2b.Dockerfile

    Here's an example streamlit template:

    # You can use most Debian-based base images
    FROM python:3.12-slim
    RUN pip3 install --no-cache-dir streamlit pandas numpy matplotlib requests seaborn plotly
    # Copy the code to the container
    WORKDIR /home/user
    COPY . /home/user
  5. Specify a custom start command in e2b.toml:

    start_cmd = "cd /home/user && streamlit run app.py"
  6. Deploy the template with the E2B CLI

    e2b template build --name <template-name>

    After the build has finished, you should get the following message:

    ✅ Building sandbox template <template-id> <template-name> finished.
  7. Open lib/templates.json in your code editor.

    Add your new template to the list. Here's an example for Streamlit:

    "streamlit-developer": {
      "name": "Streamlit developer",
      "lib": [
        "streamlit",
        "pandas",
        "numpy",
        "matplotlib",
        "requests",
        "seaborn",
        "plotly"
      ],
      "file": "app.py",
      "instructions": "A streamlit app that reloads automatically.",
      "port": 8501 // can be null
    },

    Provide a template id (as the key), a name, the list of dependencies, the entrypoint file, and an optional port. You can also add additional instructions that will be given to the LLM.

  8. Optionally, add a new logo under public/thirdparty/templates
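Before restarting the dev server, it can help to sanity-check the shape of the new lib/templates.json entry from step 7. A minimal sketch (hypothetical check, not part of the app):

```python
# The entry from step 7, as Python data (JSON null becomes None).
entry = {
    "name": "Streamlit developer",
    "lib": ["streamlit", "pandas", "numpy", "matplotlib",
            "requests", "seaborn", "plotly"],
    "file": "app.py",
    "instructions": "A streamlit app that reloads automatically.",
    "port": 8501,  # may be None for templates without a web server
}

def validate_template(entry):
    """Check the fields described in step 7: name, lib, file, optional port."""
    assert isinstance(entry["name"], str) and entry["name"]
    assert isinstance(entry["lib"], list)
    assert isinstance(entry["file"], str) and entry["file"]
    assert entry["port"] is None or isinstance(entry["port"], int)
    return True

print(validate_template(entry))  # -> True
```

Note that `// can be null` above is a comment for the reader; strict JSON parsers reject comments, so follow whatever style the existing entries in the file use.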

Adding custom LLM models

  1. Open lib/models.json in your code editor.

  2. Add a new entry to the models list:

    {
      "id": "mistral-large",
      "name": "Mistral Large",
      "provider": "Ollama",
      "providerId": "ollama"
    }

    Where id is the model id, name is the model name (visible in the UI), provider is the provider name, and providerId is the provider tag (see adding providers below).
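A hypothetical lookup over such entries, illustrating how the id and providerId fields relate (a sketch only, not the app's code):

```python
# A small models list in the shape of lib/models.json entries.
models = [
    {"id": "mistral-large", "name": "Mistral Large",
     "provider": "Ollama", "providerId": "ollama"},
]

def find_model(model_id):
    """Return the entry whose id matches, or None.
    providerId ties the entry back to providerConfigs in lib/models.ts."""
    return next((m for m in models if m["id"] == model_id), None)

print(find_model("mistral-large")["providerId"])  # -> ollama
print(find_model("unknown-model"))  # -> None
```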

Adding custom LLM providers

  1. Open lib/models.ts in your code editor.

  2. Add a new entry to the providerConfigs list:

    Example for fireworks:

    fireworks: () =>
      createOpenAI({
        apiKey: apiKey || process.env.FIREWORKS_API_KEY,
        baseURL: baseURL || 'https://api.fireworks.ai/inference/v1',
      })(modelNameString),
  3. Optionally, adjust the default structured output mode in the getDefaultMode function:

    if (providerId === 'fireworks') {
      return 'json'
    }
  4. Optionally, add a new logo under public/thirdparty/logos
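The getDefaultMode adjustment in step 3 amounts to a provider-to-mode lookup. A language-agnostic sketch in Python (the mode names and the 'auto' fallback are assumptions for illustration; check lib/models.ts for the actual values):

```python
# Structured-output mode per provider; anything unlisted gets the fallback.
DEFAULT_MODES = {"fireworks": "json"}

def get_default_mode(provider_id, fallback="auto"):
    return DEFAULT_MODES.get(provider_id, fallback)

print(get_default_mode("fireworks"))  # -> json
print(get_default_mode("openai"))     # -> auto
```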

Contributing

As an open-source project, we welcome contributions from the community. If you are experiencing any bugs or want to add some improvements, please feel free to open an issue or pull request.
