An open-source, in-browser, AI-native IDE built with Next.js, FastAPI, OpenAI Agents SDK, and the Vercel AI Cloud. The platform combines real-time AI interactions with safe sandboxed execution environments and framework-defined infrastructure.
First, run the backend development server:
```bash
cd backend
vercel link
vercel env pull
# or manually set env vars
# cat .env.example > .env
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python server.py
```
Open http://localhost:8081/docs with your browser to see the backend API docs.
Then, run the frontend development server:
```bash
# in a separate terminal
cd frontend
npm i
npm run dev
```
Open http://localhost:3000 with your browser to see the frontend.
The frontend is built with Next.js and the Monaco Editor. All API calls go to our FastAPI backend, which handles agent and sandbox execution.
The backend is a decoupled FastAPI app deployed as a function on Fluid Compute, which is optimized for prompt-based workloads. Since LLMs often idle while reasoning, Fluid Compute reallocates those unused cycles to serve other requests or reduce cost.
The agent runs on the OpenAI Agents SDK which routes LLM requests through the AI Gateway to support multiple models.
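The core idea behind gateway routing is that one model identifier carries both the provider and the model, so a single request path can fan out to many providers. A toy sketch of that dispatch step (the model ids and provider names here are illustrative assumptions, not the Gateway's actual catalog or implementation):

```python
def route_model(model_id: str) -> tuple[str, str]:
    """Split a gateway-style "provider/model" id into its two parts.

    Toy illustration only: the real AI Gateway also handles auth,
    failover, and usage tracking behind this kind of dispatch.
    """
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

# Example (ids are illustrative):
provider, model = route_model("openai/gpt-4o")
```

Because the agent only ever sees the unified id, swapping models is a one-string change rather than a new client integration.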
The agent can acquire ephemeral sandboxes to run code. Each sandbox is a safe, isolated environment that expires after a short timeout. Sandboxes have no access to any code outside their respective project in the editor, making them safe places to run arbitrary code.
Agents can use these sandboxes to install dependencies, run commands, make edits, and execute the code.
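The sandbox lifecycle described above can be sketched in miniature: acquire an isolated working directory, execute with a hard timeout, then tear everything down. This is a toy illustration only; a temp directory plus a timeout is not a real security boundary, and the platform's sandboxes add OS-level isolation on top of this idea.

```python
import shutil
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout: float = 5.0) -> str:
    """Run a Python snippet in a throwaway working directory with a hard timeout.

    Sketch of the acquire -> execute -> expire lifecycle, NOT a security
    boundary on its own.
    """
    workdir = tempfile.mkdtemp(prefix="sandbox-")  # acquire an isolated workspace
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=timeout,  # the "short timeout" after which execution is cut off
        )
        return result.stdout
    finally:
        shutil.rmtree(workdir)  # expire the sandbox: nothing persists afterward
```

A `subprocess.TimeoutExpired` here plays the role of the sandbox expiring mid-run; callers would surface that to the agent as a failed tool call.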
The agent streams real-time updates back to the frontend so users can see the agent's progress instantly.
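One common way to stream such progress updates is Server-Sent Events, where each update is serialized as a `data:` frame. A minimal sketch of the framing step (the event shapes are assumptions; the actual backend's transport and payload schema may differ):

```python
import json

def sse_frames(updates):
    """Yield agent progress updates as Server-Sent Events frames.

    Sketch of the streaming idea; event field names are illustrative.
    """
    for update in updates:
        # SSE frames are "data: <payload>" terminated by a blank line.
        yield f"data: {json.dumps(update)}\n\n"

frames = list(sse_frames([
    {"type": "tool_call", "name": "run_command"},
    {"type": "done"},
]))
```

On the frontend, each frame arrives as soon as it is yielded, which is what lets users watch the agent's progress instead of waiting for a final response.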