Minimal FastAPI app that runs the OpenAI Agents SDK with Vercel Sandbox on Vercel's Python runtime. Each request spins up an isolated microVM, gives the agent shell access to analyze data, and tears it down when done.
Install the Vercel CLI if you don't already have it (`npm i -g vercel`).

Environment variables:

| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes | API key for the OpenAI provider. |
| `VERCEL_TOKEN` | Yes | Vercel access token. Create one at https://vercel.com/account/tokens. |
| `VERCEL_TEAM_ID` | Yes | Your Vercel team ID (starts with `team_`). Found under Team Settings > General. |
| `VERCEL_PROJECT_ID` | Yes | Your Vercel project ID (starts with `prj_`). Found under Project Settings > General. |
| `OPENAI_DEFAULT_MODEL` | No | Default model when the request body omits `model`. Falls back to `gpt-4.1-mini`. |
Copy the example env file and fill in your values:

```
cp .env.example .env.local
```

Then edit `.env.local` with your keys:

```
OPENAI_API_KEY=sk-...
VERCEL_TOKEN=your_access_token
VERCEL_TEAM_ID=team_xxx
VERCEL_PROJECT_ID=prj_xxx
```
Install dependencies and start the dev server:

```
uv sync
uv run uvicorn app:app --reload --host 127.0.0.1 --port 8000
```
Open http://127.0.0.1:8000 to use the interactive demo. The agent has shell access to a sandbox with sample sales data (sales.csv).
API endpoints:

- `GET /api/health` returns `{"status": "ok", "openai_configured": true}`
- `POST /api/run` with `{"input": "Which region grew the most?"}` runs the sandbox agent

Deploy with the Vercel CLI:

```
vercel
```

Vercel detects `app.py` and the `app` ASGI instance. Dependencies come from `pyproject.toml`.
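A quick way to exercise `POST /api/run` from Python using only the standard library. The base URL assumes the local dev server; adjust it for a deployment, and allow a generous timeout since sandbox creation takes several seconds:

```python
import json
import urllib.request

def build_run_request(prompt: str, base_url: str = "http://127.0.0.1:8000") -> urllib.request.Request:
    """Build the POST /api/run request with the documented {"input": ...} body."""
    payload = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/run",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def run_agent(prompt: str, base_url: str = "http://127.0.0.1:8000", timeout: float = 120.0) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_run_request(prompt, base_url), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `run_agent("Which region grew the most?")` mirrors the request body shown in the endpoint list above.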
After deploying, make sure OPENAI_API_KEY, VERCEL_TOKEN, VERCEL_TEAM_ID, and VERCEL_PROJECT_ID are set under Project Settings > Environment Variables.
Sandbox creation and agent runs can take several seconds. Heavy workloads may need Fluid Compute or Vercel Workflow for durable steps.
How a request flows:

1. `POST /api/run` creates a fresh Vercel Sandbox microVM with sample data.
2. A `SandboxAgent` with `Shell` capability receives the user's prompt.
3. The sandbox is torn down when the run completes.

License: MIT (match your org's policy when publishing).
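The per-request lifecycle (create microVM, give the agent shell access, tear down) can be sketched with a context manager. All names here are illustrative stand-ins, not the real Vercel Sandbox SDK or the app's actual code:

```python
from contextlib import contextmanager
from dataclasses import dataclass, field

@dataclass
class FakeSandbox:
    """Stand-in for a Vercel Sandbox microVM seeded with sales.csv."""
    files: dict = field(default_factory=lambda: {"sales.csv": "region,revenue\n..."})
    alive: bool = True

    def run_shell(self, cmd: str) -> str:
        # A real sandbox would execute cmd inside the microVM; here we just echo it.
        return f"$ {cmd}"

@contextmanager
def sandbox_session():
    sb = FakeSandbox()      # 1. fresh microVM per request
    try:
        yield sb            # 2. the agent runs shell commands here
    finally:
        sb.alive = False    # 3. teardown when the run completes, even on error

def handle_run(prompt: str) -> dict:
    """Shape of the POST /api/run lifecycle, minus the real agent loop."""
    with sandbox_session() as sb:
        observation = sb.run_shell("head sales.csv")
        return {"input": prompt, "observation": observation}
```

The `try/finally` is the important part: the sandbox is destroyed even if the agent run raises, so no microVM outlives its request.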