This guide walks through building an agent that analyzes data inside an isolated Vercel Sandbox microVM using the OpenAI Agents SDK. The agent gets shell access to run commands, read files, and return insights, all inside an ephemeral Firecracker microVM with no access to your host filesystem, credentials, or network.
- Define: create a `SandboxAgent` with the built-in `Shell` capability
- Sandbox: spin up an isolated Vercel Sandbox microVM with workspace files
- Run: the agent plans and executes shell commands, with all side effects contained in the sandbox
Your orchestration logic (model calls, tool routing) runs locally or on your server. Only the shell commands execute inside the microVM.
- Python 3.11+
- The Vercel CLI installed (`npm i -g vercel`)
- A Vercel account with Sandbox access
- An OpenAI API key
Create a working directory, link a Vercel project, and pull credentials:
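A typical sequence looks like the following (the directory name is just an example):

```shell
mkdir sandbox-agent && cd sandbox-agent
vercel link
vercel env pull
```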
When prompted, select Create a new project. `vercel env pull` creates a `.env.local` file containing a `VERCEL_OIDC_TOKEN` that the Vercel SDK reads automatically.
Add your OpenAI API key to the same file:
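For example, by appending it (the value shown is a placeholder for your real key):

```shell
echo "OPENAI_API_KEY=sk-..." >> .env.local
```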
Install the Python dependencies:
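Assuming the SDK is published as `openai-agents` on PyPI (check the package name in the SDK docs if this fails), the install looks like:

```shell
pip install "openai-agents[vercel]" python-dotenv
```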
The `vercel` extra pulls in the Vercel Python SDK automatically. `python-dotenv` loads the `.env.local` credentials into the environment.
The `SandboxAgent` class combines a standard agent with sandbox capabilities. The built-in `Shell` capability gives your agent a tool to run commands inside the sandbox.
Create an `agent.py` file and load credentials:
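A minimal start for `agent.py`, assuming `python-dotenv` is used to read `.env.local` as described above:

```python
from dotenv import load_dotenv

# Load VERCEL_OIDC_TOKEN and OPENAI_API_KEY from .env.local
# so the Vercel and OpenAI SDKs can pick them up.
load_dotenv(".env.local")
```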
Add the imports at the top of the file:
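The guide names the classes below; the exact module paths are an assumption, so check the SDK reference if these imports fail:

```python
import asyncio

from agents import ModelSettings, Runner
# Hypothetical module path for the sandbox extension:
from agents.extensions.sandbox import (
    Manifest,
    SandboxAgent,
    SandboxRunConfig,
    Shell,
    VercelSandboxClientOptions,
)
```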
Define a manifest with sample data and create a `SandboxAgent` with the `Shell` capability. Setting `tool_choice="required"` ensures the agent runs commands rather than guessing:
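A sketch of this step, continuing `agent.py`. The `SandboxAgent` constructor arguments here are assumptions based on the description above, not confirmed signatures:

```python
# Seed the sandbox workspace with a small CSV (illustrative data).
manifest = Manifest(files={
    "sales.csv": "month,revenue\nJan,1200\nFeb,1550\nMar,980\n",
})

agent = SandboxAgent(
    name="Data Analyst",
    instructions="Answer questions by inspecting the workspace files with shell commands.",
    capabilities=[Shell()],  # built-in shell tool, no manual wiring
    manifest=manifest,       # files copied into the microVM
    model_settings=ModelSettings(tool_choice="required"),  # always run commands
)
```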
Create a sandbox session and run the agent. The `SandboxRunConfig` binds the agent to the session so tool calls execute inside the microVM:
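One possible shape for this step, continuing `agent.py`; the session helper and the `SandboxRunConfig` field are assumptions, not confirmed API:

```python
async def main() -> None:
    # Hypothetical session factory; the real API may differ.
    async with agent.create_session() as session:
        result = await Runner.run(
            agent,
            "Which month had the highest revenue in sales.csv?",
            run_config=SandboxRunConfig(session=session),  # assumed field name
        )
        print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```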
The agent receives a shell tool and automatically gets instructions about the workspace layout. It can run commands like `cat sales.csv` or `awk` one-liners to answer questions.
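For instance, an `awk` one-liner the agent might produce to find the top month in a `month,revenue` CSV (assuming the sample data above):

```shell
# Find the month with the highest revenue, skipping the header row.
awk -F, 'NR > 1 && $2 > max { max = $2; best = $1 } END { print best }' sales.csv
```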
`VercelSandboxClientOptions` controls how the microVM is provisioned:
- `timeout_ms`: how long the sandbox stays alive (default 270s)
- `runtime`: the sandbox runtime, for example `python3.12` or `node22`
- `resources`: compute resources like `{"vcpus": 2}` (2 vCPUs = 4 GB RAM)
- `env`: environment variables injected into the sandbox
- `exposed_ports`: ports forwarded from the sandbox to a public HTTPS endpoint
- `workspace_persistence`: `"tar"` (default) archives the workspace as a tarball; `"snapshot"` uses Vercel's native snapshot API for faster restore
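Putting the options above together (values are illustrative, and the constructor shape is an assumption):

```python
options = VercelSandboxClientOptions(
    timeout_ms=600_000,               # keep the microVM alive for 10 minutes
    runtime="python3.12",             # or "node22"
    resources={"vcpus": 2},           # 2 vCPUs = 4 GB RAM
    env={"ANALYSIS_MODE": "batch"},   # hypothetical variable, for illustration
    workspace_persistence="snapshot", # faster restore than the "tar" default
)
```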
Use `Runner.run_streamed` to see the agent's output as it works, rather than waiting for the full response:
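A streaming sketch reusing the `agent` and `session` from the run step above; `Runner.run_streamed` and `stream_events()` are standard Agents SDK calls, while the sandbox binding via `run_config` is an assumption:

```python
from openai.types.responses import ResponseTextDeltaEvent

async def stream_answer() -> None:
    result = Runner.run_streamed(
        agent,
        "Summarize sales.csv in two sentences.",
        run_config=SandboxRunConfig(session=session),  # assumed binding
    )
    # Print model text deltas as they arrive.
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(
            event.data, ResponseTextDeltaEvent
        ):
            print(event.data.delta, end="", flush=True)
```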
- Isolation: each agent runs in its own Firecracker microVM with no access to your host machine
- Egress control: restrict outbound traffic to specific domains using network policies, preventing data exfiltration
- Fast boot: sandboxes start in milliseconds, so you can create one per request
- Snapshots: cache expensive setup (package installs, repo clones) and skip it on subsequent runs
- Workspace files: seed the sandbox with data via `Manifest` before the agent runs
- Built-in capabilities: `Shell` gives the agent command execution without manual tool wiring
- Deploy the OpenAI Agents SDK template to get a working sandbox agent on Vercel in one click
- Vercel Sandbox docs: runtime options, networking policies, exposed ports, persistence, and resource limits
- Vercel Python SDK: lower-level sandbox control, blob storage, OIDC tokens, and runtime cache
- OpenAI Agents SDK docs: multi-agent workflows, tools, guardrails, streaming, and tracing