Wire Up the Sandbox
When building AI agents that execute code, you're essentially letting an LLM run arbitrary commands on your system. This introduces significant security risks:
- Unpredictable outputs: LLMs can hallucinate or generate malformed commands that behave unexpectedly
- Prompt injection: Malicious user input could trick the agent into running harmful commands like `rm -rf /` or accessing sensitive files
- Resource exhaustion: An infinite loop or memory-intensive command could crash your server
- Data exfiltration: Without isolation, the agent could read environment variables, credentials, or private data
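To underline the prompt-injection point: filtering commands on the host is not a real defense. The blocklist check below is a hypothetical illustration (not part of the tutorial's code) of the fragile alternative that sandboxing replaces:

```typescript
// Hypothetical host-side blocklist: easy to write, easy to bypass
// (e.g. `rm -rf "$HOME"/..`), which is why isolation beats filtering.
const DANGEROUS_PATTERNS: RegExp[] = [
  /rm\s+-rf\s+\//, // recursive delete from root
  /\.env\b/, // reading credential files
  /curl\s+.*\|\s*sh/, // piping remote scripts into a shell
];

export function looksDangerous(command: string): boolean {
  return DANGEROUS_PATTERNS.some((pattern) => pattern.test(command));
}
```

A sandbox makes this guesswork unnecessary: even a command that slips past any filter can only affect the disposable container.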
A sandbox provides an isolated execution environment that protects your host system. With Vercel Sandbox:
- Commands run in a separate container with no access to your actual filesystem
- Network access can be restricted
- Resource limits prevent runaway processes
- Even if the LLM generates dangerous commands, they can't affect your production environment
This means you can safely give your agent powerful tools like bash execution without worrying about what happens if something goes wrong.
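To make that concrete, here is a minimal sketch of the shape of a sandbox-backed bash tool. The `SandboxLike` interface, its `runCommand` signature, and this `createBashTool` are simplified stand-ins for illustration, not the real `@vercel/sandbox` or `./tools` APIs:

```typescript
// Simplified stand-in for a sandbox handle; the real API differs in detail.
interface SandboxLike {
  runCommand(cmd: string): Promise<{ stdout: string; exitCode: number }>;
}

// A tool factory: it closes over the sandbox so every command the LLM
// issues runs inside the isolated container, never on the host.
function createBashTool(sandbox: SandboxLike) {
  return {
    description: 'Run a bash command in the sandbox',
    async execute({ command }: { command: string }) {
      const { stdout, exitCode } = await sandbox.runCommand(command);
      return exitCode === 0 ? stdout : `command failed (exit ${exitCode})`;
    },
  };
}

// Fake sandbox for demonstration: echoes the command instead of running it.
const fakeSandbox: SandboxLike = {
  runCommand: async (cmd) => ({ stdout: `ran: ${cmd}`, exitCode: 0 }),
};

const bashTool = createBashTool(fakeSandbox);
```

Even if the model asks for `rm -rf /`, the blast radius is the container's filesystem, not your machine.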
Outcome
Your agent has a live Sandbox and a connected `bashTool`. It can execute bash commands, though it has no files to explore yet.
Fast Track
- Import `Sandbox` from `@vercel/sandbox` and `createBashTool` from `./tools` in `lib/agent.ts`
- Create a sandbox with `await Sandbox.create()` and pass it to `createBashTool`
- Add `bashTool` to the agent's `tools` object
Hands-on Exercise 2.1
Update `lib/agent.ts` to initialize the sandbox and wire up the bash tool.
Requirements:
- Import `Sandbox` from `@vercel/sandbox` and `createBashTool` from `./tools`
- Create a sandbox instance with `await Sandbox.create()`
- Pass the sandbox to `createBashTool` and add the result to the agent's `tools`
- Keep `instructions` as an empty string for now
Implementation hints:
- `Sandbox.create()` is async, so use top-level `await` (supported in Next.js server modules)
- The sandbox must be created before the agent definition since the agent needs the tool at construction time
- The key name in the `tools` object (`bashTool`) is what the LLM sees, so keep it descriptive
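The first hint (top-level `await`) can be seen in isolation. This sketch swaps `Sandbox.create()` for a hypothetical `createResource()` so it runs anywhere ES modules with top-level `await` are supported:

```typescript
// Stand-in for an async initializer like Sandbox.create()
// (hypothetical; the tutorial awaits Sandbox.create() instead).
async function createResource(): Promise<{ id: string }> {
  return { id: 'sandbox-1' };
}

// Top-level await: module evaluation pauses here, so everything
// defined below can safely use the resolved value.
const resource = await createResource();

export const config = {
  resourceId: resource.id, // available at construction time
};
```

This is why the sandbox can be created above the agent definition and handed to `createBashTool` synchronously.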
Init the sandbox
In `agent.ts`, initialize the sandbox above the agent definition:

```ts
import { Sandbox } from '@vercel/sandbox';

const sandbox = await Sandbox.create();
```

You can now pass this sandbox to the `createBashTool` function to create the tool in the agent:
```ts
import { createBashTool } from './tools';

export const agent = new ToolLoopAgent({
  // ...
  tools: {
    bashTool: createBashTool(sandbox)
  }
});
```

The agent now has access to the bash tool, but there are no files mounted to the sandbox for the agent to explore. You need to load files into the sandbox before you run the agent.
Try It
- Restart the dev server (`pnpm dev`) and open `http://localhost:3000`.
- Ask the agent: "list the files in the current directory"
- Watch the response. You should see the agent call `bashTool` with a command like `ls`:

```
[bashTool] $ ls
```

The output shows the sandbox's root directory, which is mostly empty (perhaps just system directories) because the sandbox starts clean.
If the agent runs `ls` and shows minimal output, that's correct. The sandbox is a fresh container with no user files. The next lesson loads the call transcripts into it.
Commit
```bash
git add lib/agent.ts
git commit -m "feat(agent): init sandbox and connect bash tool"
```

Done-When
- `lib/agent.ts` creates a sandbox with `Sandbox.create()`
- The bash tool is wired up in the agent's `tools` object
- The agent can execute bash commands (visible in the chat UI as tool calls)
- The sandbox is empty (no call files loaded yet)
Solution
```ts
import { ToolLoopAgent } from 'ai';
import { createBashTool } from './tools';
import { Sandbox } from '@vercel/sandbox';

const MODEL = 'anthropic/claude-opus-4.6';

const sandbox = await Sandbox.create();

export const agent = new ToolLoopAgent({
  model: MODEL,
  instructions: '',
  tools: {
    bashTool: createBashTool(sandbox)
  }
});
```