
Transform vague AI into actionable responses with structured system prompts

Your bot responds to questions, but users ignore the answers. Why? Because "Based on my analysis, it appears that..." followed by three paragraphs of hedging isn't helpful. Users need specific answers and clear next steps, not essay responses. System prompts fix this: 15 lines of prompt engineering turn vague AI into an actionable assistant.

Outcome

Transform vague AI responses into structured, actionable messages that users actually follow.

Fast Track

  1. Find the system prompt in respond-to-message.ts
  2. Add structured format requirements
  3. Test and see immediate behavior change

Building on Previous Lessons

  • From Section 1.2 (Repository Flyover): You saw how getThreadContextAsModelMessage fetches conversation history
  • From Section 3: Interaction surfaces trigger your bot—now we make those responses intelligent
  • From correlation middleware: Correlation IDs help track which prompt version generated which response
Why Use the AI SDK Here?

You could call provider APIs directly, but the AI SDK gives you higher-level primitives that fit Slack agents: streamText for streaming replies into threads, tools for safe Slack-side operations, and structured outputs for things like bug triage or status summaries. It also lets you swap models or providers without rewriting every handler—your Slack code focuses on context and control flow, not on per-provider HTTP details.
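As a rough illustration (not code from this repository), the core call looks something like the sketch below. The model string mirrors the one used later in this lesson, and the helper name is made up; the point is how little glue sits between your Slack code and the model.

import { streamText, type ModelMessage } from "ai";

// Hypothetical helper: stream a reply for a conversation.
// Swapping providers or models is a one-line change to the model identifier.
const streamReply = async (messages: ModelMessage[]) => {
  const { textStream } = await streamText({
    model: "openai/gpt-4o-mini", // swap to another provider's model id without touching handler code
    system: "You are Slack Agent, a helpful assistant in Slack.",
    messages,
  });

  // In a Slack handler you would forward each chunk into the thread instead of stdout.
  for await (const chunk of textStream) {
    process.stdout.write(chunk);
  }
};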

Context Fetching Primer

Before you change how responses are shaped, remember what context you’re feeding into the model. Your bot already fetches conversation context (you saw this in Repository Flyover). Two key utilities exist in server/lib/slack/utils.ts:

Thread context (getThreadContextAsModelMessage):

  • Uses conversations.replies to fetch all messages in a thread
  • Transforms Slack messages → ModelMessage[] format for AI
  • Identifies bot vs user messages based on bot_id

Channel context (getChannelContextAsModelMessage):

  • Uses conversations.history to fetch recent channel messages
  • Excludes thread replies (top-level only)
  • Useful for broad channel awareness

Both return SlackUIMessage[] (extends ModelMessage with Slack metadata like ts, user, thread_ts). Your AI handlers pass this context to streamText or generateObject—that's how the bot "remembers" conversations.
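Putting the primer together, a handler's use of this context looks roughly like the sketch below. The import path and the option shape passed to getThreadContextAsModelMessage are assumptions (check server/lib/slack/utils.ts for the real signature); only the overall flow, fetch context then pass it as messages, is the point.

import { streamText } from "ai";
import { getThreadContextAsModelMessage } from "../slack/utils";

// Sketch only: the option names below are assumptions, not the utility's documented signature.
const respondInThread = async (channel: string, thread_ts: string, botId?: string) => {
  // conversations.replies → SlackUIMessage[] (ModelMessage[] with Slack metadata attached)
  const messages = await getThreadContextAsModelMessage({ channel, ts: thread_ts, botId });

  // The fetched history becomes the model's conversation memory
  const { textStream } = await streamText({
    model: "openai/gpt-4o-mini",
    system: "You are Slack Agent, a helpful assistant in Slack.",
    messages,
  });

  return textStream;
};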

Hands-On Exercise 4.1

Make your bot's responses actionable by enforcing structure:

Requirements:

  1. Modify the system prompt in /slack-agent/server/lib/ai/respond-to-message.ts
  2. Enforce this exact response format:
    • Direct answer (no preamble)
    • Two specific next steps
  3. Add correlation logging to error handlers (from lesson 2.2)
  4. Test the behavior change

Implementation hints:

  • The system prompt is just a template string - modify it directly
  • Be explicit about format - AIs follow literal instructions
  • Thread context.correlation through from listeners so errors are traceable
  • You don’t construct context.correlation here; the correlation middleware from the Bolt Middleware lesson already populates it for you
  • Token costs matter: prompt tokens cost on every request

Minimal change example:

system: `You are Slack Agent.
 
Always end responses with:
 
**Next steps:**
• [Specific action 1]
• [Specific action 2]`

Try It

Before (current behavior):

@bot how do I fix the build error?

I'd be happy to help you with the build error. To provide 
the most accurate assistance, could you share more details 
about the error message you're seeing?

After (with structured prompt):

@bot how do I fix the build error?

Check the error output in your terminal for the specific 
file and line number causing the issue.

**Next steps:**
• Run npm run lint to identify syntax errors
• Check package.json for version mismatches

Expected log output:

[INFO]  bolt-app {
  event_id: 'Ev09EKNCBGR5',
  ts: '1734567890.123456',
  channel: 'C09D4DG727P',
  user: 'U09D6B53WP4'
} Processing app_mention
[DEBUG] Active tools: Set(2) { 'getChannelMessagesTool', 'updateAgentStatusTool' }

If an error occurs, you'll see correlation in error logs:

[ERROR] bolt-app {
  event_id: 'Ev09EKNCBGR5',
  ts: '1734567890.123456',
  thread_ts: undefined,
  error: 'Rate limit exceeded'
} AI stream creation failed

Notice the same event_id and ts appear in both success and error logs—that's correlation working.

Troubleshooting

Bot still gives vague responses:

  • Your prompt isn't explicit enough - be more directive
  • Check you're modifying the right system field
  • Restart the dev server after changes

Format is inconsistent:

  • Add "ALWAYS" or "MUST" to critical instructions
  • Provide an example in the prompt itself

Correlation fields missing from error logs:

  • Verify you're passing correlation: context.correlation when calling createTextStream
  • Check that listeners have context in their parameter destructuring
  • Ensure you're spreading ...correlation in the error logger call

Bot says "I need to check context" but doesn't:

  • Check logs for Active tools: Set(0) {}
  • This means no tools are available to the bot
  • See Advanced section for debugging hints

Commit

git add -A
git commit -m "feat(ai): enforce structured responses with actionable next steps"

Done-When

  • Responses always include "Next steps:" section
  • No more "I'd be happy to help" preambles
  • Bot gives direct answers immediately
  • Error logs include correlation fields (event_id, ts, thread_ts) for traceability

Solution

Step 1: Add context parameter to the interface

Update the RespondToMessageOptions interface to accept correlation context:

/slack-agent/server/lib/ai/respond-to-message.ts
interface RespondToMessageOptions {
  messages: ModelMessage[];
  event: KnownEventFromType<"message"> | KnownEventFromType<"app_mention">;
  channel?: string;
  thread_ts?: string;
  botId?: string;
  correlation?: {
    event_id?: string;
    ts?: string;
    thread_ts?: string;
  };
}

Step 2: Modify the system prompt and add correlation logging

Add the correlation parameter and update the error handler:

/slack-agent/server/lib/ai/respond-to-message.ts
export const createTextStream = async ({
  messages,
  event,
  channel,
  thread_ts,
  botId,
  correlation,  // ← Add this parameter
}: RespondToMessageOptions) => {
  try {
    const { textStream } = await streamText({
      model: "openai/gpt-4o-mini",
      system: `You are Slack Agent, a helpful assistant in Slack.
 
${
  "channel_type" in event && event.channel_type === "im"
    ? "You are in a direct message with the user."
    : "You are in a channel with multiple users."
}
 
RESPONSE RULES:
1. Give a direct answer immediately - no preambles like "I'd be happy to help"
2. Be concise but complete
3. ALWAYS end with exactly two next steps
 
Format your response like this:
[Direct answer to the question]
 
**Next steps:**
• [Specific action the user can take right now]
• [Alternative approach or follow-up action]
 
Always gather context from Slack before asking the user for clarification.
			`,
      messages,
      // ... rest of existing config unchanged
    });
    
    return textStream;
  } catch (error) {
    // ← Update error logging with correlation
    app.logger.error({
      ...correlation,
      error: error instanceof Error ? error.message : String(error),
    }, "AI stream creation failed");
    throw error;
  }
};

Keep the existing Core Rules section that already lives below this in respond-to-message.ts—you’re only inserting the RESPONSE RULES + format block at the top of the prompt, not replacing the rest.

Step 3: Pass context from listeners

Update app-mention.ts:

/slack-agent/server/listeners/events/app-mention.ts
const textStream = await createTextStream({
  messages,
  channel,
  thread_ts,
  botId: context.botId,
  event,
  correlation: context.correlation,  // ← Pass correlation through
});

Update direct-message.ts:

/slack-agent/server/listeners/messages/direct-message.ts
const textStream = await createTextStream({
  messages,
  channel: event.channel,
  thread_ts: event.thread_ts || event.ts,
  botId: context.botId,
  event,
  correlation: context.correlation,  // ← Pass correlation through
});

What changed:

  • Added correlation parameter to options interface
  • Updated error logging to include correlation fields (operation traceability)
  • Both listeners (app-mention and direct-message) now thread correlation context through
  • All errors from AI operations are now traceable back to specific Slack events

Advanced: Debug Context Tool Availability

Notice something odd? The bot says "I need to check context" but doesn't actually fetch it for @mentions. Time to investigate.

Debugging hints:

  1. Check the logs when you @mention the bot - what does Active tools: show?
  2. Look at /slack-agent/server/lib/ai/tools/index.ts - how does getActiveTools determine which tools are available?
  3. Compare event properties: What's different between message events and app_mention events?
  4. The bot thinks it's in a context where it can't access tools. Why?

Investigation questions:

  • Does an app_mention event have a channel_type property?
  • If not, how does getActiveTools know it's in a channel context?
  • What condition would make channel tools available for app_mentions?

Hint: The fix is small (~3 lines). Think about what you KNOW to be true when an app_mention event fires - the bot was mentioned in a channel, right? So it should have access to channel tools...