
How a Slack mention becomes an AI reply

When you mention your bot, the event travels through Nitro → Bolt → your listeners → AI orchestration. If you don't know where along that path to add logs or fix bugs, you waste time. This lesson traces the complete route so you can debug quickly and extend with confidence.

Outcome

Understand where everything lives and trace a bot mention from HTTP entrypoint → AI reply.

Fast Track

  1. Trace the path: events.post.ts → app.ts → app-mention.ts → createTextStream → streaming response
  2. Run slack run and mention the bot in Slack
  3. Watch the streaming response in Slack UI and observe the verbose DEBUG logs in terminal
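
The dispatch in step 1 can be modeled in miniature. MiniRouter below is invented for this sketch; in the real app, listeners in server/listeners/* register via Bolt's app.event("app_mention", ...) and Bolt itself does the routing. The sketch only illustrates how an incoming payload is matched to a handler by event type:

```typescript
// Toy model of Bolt-style event routing (MiniRouter is hypothetical).
type SlackPayload = { type: string; text: string };
type Handler = (payload: SlackPayload) => Promise<string>;

class MiniRouter {
  private handlers = new Map<string, Handler>();

  // Mirrors app.event(type, handler): register one listener per event type.
  event(type: string, handler: Handler): void {
    this.handlers.set(type, handler);
  }

  // Mirrors the "route event" step: match the payload's type to a listener.
  async route(payload: SlackPayload): Promise<string> {
    const handler = this.handlers.get(payload.type);
    if (!handler) throw new Error(`no listener for ${payload.type}`);
    return handler(payload);
  }
}
```

In the real app, the handler registered for app_mention is where context fetching and createTextStream kick in.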

Read the Code

Key files for tracing the event flow:

  • README.md
  • manifest.json
  • server/api/slack/events.post.ts
  • server/app.ts
  • server/listeners/*
  • server/lib/ai/*
  • scripts/*

How a Slack message travels

┌─────────────────────────────────────────────────────────────────┐
│                    Slack Event Flow                             │
└─────────────────────────────────────────────────────────────────┘

User sends message in Slack
        ↓
┌─────────────────┐    HTTP POST    ┌─────────────────┐
│     Slack       │ ──────────────→ │  events.post.ts │
│   Platform      │                 │  (Nitro route)  │
└─────────────────┘                 └─────────────────┘
                                            ↓ toWebRequest()
                                    ┌─────────────────┐
                                    │ VercelReceiver  │
                                    │   (Bolt)        │
                                    └─────────────────┘
                                            ↓ route event
                                    ┌─────────────────┐
                                    │ Event Listeners │
                                    │  (app-mention,  │
                                    │ direct-message) │
                                    └─────────────────┘
                                            ↓ fetch context
                                    ┌─────────────────┐
                                    │ createTextStream│
                                    │ (AI + tools)    │
                                    └─────────────────┘
                                            ↓ streamText()
                                    ┌─────────────────┐
                                    │client.chatStream│
                                    │(Slack streaming)│
                                    └─────────────────┘
                                            ↓ for await...append
                                    ┌─────────────────┐
                                    │ Streamed Reply  │
                                    │ + feedback block│
                                    └─────────────────┘

Try It

  1. Watch the streaming flow in action:

    slack run

    Keep the process alive. Mention the bot in Slack: @bot what's up?

    In Slack: Watch the response appear word-by-word as chunks stream in. This is client.chatStream() delivering real-time updates.

    In your terminal: You'll see verbose DEBUG logs like this:

    [DEBUG]  bolt-app app_mention event received: {"type":"app_mention","user":"U09TJB25XQT",...}
    [DEBUG]  web-api:WebClient:0 apiCall('assistant.threads.setStatus') start
    [DEBUG]  web-api:WebClient:0 apiCall('conversations.replies') start
    [DEBUG]  bolt-app Active tools: Set(0) {}
    [DEBUG]  web-api:WebClient:1 ChatStreamer appended to buffer: {"bufferLength":2,...}
    [DEBUG]  web-api:WebClient:1 ChatStreamer appended to buffer: {"bufferLength":8,...}
    [DEBUG]  web-api:WebClient:1 apiCall('chat.startStream') start
    [DEBUG]  web-api:WebClient:1 apiCall('chat.stopStream') start
    
  2. Observe the flow (even if it's noisy):

    • The first log shows the app_mention event with its full payload
    • assistant.threads.setStatus is the "is typing..." indicator
    • conversations.replies fetches thread context
    • ChatStreamer appended to buffer lines show text chunks accumulating
    • chat.startStream sends the first chunk to Slack
    • chat.stopStream completes the message with feedback blocks
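
The buffer/start/stop sequence in those logs suggests a streamer that accumulates chunks and brackets them with chat.startStream / chat.stopStream. Here is a hypothetical, heavily simplified model of that behavior — BufferingStreamer and its api parameter are invented for illustration, and the real ChatStreamer's internals will differ — but the observable sequence matches the logs:

```typescript
// Hypothetical model of the buffering seen in the DEBUG logs.
class BufferingStreamer {
  private buffer: string[] = [];
  private started = false;

  constructor(private api: { call: (method: string) => void }) {}

  append(chunk: string): void {
    this.buffer.push(chunk);
    // Corresponds to: ChatStreamer appended to buffer: {"bufferLength":N}
    console.debug(`appended to buffer: {"bufferLength":${this.buffer.length}}`);
    if (!this.started) {
      this.api.call("chat.startStream"); // first flush opens the stream
      this.started = true;
    }
  }

  stop(): string {
    this.api.call("chat.stopStream"); // completes the message
    return this.buffer.join("");
  }
}
```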

Commit

git add -A
git commit -m "docs(architecture): understand event flow and streaming architecture
 
- Trace Slack events from HTTP entry to streaming AI response
- Observe verbose DEBUG logs and identify pain points
- Map file responsibilities across the codebase"

Done-When

  • Can trace an event from events.post.ts → app.ts → listener → createTextStream → streaming response
  • Observed streaming behavior in both Slack UI and terminal logs
  • Understand the role of each major directory and file
  • Recognize that logging needs improvement (sets up SideQuest motivation)