Chat SDK is the universal chat layer for building bots and agents.
With this open-source TypeScript SDK, you can build chat bots that work across multiple platforms from a single codebase. You write your bot logic once and deploy it to Slack, Microsoft Teams, Google Chat, Discord, Telegram, GitHub, Linear, WhatsApp, and other platforms (e.g., email via Resend).
In this guide, you'll learn:
- What Chat SDK is and the problem it solves
- The three core concepts: Chat, adapters, and state
- How to build your first bot in a few lines of code
- How to handle messages, streaming AI responses, and interactive UI
- How to manage concurrency when messages arrive faster than your handler can process them
- How to choose the right deployment pattern for production
Building a bot that works across multiple chat platforms usually means maintaining separate codebases, learning different APIs, and handling platform-specific quirks individually. Chat SDK hides those differences behind a unified interface, type-safe adapters, and an event-driven architecture.
A single handler written against Chat SDK fires for mentions on any connected platform:
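A minimal sketch of such a handler; the import paths, factory names, and onNewMention signature here are illustrative assumptions, not a copy of the published API:

```typescript
import { Chat } from "chat";
import { slack } from "@chat-adapter/slack";
import { linear } from "@chat-adapter/linear";

// Adapter factories read their credentials from environment variables.
const bot = new Chat({ adapters: [slack(), linear()] });

// One handler, every connected platform.
bot.onNewMention(async (thread) => {
  await thread.post("Hi! How can I help?");
});
```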
The same code runs whether the mention came from Slack, Teams, Discord, or Linear. The adapter for each platform handles webhook verification, message parsing, and API calls.
Chat SDK has three core concepts:
- Chat is the main entry point. It coordinates adapters and routes events to your handlers.
- Adapters are platform-specific implementations. Each one handles webhook parsing, message formatting, and API calls for a single platform.
- State is a pluggable persistence layer for thread subscriptions and distributed locking.
When a webhook arrives, the Chat class identifies which adapter should handle it, parses the event into a platform-neutral shape, and dispatches it to your registered handlers. Any response you post goes back through the adapter, which converts your message into the platform's native format.
Each adapter factory auto-detects credentials from environment variables such as SLACK_BOT_TOKEN, SLACK_SIGNING_SECRET, and REDIS_URL, so you can run a bot with zero explicit config in most setups.
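For example, zero-config and explicit setup side by side (the explicit option names are assumptions for non-standard setups):

```typescript
import { slack } from "@chat-adapter/slack";

// Zero-config: the factory reads SLACK_BOT_TOKEN and
// SLACK_SIGNING_SECRET from the environment.
const fromEnv = slack();

// Explicit credentials when the defaults don't fit (option names assumed):
const explicit = slack({
  botToken: process.env.CUSTOM_SLACK_TOKEN,
  signingSecret: process.env.CUSTOM_SLACK_SECRET,
});
```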
Adapters published under @chat-adapter/* and maintained by Vercel cover the most common platforms:
| Platform | Package | Notable capabilities |
|---|---|---|
| Slack | @chat-adapter/slack | Threads, reactions, interactive cards, modals, native streaming, Assistants API |
| Microsoft Teams | @chat-adapter/teams | Adaptive cards, mentions, conversation threading |
| Google Chat | @chat-adapter/gchat | Spaces, threads, Workspace Events via Pub/Sub |
| Discord | @chat-adapter/discord | Slash commands, threads, rich embeds |
| GitHub | @chat-adapter/github | Pull request and issue comment threads |
| Linear | @chat-adapter/linear | Issue comment threads and app-actor agent sessions |
| Telegram, WhatsApp | @chat-adapter/* | Channel-scoped conversations, groups, inline keyboards |
Community adapters extend this list to Matrix, Mattermost, Webex, Zalo, email (via Resend), and others. Each adapter page documents its authentication options, supported features, and known limitations. For example, the Linear adapter distinguishes between mode: "comments" for issue comment webhooks and mode: "agent-sessions" for Linear app-actor installs.
To target more than one platform, register multiple adapters in the same Chat instance:
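A sketch with three adapters registered at once (factory names and the adapters option are illustrative assumptions):

```typescript
import { Chat } from "chat";
import { slack } from "@chat-adapter/slack";
import { teams } from "@chat-adapter/teams";
import { discord } from "@chat-adapter/discord";

// One Chat instance routes webhooks from all three platforms.
const bot = new Chat({
  adapters: [slack(), teams(), discord()],
});

bot.onNewMention(async (thread) => {
  await thread.post("Hello from a single codebase!");
});
```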
The same onNewMention handler now fires for mentions on all three.
thread.post() accepts several message formats. Pick the one that matches the content you're sending:
- Plain string for short replies. The text goes through as-is.
- Markdown with { markdown: "**Bold** text" }. The SDK parses the markdown into an mdast AST, then each adapter converts it to the platform's format (mrkdwn for Slack, HTML for Teams, and so on).
- AST built with exported helpers like root, paragraph, text, and link. This gives you programmatic control without the overhead of cards.
- Cards for interactive UI with buttons, dropdowns, and structured layouts.
- Streams for real-time AI responses.
For most cases, the AST builders give the best balance of control and simplicity.
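To make the formats concrete, a hedged side-by-side sketch (the helper import path and the exact link signature are assumptions, and the calls are presumed to run inside a handler where thread is in scope):

```typescript
import { root, paragraph, text, link } from "chat";

// Plain string: sent as-is.
await thread.post("Deploy finished.");

// Markdown: parsed to mdast, then converted per platform.
await thread.post({ markdown: "**Deploy finished** in 42s" });

// AST builders: programmatic control without hand-writing markdown.
await thread.post(
  root(
    paragraph(
      text("Deploy finished: "),
      link("https://example.com/run/1", text("view logs")),
    ),
  ),
);
```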
Chat SDK accepts any AsyncIterable<string> as a message. You can pass an AI SDK stream directly to thread.post():
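A sketch using the AI SDK's streamText helper (the handler name and message shape on the Chat SDK side are assumptions):

```typescript
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

bot.onSubscribedMessage(async (thread, message) => {
  const result = streamText({
    model: openai("gpt-4o-mini"),
    prompt: message.text,
  });

  // textStream is an AsyncIterable<string>, so it can be posted directly.
  await thread.post(result.textStream);
});
```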
The SDK uses platform-native streaming where available (Slack) and falls back to post-then-edit on other platforms. Use fullStream with multi-step agents because it preserves paragraph breaks between steps. textStream concatenates text across tool calls, which can produce run-on output.
For multi-turn conversations, use toAiMessages() to convert thread history into the { role, content }[] format that AI SDKs expect:
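A multi-turn sketch (whether toAiMessages lives on the thread and whether it is async are assumptions):

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

bot.onSubscribedMessage(async (thread) => {
  // Convert the thread's history into model-ready { role, content } messages.
  const messages = await thread.toAiMessages();

  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    messages,
  });

  await thread.post(text);
});
```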
The SDK also buffers potential GFM tables during streaming so they don't flash as raw pipe-delimited text before the structure is complete.
For buttons, dropdowns, and structured layouts, use cards. Register handlers with onAction to respond when a user clicks a button:
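A hedged sketch of a card with two buttons and an action handler; the card shape and the onAction signature are illustrative assumptions:

```typescript
// Post a card with buttons.
bot.onNewMention(async (thread) => {
  await thread.post({
    card: {
      text: "Deploy to production?",
      actions: [
        { id: "deploy", label: "Deploy" },
        { id: "cancel", label: "Cancel" },
      ],
    },
  });
});

// Respond when the "deploy" button is clicked.
bot.onAction("deploy", async (thread) => {
  await thread.post("Deploying now…");
});
```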
Modals open form dialogs in response to button clicks or slash commands. They support text inputs, dropdowns, radio buttons, and server-side validation. Modals are currently supported on Slack:
JSX card syntax requires jsxImportSource: "chat" in your tsconfig.json. If the types don't resolve, use the function-call syntax (Modal({...})) instead, which produces the same output.
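A function-call sketch of a modal opened from a button click (the component names and the openModal call are assumptions):

```typescript
import { Modal, TextInput } from "chat";

bot.onAction("feedback", async (thread, action) => {
  // Open a form dialog with a single text input.
  await action.openModal(
    Modal({
      title: "Send feedback",
      children: [TextInput({ id: "comments", label: "Your comments" })],
    }),
  );
});
```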
In production, a single user can send messages faster than your handler runs, especially on platforms like WhatsApp and Telegram where short, rapid messages are normal. When multiple messages arrive on the same thread while a handler is still processing, the SDK needs a strategy. Chat SDK offers four:
| Strategy | Behavior | When to use |
|---|---|---|
| drop (default) | Discard new messages while a handler is running and throw a LockError | Bots where losing rapid-fire duplicates is acceptable |
| queue | Enqueue incoming messages, then process only the latest when the current handler finishes; pass intermediate messages as context.skipped | You want to acknowledge every message but respond once |
| debounce | Wait for a pause in the conversation, then process only the final message | WhatsApp, Telegram, or any chat where users send bursts of short messages |
| concurrent | No locking; every message runs in its own handler invocation | Stateless handlers where thread ordering doesn't matter |
Configure the strategy on the Chat instance:
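For example (strategy names follow the table above; the option key itself is an assumption):

```typescript
import { Chat } from "chat";
import { telegram } from "@chat-adapter/telegram";

const bot = new Chat({
  adapters: [telegram()],
  // Wait for a pause in the burst, then handle only the final message.
  concurrency: "debounce",
});
```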
By default, locks are scoped to the thread. WhatsApp and Telegram adapters default to lockScope: "channel" because conversations happen at the channel level rather than in threads.
The state adapter handles two things: thread subscriptions (so onSubscribedMessage keeps firing after the initial mention) and distributed locking (so two serverless instances don't process the same webhook twice).
Available state adapters include:
- @chat-adapter/state-memory for local development and tests
- @chat-adapter/state-redis for production on standard Redis
- @chat-adapter/state-ioredis for Redis Cluster or Sentinel deployments
- A PostgreSQL state adapter for teams already running Postgres
Each thread also exposes typed, per-thread state with a 30-day TTL, which is useful for per-conversation preferences or in-flight workflow context:
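A sketch of reading and writing that state (the thread.state accessor and its get/set methods are assumptions):

```typescript
type Prefs = { locale: string; verbose: boolean };

bot.onSubscribedMessage(async (thread) => {
  // Read per-thread state, falling back to defaults on first contact.
  const prefs =
    (await thread.state.get<Prefs>()) ?? { locale: "en", verbose: false };

  // Write it back; the SDK persists it with a 30-day TTL.
  await thread.state.set({ ...prefs, verbose: true });
});
```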
For local development, the memory state adapter works fine. For anything deployed to serverless infrastructure, use Redis or Postgres so subscriptions survive cold starts and multiple instances don't race each other.
Chat SDK is the right choice when:
- You need the same bot behavior across multiple chat platforms
- You want type-safe event handlers instead of hand-rolled webhook parsers
- You're integrating AI responses and want streaming to work on every platform
- You're deploying to serverless and need distributed locking and message deduplication
It's probably the wrong choice when:
- You only target one platform and are already comfortable with its SDK
- Your bot is purely transactional with no threading, state, or streaming needs
- You need a feature that isn't yet supported by the adapter for your target platform (check the adapter's page for partial-support indicators)
A typical production deployment pairs Chat SDK with a serverless framework and a Redis-backed state adapter:
- Create a webhook route per platform. Each adapter exposes a handler via bot.webhooks.slack, bot.webhooks.teams, and so on, which you wire into your framework's routing (Next.js route handlers, Hono, Nuxt server routes, etc.).
- Provision Redis for state. Any Redis-compatible store works. Set REDIS_URL and the Redis state adapter auto-detects it.
- Configure platform credentials as environment variables. Adapter factories pick them up automatically.
- Register your webhook URLs with each platform (for example, the request URL in your Slack app manifest, the messaging endpoint in Azure Bot Service for Teams).
- Handle concurrency explicitly. The default drop strategy is fine for low-traffic bots, but use queue or debounce if your users send messages in bursts.
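Wiring the first step into a Next.js App Router route might look like this; the file layout and the exact handler export shape are assumptions:

```typescript
// app/api/webhooks/slack/route.ts (Next.js App Router)
import { bot } from "@/lib/bot"; // your configured Chat instance

// The adapter's webhook handler receives and verifies Slack's POSTs.
export const POST = bot.webhooks.slack;
```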