
The Complete Guide to Chat SDK

Chat SDK is a TypeScript library for building chat bots that work across Slack, Teams, Discord, Linear, and more from a single codebase. Learn how it works.

7 min read
Last updated April 29, 2026

Chat SDK is the universal chat layer for building bots and agents.

With this open-source TypeScript SDK, you can build chat bots that work across multiple platforms from a single codebase. You write your bot logic once and deploy it to Slack, Microsoft Teams, Google Chat, Discord, Telegram, GitHub, Linear, WhatsApp, and other platforms (for example, email via Resend).

In this guide, you'll learn:

  • What Chat SDK is and the problem it solves
  • The three core concepts: Chat, adapters, and state
  • How to build your first bot in a few lines of code
  • How to handle messages, streaming AI responses, and interactive UI
  • How to manage concurrency when messages arrive faster than your handler can process them
  • How to choose the right deployment pattern for production

Building a bot that works across multiple chat platforms usually means maintaining separate codebases, learning different APIs, and handling platform-specific quirks individually. Chat SDK hides those differences behind a unified interface, type-safe adapters, and an event-driven architecture.

A single handler written against Chat SDK fires for mentions on any connected platform:

bot.ts
bot.onNewMention(async (thread) => {
  await thread.subscribe();
  await thread.post("Hello! I'm listening to this thread.");
});

The same code runs whether the mention came from Slack, Teams, Discord, or Linear. The adapter for each platform handles webhook verification, message parsing, and API calls.

Chat SDK has three core concepts:

  • Chat is the main entry point. It coordinates adapters and routes events to your handlers.
  • Adapters are platform-specific implementations. Each one handles webhook parsing, message formatting, and API calls for a single platform.
  • State is a pluggable persistence layer for thread subscriptions and distributed locking.

When a webhook arrives, the Chat class identifies which adapter should handle it, parses the event into a platform-neutral shape, and dispatches it to your registered handlers. Any response you post goes back through the adapter, which converts your message into the platform's native format.

bot.ts
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";

const bot = new Chat({
  userName: "mybot",
  adapters: {
    slack: createSlackAdapter(),
  },
  state: createRedisState(),
});

bot.onNewMention(async (thread) => {
  await thread.subscribe();
  await thread.post("Hello! I'm listening to this thread.");
});

Each adapter factory auto-detects credentials from environment variables such as SLACK_BOT_TOKEN, SLACK_SIGNING_SECRET, and REDIS_URL, so you can run a bot with zero explicit config in most setups.
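For example, a Slack bot backed by Redis can run with no explicit configuration at all, as long as these variables are set (values shown are placeholders):

```
SLACK_BOT_TOKEN=<your-bot-token>
SLACK_SIGNING_SECRET=<your-signing-secret>
REDIS_URL=redis://localhost:6379
```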

Adapters published under @chat-adapter/* and maintained by Vercel cover the most common platforms:

  • Slack (@chat-adapter/slack): threads, reactions, interactive cards, modals, native streaming, Assistants API
  • Microsoft Teams (@chat-adapter/teams): adaptive cards, mentions, conversation threading
  • Google Chat (@chat-adapter/gchat): spaces, threads, Workspace Events via Pub/Sub
  • Discord (@chat-adapter/discord): slash commands, threads, rich embeds
  • GitHub (@chat-adapter/github): pull request and issue comment threads
  • Linear (@chat-adapter/linear): issue comment threads and app-actor agent sessions
  • Telegram and WhatsApp (@chat-adapter/*): channel-scoped conversations, groups, inline keyboards

Community adapters extend this list to Matrix, Mattermost, Webex, Zalo, email (via Resend), and others. Each adapter page documents its authentication options, supported features, and known limitations. For example, the Linear adapter distinguishes between mode: "comments" for issue comment webhooks and mode: "agent-sessions" for Linear app-actor installs.
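Selecting the Linear mode happens at construction time. A sketch based on the option names above (check the adapter's page for the exact signature):

```
import { createLinearAdapter } from "@chat-adapter/linear";

// "comments" reacts to issue comment webhooks;
// "agent-sessions" is for Linear app-actor installs.
const linear = createLinearAdapter({ mode: "agent-sessions" });
```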

To target more than one platform, register multiple adapters in the same Chat instance:

bot.ts
const bot = new Chat({
  userName: "mybot",
  adapters: {
    slack: createSlackAdapter(),
    teams: createTeamsAdapter(),
    gchat: createGoogleChatAdapter(),
  },
  state: createRedisState(),
});

The same onNewMention handler now fires for mentions on all three.

thread.post() accepts several message formats. Pick the one that matches the content you're sending:

  • Plain string for short replies. The text goes through as-is.
  • Markdown with { markdown: "**Bold** text" }. The SDK parses the markdown into an mdast AST, then each adapter converts it to the platform's format (mrkdwn for Slack, HTML for Teams, and so on).
  • AST built with exported helpers like root, paragraph, text, and link. This gives you programmatic control without the overhead of cards.
  • Cards for interactive UI with buttons, dropdowns, and structured layouts.
  • Streams for real-time AI responses.

bot.ts
import { root, paragraph, text, strong, link } from "chat";

// Plain text
await thread.post("Hello!");

// Markdown
await thread.post({ markdown: "**Deployment complete**" });

// AST
await thread.post({
  ast: root([
    paragraph([
      strong([text("Deployment complete")]),
      text(" — "),
      link("https://example.com", [text("View site")]),
    ]),
  ]),
});

For most cases, the AST builders give the best balance of control and simplicity.

Chat SDK accepts any AsyncIterable<string> as a message. You can pass an AI SDK stream directly to thread.post():

bot.ts
import { ToolLoopAgent } from "ai";

const agent = new ToolLoopAgent({
  model, // your configured AI SDK model
  instructions: "You are a helpful assistant.",
});

const result = await agent.stream({ prompt: message.text });
await thread.post(result.fullStream);

The SDK uses platform-native streaming where available (Slack) and falls back to post-then-edit on other platforms. Use fullStream with multi-step agents because it preserves paragraph breaks between steps. textStream concatenates text across tool calls, which can produce run-on output.
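The post-then-edit fallback can be pictured with a small self-contained sketch (an illustration of the idea, not the SDK's internals): accumulate chunks from the stream, post once on the first chunk, then keep overwriting that one message with the text so far.

```typescript
// Minimal sketch of a post-then-edit fallback (illustration only).
type Message = { id: number; text: string };

async function postStream(
  stream: AsyncIterable<string>,
  api: {
    post: (text: string) => Promise<Message>;
    edit: (id: number, text: string) => Promise<void>;
  },
): Promise<string> {
  let buffer = "";
  let msg: Message | null = null;
  for await (const chunk of stream) {
    buffer += chunk;
    if (msg === null) {
      msg = await api.post(buffer); // first chunk: create the message
    } else {
      await api.edit(msg.id, buffer); // later chunks: edit in place
    }
  }
  return buffer;
}

// Fake stream for demonstration.
async function* demoStream() {
  yield "Deploy ";
  yield "complete.";
}
```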

For multi-turn conversations, use toAiMessages() to convert thread history into the { role, content }[] format that AI SDKs expect:

bot.ts
import { toAiMessages } from "chat";

bot.onSubscribedMessage(async (thread, message) => {
  const result = await thread.adapter.fetchMessages(thread.id, { limit: 20 });
  const history = await toAiMessages(result.messages);
  const response = await agent.stream({ prompt: history });
  await thread.post(response.fullStream);
});
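Conceptually, the conversion maps each thread message to a role and content pair. A simplified sketch of the shape of that mapping (not the SDK's implementation, which covers more message details):

```typescript
// Simplified sketch: thread history -> AI SDK message shape.
type ThreadMessage = { text: string; isBot: boolean };
type AiMessage = { role: "user" | "assistant"; content: string };

function toSimpleAiMessages(messages: ThreadMessage[]): AiMessage[] {
  return messages.map((m) => ({
    role: m.isBot ? "assistant" : "user", // the bot's own replies become assistant turns
    content: m.text,
  }));
}
```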

The SDK also buffers potential GFM tables during streaming so they don't flash as raw pipe-delimited text before the structure is complete.
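The idea behind that buffering can be sketched in a few lines (an illustration of the technique, not the SDK's code): hold back lines that look like table rows until a non-table line signals the table is complete, then flush them as one unit.

```typescript
// Sketch: buffer potential GFM table rows, emit each table as a single unit.
function bufferTables(lines: string[]): string[][] {
  const out: string[][] = []; // each entry is emitted at once
  let tableBuf: string[] = [];
  for (const line of lines) {
    if (line.trimStart().startsWith("|")) {
      tableBuf.push(line); // hold back: might be mid-table
    } else {
      if (tableBuf.length > 0) {
        out.push(tableBuf); // table finished: flush the whole block
        tableBuf = [];
      }
      out.push([line]);
    }
  }
  if (tableBuf.length > 0) out.push(tableBuf); // flush a trailing table
  return out;
}
```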

For buttons, dropdowns, and structured layouts, use cards. Register handlers with onAction to respond when a user clicks a button:

bot.ts
bot.onAction("approve", async (event) => {
  await event.thread.post(`Order approved by ${event.user.fullName}!`);
});

Modals open form dialogs in response to button clicks or slash commands. They support text inputs, dropdowns, radio buttons, and server-side validation. Modals are currently supported on Slack:

bot.ts
import { Modal, TextInput, Select, SelectOption } from "chat";

bot.onAction("feedback", async (event) => {
  await event.openModal(
    <Modal callbackId="feedback_form" title="Send Feedback" submitLabel="Send">
      <TextInput id="message" label="Your Feedback" multiline />
      <Select id="category" label="Category">
        <SelectOption label="Bug" value="bug" />
        <SelectOption label="Feature" value="feature" />
      </Select>
    </Modal>
  );
});

JSX card syntax requires jsxImportSource: "chat" in your tsconfig.json. If the types don't resolve, use the function-call syntax (Modal({...})) instead, which produces the same output.

In production, a single user can send messages faster than your handler runs, especially on platforms like WhatsApp and Telegram where short, rapid messages are normal. When multiple messages arrive on the same thread while a handler is still processing, the SDK needs a strategy. Chat SDK offers four:

  • drop (default): discards new messages while a handler is running and throws a LockError. Use for bots where losing rapid-fire duplicates is acceptable.
  • queue: enqueues incoming messages, then processes only the latest when the current handler finishes, passing intermediate messages as context.skipped. Use when you want to acknowledge every message but respond once.
  • debounce: waits for a pause in the conversation, then processes only the final message. Use for WhatsApp, Telegram, or any chat where users send bursts of short messages.
  • concurrent: no locking; every message runs in its own handler invocation. Use for stateless handlers where thread ordering doesn't matter.

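The debounce strategy is the easiest to picture with a self-contained sketch (not the SDK's implementation): each new message resets a timer, and the handler runs only for the last message once the burst goes quiet.

```typescript
// Sketch of a per-thread debounce: only the final message in a burst is handled.
function createDebouncer(handler: (text: string) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (text: string) => {
    if (timer !== undefined) clearTimeout(timer); // a new message resets the pause
    timer = setTimeout(() => handler(text), waitMs);
  };
}
```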
Configure the strategy on the Chat instance:

bot.ts
const bot = new Chat({
  concurrency: "queue",
  lockScope: ({ isDM, adapter }) => (isDM ? "channel" : "thread"),
  // ...
});

bot.onNewMention(async (thread, message, context) => {
  if (context && context.skipped.length > 0) {
    await thread.post(
      `You sent ${context.totalSinceLastHandler} messages while I was working. Responding to your latest.`
    );
  }
  const response = await generateAIResponse(message.text);
  await thread.post(response);
});

By default, locks are scoped to the thread. WhatsApp and Telegram adapters default to lockScope: "channel" because conversations happen at the channel level rather than in threads.

The state adapter handles two things: thread subscriptions (so onSubscribedMessage keeps firing after the initial mention) and distributed locking (so two serverless instances don't process the same webhook twice).

Available state adapters include:

  • @chat-adapter/state-memory for local development and tests
  • @chat-adapter/state-redis for production on standard Redis
  • @chat-adapter/state-ioredis for Redis Cluster or Sentinel deployments
  • A PostgreSQL state adapter for teams already running Postgres

Each thread also exposes typed, per-thread state with a 30-day TTL, which is useful for per-conversation preferences or in-flight workflow context:

bot.ts
// Read state
const state = await thread.state;

// Merge into existing state
await thread.setState({ aiMode: true });

// Replace state entirely
await thread.setState({ aiMode: false }, { replace: true });

For local development, the memory state adapter works fine. For anything deployed to serverless infrastructure, use Redis or Postgres so subscriptions survive cold starts and multiple instances don't race each other.
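The locking half of a state adapter boils down to an atomic "set if not already held, with expiry" operation (in Redis, SET with the NX and PX options). A minimal in-memory stand-in for that contract:

```typescript
// Sketch of the lock contract a state adapter fulfils (in-memory stand-in).
class MemoryLocks {
  private locks = new Map<string, number>(); // key -> expiry timestamp (ms)

  // Returns true only for the first caller, until release or expiry.
  acquire(key: string, ttlMs: number): boolean {
    const now = Date.now();
    const expiry = this.locks.get(key);
    if (expiry !== undefined && expiry > now) return false; // already held
    this.locks.set(key, now + ttlMs);
    return true;
  }

  release(key: string): void {
    this.locks.delete(key);
  }
}
```

The TTL matters on serverless: if an instance crashes mid-handler, the lock expires instead of blocking the thread forever.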

Chat SDK is the right choice when:

  • You need the same bot behavior across multiple chat platforms
  • You want type-safe event handlers instead of hand-rolled webhook parsers
  • You're integrating AI responses and want streaming to work on every platform
  • You're deploying to serverless and need distributed locking and message deduplication

It's probably the wrong choice when:

  • You only target one platform and are already comfortable with its SDK
  • Your bot is purely transactional with no threading, state, or streaming needs
  • You need a feature that isn't yet supported by the adapter for your target platform (check the adapter's page for partial-support indicators)

A typical production deployment pairs Chat SDK with a serverless framework and a Redis-backed state adapter:

  1. Create a webhook route per platform. Each adapter exposes a handler via bot.webhooks.slack, bot.webhooks.teams, and so on, which you wire into your framework's routing (Next.js route handlers, Hono, Nuxt server routes, etc.).
  2. Provision Redis for state. Any Redis-compatible store works. Set REDIS_URL and the Redis state adapter auto-detects it.
  3. Configure platform credentials as environment variables. Adapter factories pick them up automatically.
  4. Register your webhook URLs with each platform (for example, the request URL in your Slack app manifest, the messaging endpoint in Azure Bot Service for Teams).
  5. Handle concurrency explicitly. The default drop strategy is fine for low-traffic bots, but use queue or debounce if your users send messages in bursts.
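Step 1 in a Next.js App Router project could look roughly like this (a sketch; the exact export shape depends on your framework, and "@/lib/bot" stands in for wherever you construct your Chat instance):

```
app/api/slack/route.ts
import { bot } from "@/lib/bot";

// Each platform gets its own route; the adapter handles webhook verification.
export const POST = bot.webhooks.slack;
```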
