  • Custom Class Serialization in Workflow SDK

    Workflow SDK now supports custom class serialization, letting you pass your own class instances between workflow and step functions.

    Workflow SDK serializes standard JavaScript types like primitives, objects, arrays, Date, Map, Set, and more. Custom class instances were previously not supported because the serialization system didn't know how to reconstruct them. With the new @workflow/serde package, you can define how your classes are serialized and deserialized by implementing two static methods using WORKFLOW_SERIALIZE and WORKFLOW_DESERIALIZE.
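    As a minimal illustration of the contract, here is a dependency-free sketch. The real symbols come from @workflow/serde; local Symbol.for stand-ins are defined here only so the example runs on its own, and the Point class is invented for the example.

    ```typescript
    // Stand-ins for the symbols exported by @workflow/serde, defined locally
    // so this sketch runs without the package installed.
    const WORKFLOW_SERIALIZE = Symbol.for("workflow.serialize");
    const WORKFLOW_DESERIALIZE = Symbol.for("workflow.deserialize");

    interface SerializedPoint {
      x: number;
      y: number;
    }

    class Point {
      constructor(public x: number, public y: number) {}

      distanceFromOrigin(): number {
        return Math.hypot(this.x, this.y);
      }

      // Convert the instance to plain, JSON-safe data.
      static [WORKFLOW_SERIALIZE](instance: Point): SerializedPoint {
        return { x: instance.x, y: instance.y };
      }

      // Rebuild a full instance (methods included) from that data.
      static [WORKFLOW_DESERIALIZE](data: SerializedPoint): Point {
        return new Point(data.x, data.y);
      }
    }

    // Simulate a round trip across a step boundary.
    const plain = Point[WORKFLOW_SERIALIZE](new Point(3, 4));
    const revived = Point[WORKFLOW_DESERIALIZE](JSON.parse(JSON.stringify(plain)));
    console.log(revived.distanceFromOrigin()); // 5
    ```

    The key property is that the serialized form is plain data, while deserialization returns a full instance with its methods intact.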

    Here's an example of how we used custom serialization in @vercel/sandbox to greatly improve DX:

    import { WORKFLOW_SERIALIZE, WORKFLOW_DESERIALIZE } from "@workflow/serde";

    interface SerializedSandbox {
      metadata: SandboxSnapshot;
      routes: SandboxRouteData[];
    }

    export class Sandbox {
      sandbox: SandboxSnapshot;
      routes: SandboxRouteData[];

      // Serialize a Sandbox instance to plain data
      static [WORKFLOW_SERIALIZE](instance: Sandbox): SerializedSandbox {
        return {
          metadata: instance.sandbox,
          routes: instance.routes,
        };
      }

      // Deserialize a Sandbox from serialized data
      static [WORKFLOW_DESERIALIZE](data: SerializedSandbox): Sandbox {
        return new Sandbox({
          sandbox: data.metadata,
          routes: data.routes,
        });
      }

      // Instance methods that require Node.js APIs are marked with
      // "use step" to allow them to be called from the workflow
      async runCommand(
        commandOrParams: string | RunCommandParams,
        args?: string[],
        opts?: { signal?: AbortSignal },
      ): Promise<Command | CommandFinished> {
        "use step";
        // ...
      }
    }

    Example of how `@vercel/sandbox` implements workflow custom class serialization

    Once implemented, instances of your class can be passed as arguments and return values between workflow and step functions, with the serialization system handling conversion automatically.

    workflow/code-runner.ts
    export async function runCode(prompt: string) {
      "use workflow";
      // Sandbox.create() has "use step" built in, so it runs as a
      // durable step. The returned Sandbox instance is automatically
      // serialized via WORKFLOW_SERIALIZE when it crosses the step boundary.
      const sandbox = await Sandbox.create({
        resources: { vcpus: 1 },
        timeout: 5 * 60 * 1000,
        runtime: "node22",
      });

      // Each Sandbox instance method (writeFiles, runCommand, stop, etc.)
      // also has "use step" built in, so every call below is its own
      // durable step, and the sandbox object is automatically rehydrated
      // via WORKFLOW_DESERIALIZE at each step boundary.
      const code = 'console.log("Hello World")';
      await sandbox.writeFiles([{ path: "script.js", content: code }]);
      const finished = await sandbox.runCommand("node", ["script.js"]);
      // ...
    }

    Example usage of the serialized `Sandbox` class within Workflow DevKit

    Nathan Rajlich, John Lindquist

  • Qwen 3.6 Plus on AI Gateway

    Qwen 3.6 Plus from Alibaba is now available on Vercel AI Gateway.

    Compared to Qwen 3.5 Plus, this model adds stronger agentic coding capabilities, from frontend development to repository-level problem solving, along with improved multimodal perception and reasoning. It features a 1M context window and improved performance on tool-calling, long-horizon planning, and multilingual tasks.

    To use Qwen 3.6 Plus, set model to alibaba/qwen3.6-plus in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'alibaba/qwen3.6-plus',
      prompt:
        `Refactor this module to separate concerns, update
        the imports across the repo, and verify nothing breaks
        with the existing test suite.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
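    The retry-and-failover behavior described above can be sketched as follows. This is a dependency-free illustration of the idea, not AI Gateway's actual implementation or API; withFailover, CallModel, and the fake providers are all invented for the example.

    ```typescript
    // Illustrative only: ordered failover with per-provider retries,
    // the behavior AI Gateway handles for you behind a single API.
    type CallModel = (prompt: string) => Promise<string>;

    async function withFailover(
      providers: { name: string; call: CallModel }[],
      prompt: string,
      retriesPerProvider = 2,
    ): Promise<string> {
      let lastError: unknown;
      for (const provider of providers) {
        for (let attempt = 0; attempt <= retriesPerProvider; attempt++) {
          try {
            return await provider.call(prompt);
          } catch (err) {
            lastError = err; // retry this provider, then move to the next
          }
        }
      }
      throw lastError;
    }

    // Fake providers: the first always fails, the second answers.
    const flaky: CallModel = async () => { throw new Error("upstream 503"); };
    const healthy: CallModel = async (p) => `echo: ${p}`;

    withFailover(
      [{ name: "primary", call: flaky }, { name: "fallback", call: healthy }],
      "hello",
    ).then(console.log); // "echo: hello"
    ```

    With the Gateway, this logic lives server-side, so application code makes one call and failover happens transparently.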

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Gemma 4 on AI Gateway

    Gemma 4 26B (MoE) and 31B (Dense) from Google are now available on Vercel AI Gateway.

    Built on the same architecture as Gemini 3, both open models support function-calling, agentic workflows, structured JSON output, and system instructions. Both support up to 256K context, 140+ languages, and native vision.

    • 26B (MoE): Activates only 3.8B of its 26B total parameters during inference, optimized for lower latency and higher tokens-per-second throughput.

    • 31B (Dense): All parameters are active during inference, targeting higher output quality. Better suited as a foundation for fine-tuning.

    To use Gemma 4, set model to google/gemma-4-31b-it or google/gemma-4-26b-a4b-it in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'google/gemma-4-26b-a4b-it',
      // or 'google/gemma-4-31b-it'
      prompt:
        `Break down this codebase into modules, identify circular
        dependencies, and generate a refactoring plan with
        implementation steps.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Zero-configuration Go backend support


    Go API backends can now be deployed on Vercel with zero-configuration deployment.

    main.go
    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        port := os.Getenv("PORT")
        if port == "" {
            port = "3000"
        }
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "Hello World")
        })
        addr := ":" + port
        fmt.Printf("Listening on %s\n", addr)
        http.ListenAndServe(addr, nil)
    }

    Vercel now recognizes Go servers as first-class backends, automatically provisioning the right resources and configuring your application without rewrites in vercel.json or the /api folder convention.

    Backends on Vercel use Fluid compute with Active CPU pricing by default. Your Go API scales automatically with traffic, and you pay only for active CPU time rather than idle capacity.

  • Chat SDK adds Zernio support

    Chat SDK now supports Zernio, a unified social media API, with the new Zernio adapter. This is an official vendor adapter built and maintained by the Zernio team.

    Teams can build bots that work across Instagram, Facebook, Telegram, WhatsApp, X/Twitter, Bluesky, and Reddit through a single integration.

    Try the Zernio adapter today:

    import { Chat } from "chat";
    import { createZernioAdapter } from "@zernio/chat-sdk-adapter";

    const bot = new Chat({
      adapters: {
        zernio: createZernioAdapter(),
      },
    });

    bot.onNewMention(async (thread, message) => {
      const platform = message.raw.platform;
      await thread.post(`Hello from ${platform}!`);
    });

    Feature support varies by platform; rich cards work on Facebook, Instagram, Telegram, and WhatsApp, while editing and streaming are currently limited to Telegram.
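    That per-platform gating can be handled in bot code with a simple capability table. A hypothetical sketch: the platform names come from the support list above, but the table and the replyKind helper are invented for illustration and are not part of Chat SDK's API.

    ```typescript
    // Hypothetical capability table; names are illustrative, not Chat SDK's API.
    const richCardPlatforms = new Set(["facebook", "instagram", "telegram", "whatsapp"]);
    const streamingPlatforms = new Set(["telegram"]);

    // Pick the richest reply style the current platform supports.
    function replyKind(platform: string): "stream" | "rich-card" | "plain" {
      if (streamingPlatforms.has(platform)) return "stream";
      if (richCardPlatforms.has(platform)) return "rich-card";
      return "plain"; // e.g. x, bluesky, reddit
    }

    console.log(replyKind("telegram")); // "stream"
    console.log(replyKind("reddit"));   // "plain"
    ```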

    Read the documentation to get started, browse the directory, or build your own adapter.

  • GLM 5V Turbo on AI Gateway

    GLM 5V Turbo from Z.ai is now available on Vercel AI Gateway.

    GLM 5V Turbo is a multimodal coding model that turns screenshots and designs into code, debugs visually, and operates GUIs autonomously. It's strong at design-to-code generation, visual code generation, and navigating real GUI environments, at a smaller parameter size than comparable models.

    To use GLM 5V Turbo, set model to zai/glm-5v-turbo in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'zai/glm-5v-turbo',
      prompt:
        `Recreate this screenshot as a responsive React component
        with Tailwind CSS and match the layout exactly.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in custom reporting, observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard or try it in our model playground.

  • Transfer Marketplace resources between teams


    You can now transfer Marketplace resources between teams directly from the Vercel dashboard without relying on the API. This simplifies resource management during team or project changes. Both owner and member roles on the source and destination teams can initiate transfers.

    The destination team must have the corresponding integration installed before receiving a resource. The feature currently supports transferring databases from Prisma, Neon, and Supabase, with additional providers and product support coming soon.

    Start from your database settings in the dashboard, or learn more in the documentation.

    Tony Pan, Hedi Zandi

  • Axios package compromise and remediation steps

    The axios npm package was compromised in an active supply chain attack discovered on March 31, 2026. Vercel investigated this issue and implemented remediation actions to protect the platform. No Vercel systems were affected.

    The npm registry removed the compromised package versions, and the latest tag now points to the safe axios@1.14.0 release.

    • We’ve blocked outgoing access from our build infrastructure to the Command & Control hostname sfrclak.com.

    • The malicious version of the package has been blocked and unpublished from npm.

    • Vercel’s own infrastructure and applications have been unaffected. We recommend checking your supply chain for exposure.

    Affected versions

    Projects using axios@1.14.1 or axios@0.30.4 in their build environments are affected by this vulnerability.

    Resolution

    If your deployments used the malicious package version listed above in your build environment, take the following actions:

    • Search your lockfiles and node_modules for plain-crypto-js to identify compromised installations

    • Redeploy your project to ensure your build uses a clean version of axios

    • Rotate API keys, database credentials, tokens, and any other sensitive values present in your build environment

    • Review your dependency tree for references to axios@1.14.1 or axios@0.30.4 and update them to axios@1.14.0
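
    The lockfile search in the first step might look like the following. The exact paths and grep flags are illustrative; adjust them for your lockfile format and repository layout.

    ```shell
    # Search lockfiles and node_modules for the malicious transitive
    # dependency; the fallback message means no match was found.
    grep -rn "plain-crypto-js" \
      package-lock.json pnpm-lock.yaml yarn.lock node_modules 2>/dev/null \
      || echo "plain-crypto-js not found"

    # List which axios versions your dependency tree currently resolves to.
    npm ls axios || true
    ```

    If the search returns any matches, treat the build environment as compromised and proceed with the credential rotation steps above.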