You can now access GLM-5 via AI Gateway with no other provider accounts required.
GLM-5 from Z.AI is now available on AI Gateway. Compared to GLM-4.7, GLM-5 adds multiple thinking modes, improved long-range planning and memory, and better handling of complex multi-step agent tasks. It's particularly strong at agentic coding, autonomous tool use, and extracting structured data from documents like contracts and financial reports.
To use this model, set `model` to `zai/glm-5` in the AI SDK:
```ts
import { streamText } from 'ai';

const result = streamText({
  model: 'zai/glm-5',
  prompt: `Generate a complete REST API with authentication,
database models, and test coverage for a task management app.`,
});
```
AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.
Vercel Sandbox can now enforce egress network policies through Server Name Indication (SNI) filtering and CIDR blocks, giving you control over which hosts a sandbox can reach. Outbound TLS connections are matched against your policy at the handshake; unauthorized destinations are rejected before any data is transmitted.
By default, sandboxes have unrestricted internet access. When running untrusted or AI-generated code, you can lock down the network to only the services your workload actually needs. A compromised or hallucinated code snippet cannot exfiltrate data or make unintended API calls; traffic to any domain not on your allowlist is blocked.
The modern internet runs on hostnames, not IP addresses: a handful of addresses can serve thousands of domains. Traditional IP-based firewall rules can't precisely distinguish between them.
Host-based egress control typically requires an HTTP proxy, but that breaks non-HTTP protocols like Redis and Postgres. Instead, we built an SNI-peeking firewall that inspects the initial unencrypted bytes of a TLS handshake to extract the target hostname. Since nearly all internet traffic is TLS-encrypted today, this covers all relevant cases. For legacy or non-TLS systems, we also support IP/CIDR-based rules as a fallback.
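To make the matching concrete, here is a minimal sketch of how a policy check of this shape might work. The policy type and function names are illustrative assumptions, not the actual Sandbox API: the hostname would come from the SNI field of a TLS ClientHello, and the IP path is the non-TLS fallback.

```typescript
// Illustrative only: a simplified egress-policy matcher.
// The `EgressPolicy` shape and helper names are assumptions for this
// sketch, not part of @vercel/sandbox.

type EgressPolicy = {
  allowedHosts: string[]; // exact names or wildcard patterns like "*.vercel.com"
  allowedCidrs: string[]; // IPv4 CIDR blocks like "10.0.0.0/8"
};

function hostAllowed(host: string, policy: EgressPolicy): boolean {
  return policy.allowedHosts.some((pattern) =>
    pattern.startsWith('*.')
      ? host.endsWith(pattern.slice(1)) // "*.vercel.com" matches "api.vercel.com"
      : host === pattern
  );
}

// Convert a dotted-quad IPv4 address to a 32-bit unsigned integer.
function ipToInt(ip: string): number {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

function ipAllowed(ip: string, policy: EgressPolicy): boolean {
  return policy.allowedCidrs.some((cidr) => {
    const [base, bits] = cidr.split('/');
    // Build the network mask; "/0" means match everything.
    const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
    return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
  });
}
```

In the real firewall the hostname check happens at the TLS handshake, before any application data flows, which is what allows non-HTTP protocols to be filtered without a proxy.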
Policies can be updated dynamically on a running sandbox without restarting the process. Start with full internet access to install dependencies, lock it down before executing untrusted code, reopen to stream results after user approval, and then air-gap again with a deny-all policy, all in one session:
```ts
import { Sandbox } from '@vercel/sandbox';

const sandbox = await Sandbox.create();

// Phase 1: Open network, download everything we need
```
Vercel Flags is a feature flag provider built into the Vercel platform. It lets you create and manage feature flags with targeting rules, user segments, and environment controls directly in the Vercel Dashboard.
The Flags SDK provides a framework-native way to define and use these flags within Next.js and SvelteKit applications, integrating directly with your existing codebase:
flags.ts

```ts
import { vercelAdapter } from '@flags-sdk/vercel';
import { flag } from 'flags/next';

export const showNewFeature = flag({
  key: 'show-new-feature',
  decide: () => false,
  description: 'Show the new dashboard redesign',
  adapter: vercelAdapter(),
});
```
You can then use them within your pages:
app/page.tsx

```tsx
import { showNewFeature } from '~/flags';

export default async function Page() {
  const isEnabled = await showNewFeature();
  return isEnabled ? <NewDashboard /> : <OldDashboard />;
}
```
For teams using other frameworks or custom backends, the Vercel Flags adapter supports the OpenFeature standard, allowing you to combine feature flags across various systems and maintain consistency in your flag management approach.
Vercel Flags is priced at $30 per 1 million flag requests ($0.00003 per event), where a flag request is any request to your application that reads the underlying flags configuration. A single request evaluating multiple feature flags of the same source project still counts as one flag request.
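As a quick sanity check of that pricing math (the helper below is purely illustrative, not part of any SDK):

```typescript
// $30 per 1M flag requests works out to $0.00003 per request.
const PRICE_PER_MILLION_USD = 30;

// Hypothetical helper: estimated cost for a given number of flag
// requests. Each incoming request counts once, regardless of how many
// flags from the same project it evaluates.
function flagRequestCost(requests: number): number {
  return (requests / 1_000_000) * PRICE_PER_MILLION_USD;
}
```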
Vercel Flags is now in beta and available to teams on all plans.
The login experience now supports Sign in with Apple, enabling faster access for users with Apple accounts.
If your Apple account uses an Apple email (@icloud.com, @mac.com, @me.com, etc.) that matches your Vercel account's email, you can use the Apple button from the login screen and your accounts will be automatically linked.
If the emails don't match, you can manually connect your Apple account from your account settings once logged in.
The `vercel logs` command has been rebuilt with more powerful querying capabilities, designed with agent workflows in mind. You can now query historical logs across your projects and filter by specific criteria, such as project, deployment ID, request ID, and arbitrary strings, to find exactly what you need.
The updated command uses git context by default, automatically scoping logs to your current repository when run from a project directory. This makes it easy to debug issues during development without manually specifying project details.
Whether you're debugging a production issue or building automated monitoring workflows, the enhanced filtering gives you precise control over log retrieval across your Vercel projects.
Agents can now access runtime logs through Vercel's MCP server.
The get_runtime_logs tool lets agents retrieve Runtime Logs for a project or deployment. Runtime logs include logs generated by Vercel Functions invocations in preview and production deployments, including function output and console.log messages.
- Toggle features in real time for specific users or cohorts
- Roll out changes gradually using percentage-based rollouts
- Run A/B tests to validate impact before a full release
This integration helps teams building on Vercel ship with more confidence. You can test in production, reduce release risk, and make data-driven decisions based on real user behavior, all within your existing Vercel workflows.
Create a flags.ts file with an identify function and a flag check:
flags.ts

```ts
import { postHogAdapter } from '@flags-sdk/posthog';
import { flag, dedupe } from 'flags/next';
import type { Identify } from 'flags';

export const identify = dedupe(async () => ({
  distinctId: 'user_distinct_id', // replace with real user ID
})) satisfies Identify<{ distinctId: string }>;

export const myFlag = flag({
  key: 'my-flag',
  adapter: postHogAdapter.isFeatureEnabled(),
  identify,
});
```
Check out the PostHog template to learn more about this integration.
When Vercel API credentials are accidentally committed to public GitHub repositories, gists, and npm packages, Vercel now automatically revokes them to protect your account from unauthorized access.
When exposed credentials are detected, you'll receive notifications and can review any discovered tokens and API keys in your dashboard. This detection is powered by GitHub secret scanning and brings an extra layer of security to all Vercel and v0 users.
As part of this change, we've also updated token and API key formats to make them visually identifiable. Each credential type now includes a prefix: