Capabilities

Last updated January 21, 2026

In addition to text generation, you can use AI Gateway to generate images, search the web, track requests with observability, monitor usage, and enforce data retention policies. These features work across providers through a unified API, so you don't need a separate integration for each one.

  • Visual content apps: Generate product images, marketing assets, or UI mockups with Image Generation
  • Research assistants: Give models access to current information with Web Search
  • Production dashboards: Monitor costs, latency, and usage across all your AI requests with Observability
  • Compliant applications: Meet data privacy requirements with Zero Data Retention
  • Usage tracking: Check credit balances and look up generation details with the Usage API

| Capability | What it does | Key features |
| --- | --- | --- |
| Image Generation | Create images from text prompts | Multi-provider support, edit existing images, multiple output formats |
| Web Search | Access real-time web information | Perplexity search for any model, native provider search tools |
| Observability | Monitor and debug AI requests | Request traces, token counts, latency metrics, spend tracking |
| Zero Data Retention | Ensure data privacy compliance | Default ZDR policy, per-request enforcement, provider agreements |
| Usage & Billing | Track credits and generations | Credit balance API, generation lookup, cost tracking |

Generate images using AI models through a single API. Requests route to the best available provider, with authentication and response formatting handled automatically.

import { gateway } from '@ai-sdk/gateway';
import { experimental_generateImage as generateImage } from 'ai';
 
const { image } = await generateImage({
  model: gateway.imageModel('openai/dall-e-3'),
  prompt: 'A serene mountain landscape at sunset',
});

Supported providers include OpenAI (DALL-E), Google (Imagen), and multimodal LLMs with image capabilities. See the Image Generation docs for implementation details.
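
As a usage sketch, the image returned by generateImage can be written straight to disk. This assumes the AI SDK's generated file shape, which exposes the raw bytes as image.uint8Array (and a base64 string via image.base64).

import { writeFile } from 'node:fs/promises';
import { gateway } from '@ai-sdk/gateway';
import { experimental_generateImage as generateImage } from 'ai';
 
// Sketch only: generate an image and persist it. Assumes the result
// exposes the bytes as image.uint8Array.
const { image } = await generateImage({
  model: gateway.imageModel('openai/dall-e-3'),
  prompt: 'A serene mountain landscape at sunset',
});
 
await writeFile('landscape.png', image.uint8Array);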

Enable AI models to search the web during conversations. This capability helps models answer questions about current events, recent developments, or any topic requiring up-to-date information.

Two approaches are supported, as sketched below:

  • Perplexity search: add web search results to any model routed through the gateway
  • Native provider search tools: use the built-in search tools offered by providers that support them
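
The following is a minimal sketch of the Perplexity approach, assuming the gateway is your default AI SDK provider (as in the other examples on this page); the perplexity/sonar model id is illustrative, so substitute any search-capable model from your gateway's model catalog.

import { generateText } from 'ai';
 
// Sketch only: route a search-backed request through a Perplexity model.
// 'perplexity/sonar' is an assumed model id; check your gateway catalog.
const { text } = await generateText({
  model: 'perplexity/sonar',
  prompt: 'What were the most significant AI announcements this week?',
});
 
console.log(text);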

AI Gateway automatically logs every request with metrics you can view in the Vercel dashboard:

  • Requests by model: See which models your application uses most
  • Time to first token (TTFT): Monitor response latency
  • Token counts: Track input and output token usage
  • Spend: View costs broken down by model and time period

Access these metrics from the Observability tab at both team and project levels.

AI Gateway uses zero data retention by default—it permanently deletes your prompts and responses after requests complete. For applications with strict compliance requirements, you can also enforce ZDR at the provider level:

import { streamText } from 'ai';
 
const result = await streamText({
  model: 'anthropic/claude-sonnet-4.5',
  prompt: 'Analyze this sensitive data...',
  providerOptions: {
    gateway: { zeroDataRetention: true },
  },
});

When zeroDataRetention is enabled, requests route only to providers with verified ZDR agreements. See the ZDR documentation for the list of compliant providers.

