Claude Opus 4.7 launched on AI Gateway on April 16, 2026. The model is optimized for long-running, asynchronous agents that execute complex, multi-step tasks. Relative to the 4.6 baseline, it strengthens knowledge-work capabilities where visual verification of outputs matters and improves programmatic tool calling with image-processing libraries.
Two API additions define the release. First, taskBudget lets you set token limits that cap individual agentic turns, bounding runaway cost on open-ended autonomous work. Second, adaptive thinking gains an xhigh effort level that sits between high and the max ceiling, giving finer-grained control over how deeply the model reasons. Thinking content is no longer returned by default; configure the display option when you want to surface it.
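A rough sketch of a request that combines both additions follows. The field names and nesting used here (a taskBudget object with a per-turn cap, a thinking object with effort and display) are assumptions about how the options surface in a request body, not a confirmed schema; check the Gateway documentation for the exact shape.

```typescript
// Hypothetical request body combining the two new controls.
// Field names `taskBudget`, `effort`, and `display` are assumptions.
const request = {
  model: "anthropic/claude-opus-4.7",
  messages: [
    { role: "user", content: "Refactor the billing module and open a PR." },
  ],
  // Cap spend per agentic turn so open-ended autonomous work cannot run away.
  taskBudget: { maxTokensPerTurn: 8000 },
  // xhigh sits between high and the max ceiling; thinking content is now
  // opt-in, so request it explicitly when you want it surfaced.
  thinking: { effort: "xhigh", display: true },
};

console.log(request.thinking.effort); // → "xhigh"
```

The per-turn cap is the piece that matters for agents: a single runaway turn is bounded even when the overall session is open-ended.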
Claude Opus 4.7 handles pixel-level data transcription from charts and figures, high-resolution images for computer use and screenshot analysis, and document parsing where detail accuracy matters. Structured memory across conversation turns keeps state reliable over extended sessions.
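For the chart-transcription case, a Messages-format payload pairing an image block with a text instruction might look like the sketch below. The base64 string is a stand-in placeholder, and max_tokens is an arbitrary illustrative value.

```typescript
// Placeholder for real base64-encoded PNG bytes of a chart image.
const chartPng = "<base64-encoded image data>";

// Messages-format body asking the model to transcribe chart data.
const body = {
  model: "anthropic/claude-opus-4.7",
  max_tokens: 1024,
  messages: [
    {
      role: "user",
      content: [
        {
          type: "image",
          source: { type: "base64", media_type: "image/png", data: chartPng },
        },
        {
          type: "text",
          text: "Transcribe every data point in this chart as a CSV table.",
        },
      ],
    },
  ],
};
```

Putting the image block before the text instruction keeps the prompt unambiguous about which figure the transcription request refers to.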
Through AI Gateway, Claude Opus 4.7 is available with the standard unified API, observability, and provider routing. Set the model ID to anthropic/claude-opus-4.7 in the AI SDK, the Chat Completions API, the Responses API, the Messages API, or any other supported API format.
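As a minimal sketch of raw HTTP usage, the call below assumes an OpenAI-compatible Chat Completions endpoint; the base URL, environment variable name, and response shape are placeholders to adapt to your gateway configuration, and the function is defined but not invoked here.

```typescript
// Placeholder endpoint and key — substitute your gateway's actual values.
const GATEWAY_URL = "https://example-gateway.invalid/v1/chat/completions";
const API_KEY = process.env.AI_GATEWAY_API_KEY ?? "";

// Send one prompt through the gateway and return the reply text.
async function ask(prompt: string): Promise<string> {
  const res = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      // provider/model routing string, as used across Gateway API formats
      model: "anthropic/claude-opus-4.7",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}
```

Because routing happens on the model string, switching providers or model versions is a one-line change to the request body.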