Claude 3.7 Sonnet
Claude 3.7 Sonnet is Anthropic's first hybrid reasoning model: a single model that delivers either near-instant responses or extended, step-by-step thinking, with an API-controllable thinking token budget of up to 128K tokens. It scores 63.7% on SWE-bench Verified and shows particularly strong gains in coding and front-end web development.
import { streamText } from 'ai'

const result = streamText({
  model: 'anthropic/claude-3.7-sonnet',
  prompt: 'Why is the sky blue?',
})
Frequently Asked Questions
What is extended thinking mode and how do I control it?
Extended thinking has the model reason step-by-step before producing its final response, with the reasoning visible to the caller. You control the thinking token budget at the API level: set it anywhere from a small minimum up to 128K tokens (Anthropic's documented cap for that budget at launch), trading cost and latency for answer quality.
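As a sketch of what API-level budget control looks like, the snippet below builds an Anthropic Messages API request body with extended thinking enabled. The `thinking.budget_tokens` field, the 1,024-token minimum, and the requirement that the budget stay below `max_tokens` are assumptions based on Anthropic's API documentation at launch.

```typescript
// Sketch: Messages API request body with extended thinking enabled.
// Field names and limits are assumptions from Anthropic's launch docs.
const MAX_THINKING_TOKENS = 128_000 // documented cap for the thinking budget

function clampThinkingBudget(requested: number): number {
  // Keep the budget within the assumed 1,024-token minimum and the 128K cap.
  return Math.max(1_024, Math.min(requested, MAX_THINKING_TOKENS))
}

const body = {
  model: 'claude-3-7-sonnet-20250219',
  max_tokens: 16_000, // should exceed the thinking budget
  thinking: { type: 'enabled', budget_tokens: clampThinkingBudget(8_000) },
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
}
```

A larger budget generally buys better answers on hard problems at the cost of latency and spend, so clamping a caller-supplied value keeps requests within the model's supported range.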
Are thinking tokens included in the standard per-token pricing?
Thinking tokens are billed as output tokens at the model's standard output rate. This page lists the current rates; multiple providers can serve Claude 3.7 Sonnet, so AI Gateway surfaces live pricing rather than a single fixed figure.
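For budgeting purposes, a rough estimate can treat thinking tokens as output tokens. The per-million-token rates below are illustrative assumptions, not live prices; use the rates this page reports.

```typescript
// Rough cost estimator that bills thinking tokens at the output rate.
// INPUT_PER_MTOK and OUTPUT_PER_MTOK are illustrative assumptions (USD
// per million tokens), not the live prices shown on this page.
const INPUT_PER_MTOK = 3.0
const OUTPUT_PER_MTOK = 15.0

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  thinkingTokens: number,
): number {
  // Thinking tokens count toward output for billing.
  const billedOutput = outputTokens + thinkingTokens
  return (inputTokens * INPUT_PER_MTOK + billedOutput * OUTPUT_PER_MTOK) / 1_000_000
}
```

Because the thinking budget can reach 128K tokens per request, folding it into the output term is what keeps estimates from drifting far below actual spend on reasoning-heavy workloads.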
Why did Anthropic design reasoning as an integrated capability rather than a separate model?
Anthropic's philosophy is that reasoning should work like human reflection: the same system for both quick answers and deep analysis. Prompting patterns transfer between modes, so you don't need to maintain two separate model integrations.
How does extended thinking affect the model's math and science performance?
Extended thinking provides a notable additional boost on math and physics problems specifically, on top of the standard model's baseline improvements in those areas.
What coding benchmark did Claude 3.7 Sonnet achieve?
It scored 63.7% on SWE-bench Verified at the time of the February 24, 2025 launch (70.3% with enhanced compute), using minimal scaffolding with just a bash tool and a file editing tool.
Does the extended thinking mode work differently for coding vs. reasoning tasks?
The thinking mode improves performance across coding, math, physics, instruction following, and other task types. Anthropic deliberately focused on real-world tasks rather than competition-style math problems.
What was Claude Code and how did it relate to this model?
Claude Code launched as a limited research preview alongside Claude 3.7 Sonnet. It's a command-line agentic coding tool that searches and reads code, edits files, writes and runs tests, and commits to GitHub. It became generally available with Claude Opus 4.