Claude 3.5 Sonnet (2024-06-20)

Claude 3.5 Sonnet (2024-06-20), released in June 2024 as the first model in Anthropic's Claude 3.5 family, delivered strong intelligence at twice the speed of Claude 3 Opus, setting new industry benchmarks for graduate-level reasoning and coding proficiency while matching Opus-tier capability at Sonnet pricing.

Capabilities: File Input · Tool Use · Vision (Image) · Explicit Caching
index.ts
import { streamText } from 'ai'

const result = streamText({
  model: 'anthropic/claude-3.5-sonnet-20240620',
  prompt: 'Why is the sky blue?',
})

// Print tokens to stdout as they arrive
for await (const textPart of result.textStream) {
  process.stdout.write(textPart)
}

Playground

Try out Claude 3.5 Sonnet (2024-06-20) by Anthropic. Usage is billed to your team at API rates. Free users (those who haven't made a payment) get $5 of credits every 30 days.

About Claude 3.5 Sonnet (2024-06-20)

Claude 3.5 Sonnet (2024-06-20) launched on June 20, 2024 as Anthropic's first Claude 3.5 release, reporting stronger benchmark results than Claude 3 Opus across graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It ran at twice the speed of Claude 3 Opus while delivering higher intelligence benchmarks at Sonnet-tier cost and latency.

For coding, Anthropic reported that in an internal agentic coding evaluation, Claude 3.5 Sonnet (2024-06-20) solved 64% of problems versus Claude 3 Opus's 38%. The model could independently write, edit, and execute code with sophisticated reasoning and troubleshooting when given the right tools. Code translation (migrating legacy codebases, updating to newer frameworks) was a highlighted strength.

Vision was another meaningful upgrade. Anthropic described Claude 3.5 Sonnet (2024-06-20) as their strongest vision model at the time, surpassing Claude 3 Opus on standard vision benchmarks. Improvements were most noticeable in visual reasoning tasks like interpreting charts and graphs. The model could also accurately transcribe text from imperfect images. Anthropic called out retail, logistics, and financial services as sectors where this mattered.

Providers

Route requests across multiple providers. Copy a provider slug to set your preference. Visit the docs for more info. Using a provider means you agree to their terms, listed under Legal.

| Provider | Context | Input | Output | Cache (Read / Write) | Release Date |
| --- | --- | --- | --- | --- | --- |
| Amazon Bedrock (Legal: Terms, Privacy) | 200K | $3.00/M | $15.00/M | — | 06/20/2024 |
| Google Vertex (Legal: Terms, Privacy) | 200K | $3.00/M | $15.00/M | $0.30/M / $3.75/M | 06/20/2024 |
Throughput

P50 throughput on live AI Gateway traffic, in tokens per second (TPS). Visit the docs for more info.

Latency

P50 time to first token (TTFT) on live AI Gateway traffic, in milliseconds. View the docs for more info.

Uptime

Direct request success rate on AI Gateway and per-provider. Visit the docs for more info.

More models by Anthropic

| Model | Context | Latency | Throughput | Input / Output | Cache (Read / Write) | Web Search | Providers | Release Date |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| — | 1M | 1.3s | 100tps | $5.00/M / $25.00/M | $0.50/M / $6.25/M | $10.00/K + input costs | Anthropic, Bedrock, Vertex | 04/16/2026 |
| — | 1M | 0.6s | 58tps | $3.00/M / $15.00/M | $0.30/M / $3.75/M | $10.00/K + input costs | Anthropic, Bedrock, Vertex | 02/17/2026 |
| — | 1M | 0.8s | 59tps | $5.00/M / $25.00/M | $0.50/M / $6.25/M | $10.00/K + input costs | Anthropic, Bedrock, Vertex | 02/05/2026 |
| — | 200K | 0.4s | 114tps | $1.00/M / $5.00/M | $0.10/M / $1.25/M | $10.00/K + input costs | Anthropic, Bedrock, Vertex | 10/15/2025 |
| — | 1M | 0.7s | 55tps | $3.00/M / $15.00/M | $0.30/M / $3.75/M | $10.00/K + input costs | Anthropic, Bedrock, Vertex | 09/29/2025 |
| — | 200K | 0.6s | 50tps | $5.00/M / $25.00/M | $0.50/M / $6.25/M | $10.00/K + input costs | Anthropic, Bedrock, Vertex | 11/24/2024 |

What To Consider When Choosing a Provider

  • Configuration: This is a dated model checkpoint. Teams that choose it deliberately typically need behavioral consistency for regression testing or production pipelines, where stability matters more than access to later model improvements.
  • Zero Data Retention: AI Gateway supports Zero Data Retention for this model via direct gateway requests (BYOK is not included). To configure this, check the documentation.
  • Authentication: AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
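A fail-fast credential check keeps misconfiguration from surfacing only on the first model call. This sketch assumes the gateway reads the `AI_GATEWAY_API_KEY` environment variable; `resolveGatewayKey` is our own helper.

```typescript
// Resolve the gateway API key up front, or fail with a clear message.
function resolveGatewayKey(env: Record<string, string | undefined>): string {
  const key = env.AI_GATEWAY_API_KEY;
  if (!key) {
    throw new Error(
      "Missing AI_GATEWAY_API_KEY; set it (or configure OIDC) before calling the gateway."
    );
  }
  return key;
}

// Typical usage at process startup:
// const apiKey = resolveGatewayKey(process.env);
```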

When to Use Claude 3.5 Sonnet (2024-06-20)

Best For

  • Production pipelines requiring a stable, pinned model checkpoint: Teams that built and validated against the June 2024 behavior and need to maintain that consistency
  • Agentic coding workflows: The model writes, edits, and executes code; its 64% score on Anthropic's internal evaluation was a substantial improvement over Claude 3 Opus at the time
  • Complex customer support: Requiring context-sensitive, multi-step reasoning at 2x Opus speeds
  • Visual reasoning tasks: Chart interpretation, graph analysis, imperfect image transcription for logistics or financial documents
  • Code translation and legacy modernization: Projects where the model handles cross-language migration

Consider Alternatives When

  • No pinned version needed: The October 2024 upgrade (anthropic/claude-3.5-sonnet) includes computer use and improved coding benchmarks
  • Computer use required: This version does not include computer use capabilities
  • Extended thinking mode: Support for extended, step-by-step reasoning arrived later, with Claude 3.7 Sonnet
  • Greenfield projects: The October 2024 checkpoint or later generations generally serve new builds better

Conclusion

The June 2024 Claude 3.5 Sonnet (2024-06-20) checkpoint occupies a specific niche: the model that first moved intelligence-per-cost past the previous Opus ceiling. It remains available for teams whose production systems depend on stable, reproducible behavior from that checkpoint. For new work, later releases build directly on this foundation.

Frequently Asked Questions

  • Why would I use this -20240620 checkpoint instead of anthropic/claude-3.5-sonnet (October 2024)?

    Production systems validated against the June 2024 behavior sometimes need a pinned checkpoint to avoid unexpected behavioral drift. Regression test suites, evaluation harnesses, or contracts specifying a particular model version are common reasons.

  • How did Claude 3.5 Sonnet (2024-06-20) compare to Claude 3 Opus on benchmarks?

    Anthropic reported it outperformed Claude 3 Opus on GPQA (graduate-level reasoning), MMLU (undergraduate knowledge), and HumanEval (coding proficiency), operating at twice Opus's speed and at Sonnet-tier cost.

  • What was the internal agentic coding evaluation result?

    Claude 3.5 Sonnet (2024-06-20) solved 64% of coding problems in Anthropic's internal evaluation, fixing bugs and adding features in open-source codebases from natural-language descriptions. Claude 3 Opus scored 38%.

  • Does this version have computer use?

    No. Computer use was introduced in the October 2024 upgrade (the non-dated claude-3.5-sonnet). This June 2024 checkpoint predates that capability.

  • What vision improvements did this model introduce over Claude 3?

    Anthropic described it as outperforming Claude 3 Opus on standard vision benchmarks at the time, with particularly strong gains in chart and graph interpretation and in transcribing text from imperfect images.

  • What does the context window of 200K tokens enable for this model?

    You can process entire large documents, codebases, or conversation histories in a single pass. Anthropic highlighted context-sensitive customer support across long histories and multi-file code analysis as key use cases.
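Before sending a large document in one pass, it is worth checking it plausibly fits the window. This is a back-of-envelope sketch using the rough heuristic of about 4 characters per token (an assumption; real tokenizers vary by language and content).

```typescript
// Claude 3.5 Sonnet's context window, in tokens.
const CONTEXT_TOKENS = 200_000;

// Rough token estimate: ~4 characters per token (heuristic, not exact).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Leave headroom for the model's reply when checking fit.
function fitsInContext(text: string, reserveForOutput = 4_096): boolean {
  return estimateTokens(text) + reserveForOutput <= CONTEXT_TOKENS;
}
```

For precise budgeting, count tokens with the provider's tokenizer instead of this heuristic.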