o3-deep-research
o3-deep-research is a specialized variant of o3 built for deep research tasks: extended multi-step investigation, source synthesis, and comprehensive analysis of complex questions.
import { streamText } from 'ai'

const result = streamText({
  model: 'openai/o3-deep-research',
  prompt: 'Why is the sky blue?',
})

What To Consider When Choosing a Provider
Zero Data Retention
AI Gateway does not currently support Zero Data Retention for this model. See the documentation for models that support ZDR.

Authentication
AI Gateway authenticates requests using an API key or OIDC token. You do not need to manage provider credentials directly.
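A minimal sketch of what this looks like at the request level: a single bearer credential (API key or OIDC token) on every call, with no provider keys in your application. The base URL and request shape below are illustrative assumptions, not documented endpoint values.

```typescript
// Sketch of gateway authentication: one bearer credential per request.
// The base URL below is an illustrative placeholder, not a documented endpoint.
const GATEWAY_URL = 'https://gateway.example.com/v1/chat/completions'

function buildGatewayRequest(token: string, prompt: string): Request {
  return new Request(GATEWAY_URL, {
    method: 'POST',
    headers: {
      // The same header carries either an API key or an OIDC token.
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/o3-deep-research',
      messages: [{ role: 'user', content: prompt }],
    }),
  })
}
```

Because the gateway authenticates on your behalf, rotating the token is a deployment-level change rather than a code change.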
o3-deep-research is designed for tasks that require thorough research rather than quick answers. It spends significantly more time per request, conducting multi-step investigation before producing results.
This is not a general chat model. It's built for deep analytical questions that benefit from extended exploration and synthesis.
When to Use o3-deep-research
Best For
Literature review:
Comprehensive analysis of research topics with source synthesis
Due diligence research:
Thorough investigation of companies, technologies, or markets
Complex question answering:
Questions that require exploring multiple angles and synthesizing diverse information
Investigative analysis:
Multi-step reasoning that builds on intermediate findings
Report generation:
Producing comprehensive reports that require deep exploration of a topic
Consider Alternatives When
Quick answers:
Standard o3 for reasoning tasks that don't require extended investigation
General chat:
GPT-5 or GPT-5.2 for conversational interactions
Real-time responses:
Any standard model when response time is constrained
Simple lookups:
Search-augmented models for straightforward information retrieval
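The guidance above can be condensed into a small routing helper. This is a sketch: only the openai/o3-deep-research slug comes from this page; the other model identifiers are illustrative placeholders.

```typescript
// Route a task type to a model slug, following the guidance above.
// Only 'openai/o3-deep-research' is taken from this page; the other
// slugs are illustrative placeholders.
type Task = 'deep-research' | 'reasoning' | 'chat' | 'realtime-or-lookup'

function pickModel(task: Task): string {
  switch (task) {
    case 'deep-research':
      return 'openai/o3-deep-research' // thorough multi-step investigation
    case 'reasoning':
      return 'openai/o3' // reasoning without extended investigation
    case 'chat':
      return 'openai/gpt-5' // conversational interactions
    case 'realtime-or-lookup':
      return 'openai/gpt-5' // any fast standard model when latency matters
  }
}
```

The key decision is the first branch: route to o3-deep-research only when the task genuinely needs extended investigation, since every other case is better served by a faster model.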
Conclusion
o3-deep-research brings automated deep research capability to AI Gateway, conducting the kind of thorough, multi-step investigation that complex questions demand. For analytical workloads where comprehensiveness matters more than speed, it provides a level of depth that standard models cannot match.
FAQ
How does o3-deep-research differ from standard o3?
It conducts extended multi-step investigation rather than applying standard chain-of-thought reasoning. Responses take longer but are more comprehensive, synthesizing information from multiple angles.
How long do responses take?
Significantly longer than standard models, potentially minutes rather than seconds. The extended time reflects the depth of investigation performed.
What kinds of questions is it best suited for?
Complex questions that benefit from exploring multiple perspectives, synthesizing diverse sources, and building comprehensive analyses. Simple factual queries don't need this model.
How is authentication handled?
AI Gateway accepts a single API key or OIDC token for all requests. You don't embed OpenAI credentials in your application; AI Gateway routes and authenticates on your behalf.
Can I use it for real-time applications?
No. Its extended processing time makes it unsuitable for latency-sensitive use. Use it for background research tasks where thoroughness outweighs response time.
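One way to treat it as a background task is sketched below: wrap the call in a generous cancellation budget rather than a typical request timeout. The helper name and the 30-minute default are assumptions for illustration, not documented limits.

```typescript
// Run a long research call with a generous time budget instead of a
// typical request timeout. The 30-minute default is an illustrative
// assumption; tune it to your workload.
async function runWithBudget<T>(
  run: (signal: AbortSignal) => Promise<T>,
  budgetMs: number = 30 * 60 * 1000,
): Promise<T> {
  // AbortSignal.timeout fires after budgetMs; pass the signal to the
  // underlying fetch or SDK call so the request is cancelled cleanly
  // when the budget expires.
  const signal = AbortSignal.timeout(budgetMs)
  return run(signal)
}
```

Running this from a queue worker rather than a request handler keeps the long wait off the user-facing path.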
What performance should I expect?
This page shows live performance metrics. Expect significantly longer response times than standard models due to the extended research process.