State of AI

00.01 A survey for AI application builders

The best way to understand a tool's current state, future direction, and user impact is to ask those who build with it daily. That's why this survey targeted app builders. They’re tackling today's challenges, spotting tomorrow's opportunities, and working to maximize AI's value for users.

Intro

OpenAI is leading model adoption, but competition is catching up

While OpenAI remains the primary choice with 88% adoption, developers maintain relationships with multiple providers (two on average). As providers race to compete, developer loyalty is being tested: 65% switched providers within six months.

Key Insights

  • Leverage abstractions that prevent provider lock-in and facilitate switching
  • Review provider performance quarterly against your specific use cases
  • Stay connected to rapid market changes through direct developer communities

Builder's Takeaway

Start small, stay flexible

Architect your systems for provider mobility and model portability. Small teams are leading AI implementation, proving you don't need massive resources to succeed. The winning strategy is to build a multi-provider foundation that allows you to quickly pivot to the best solutions for your use cases as new features and capabilities emerge.
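The provider-mobility idea above can be sketched in TypeScript. Everything here (the `ChatModel` interface, the registry, the stubbed completion) is illustrative and not tied to any particular SDK; the point is that swapping providers becomes a one-line change in the wiring, not a refactor of every call site.

```typescript
// Minimal provider-abstraction sketch. All names are hypothetical.
interface ChatModel {
  provider: string;
  complete(prompt: string): Promise<string>;
}

type ModelFactory = () => ChatModel;

// Map logical roles (e.g. "summarizer") to concrete models, so a
// provider switch is a config change rather than a code change.
class ProviderRegistry {
  private factories = new Map<string, ModelFactory>();

  register(role: string, factory: ModelFactory): void {
    this.factories.set(role, factory);
  }

  resolve(role: string): ChatModel {
    const factory = this.factories.get(role);
    if (!factory) throw new Error(`No model registered for role "${role}"`);
    return factory();
  }
}

// Example wiring: switching providers only touches this block.
const registry = new ProviderRegistry();
registry.register("summarizer", () => ({
  provider: "openai",
  // A real implementation would call the provider's SDK here.
  complete: async (prompt: string) => `stub summary of: ${prompt}`,
}));
```

Application code asks the registry for a role and never imports a provider SDK directly, which is what makes the 65% switching rate survivable.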

Bar graph titled "Which LLM providers do you currently use?". Bars: OpenAI: 87%, Anthropic: 68%, Google: 63%, DeepSeek: 29%, xAI: 23%, Groq: 22%, Meta: 12%, Microsoft: 10%.

2 Average number of providers used

40% Software as primary industry focus

Bar graph titled "What provider did you switch to in the last six months?". Bars: Google: 31%, Anthropic: 29%, Groq: 11%.
Donut chart showing that 60% of survey respondents switched LLM providers in the last six months.
Bar graph titled "Most used inference providers". Bars: OpenAI: 83%, Anthropic: 45%, Google Cloud: 28%.

Intro

We are past the phase of adding AI for hype

As AI adoption matures, there's a shift from basic AI integrations to real value creation. A majority of teams are using vector databases, building strong foundations for more sophisticated applications. They're prioritizing core features over chatbots, and making AI an essential part of their product's DNA.

Key Insights

  • The gap between chatbots (44%) and product features (79%) reveals a shift toward deeper AI integration
  • Vector database adoption (70%) signals maturing AI infrastructure
  • Website personalization (24%) remains underexplored, hinting at future opportunities

Builder's Takeaway

Focus on how to solve fundamental product challenges with AI

While the market is young with plenty of untapped opportunities, the era of quick wins is over. Users expect more from AI now. Ask deeper questions about how AI can enhance every aspect of user experience, and make it central to your product development, not just an add-on.

"AI is dissolving the boundaries between roles. We’re seeing new product designers blend UX, UI, and code in one creative flow—thanks to tools like Vercel, v0, Uizard, and Cursor. Whether junior or senior leader, anyone can now build, test, and ship ideas independently—and that’s not just efficient, it’s liberating."

Nicolas Le Pallec

CTO, EMEA — AKQA

Bar graph titled "Development stage of AI applications". Bars: Basic implementation: 42%, Multiple production use cases: 25%, Ad hoc production usage: 23%.
Donut chart showing that 71% of survey respondents use a vector database.

70% Running 1-2 applications in production

Bar graph titled "Primary application types". Bars: Customer facing: 43%, Both equally: 33%, Internal tools: 24%.
Bar graph titled "Most common customer-facing features". Bars: Product AI: 75%, Knowledge base / Q&A: 60%, Support chatbot: 39%, Website personalization: 27%.

Intro

Today's teams build high-demand AI systems through smart technical choices, not big budgets

AI teams build powerful systems on lean budgets, spending under $1,000/month. They skip costly training by using RAG, smart data sourcing, and cloud platforms to ship fast, reliable models without heavy infrastructure.

Key Insights

  • Teams optimize costs through smart architecture and augmented generation rather than custom model training
  • Weekly model updates are becoming standard, pointing to rapid iteration practices supported by providers like Vercel
  • Most teams pair manual testing with experience-based releases, but metric-driven evaluation is emerging

Builder's Takeaway

Leverage existing resources to ship quickly and efficiently

The winning pattern combines cloud platforms for deployment speed, existing models for reliability, and modern frameworks for rapid development—letting teams ship AI without infrastructure overhead.

Deployment Speed

Infrastructure choices drive velocity

Modern platforms handle deployment complexity, while TypeScript and AI SDK speed up development. Weekly model updates are becoming standard.

Donut chart showing that 60% of survey respondents host on Vercel.

63% Primarily using JavaScript/TypeScript

Bar graph titled "How often do you deploy AI model updates?". Bars: Weekly: 34%, Monthly: 31%.

24% Use AWS to host product infrastructure, 10% or less use other providers

Bar graph titled "Number of models in production". Bars: 2-3 models: 47%, 1 model: 36%, 4+ models: 18%.

Testing and Evaluation

Manual testing remains common, but metric-driven evaluation signals growing sophistication in quality assurance.

70% Use manual testing to evaluate model outputs

30% Use LangSmith as their observability provider

34% Use automated evals

15% Don't have formal evaluation

Bar graph titled "Who is your third-party eval provider?". Bars: LangSmith: 40%, Other: 37%, Braintrust: 13%.
Bar graph titled "Release process nature". Bars: Experience-based: 54%, Hybrid approach: 27%, Metrics-driven: 19%.
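One way to move from experience-based releases toward a metrics-driven process is a small eval harness that gates releases on a pass rate. This is an illustrative TypeScript sketch with hypothetical names, not the API of LangSmith, Braintrust, or any other eval provider:

```typescript
// Minimal eval-harness sketch: score recorded model outputs against
// deterministic checks and gate releases on the resulting pass rate.
interface EvalCase {
  input: string;
  output: string;                     // model output under evaluation
  check: (output: string) => boolean; // deterministic assertion
}

function runEvals(cases: EvalCase[]): { passed: number; total: number; passRate: number } {
  const passed = cases.filter((c) => c.check(c.output)).length;
  return {
    passed,
    total: cases.length,
    passRate: cases.length ? passed / cases.length : 0,
  };
}

// Ship only when the eval suite clears a threshold, instead of gut feel.
function shouldRelease(cases: EvalCase[], threshold = 0.9): boolean {
  return runEvals(cases).passRate >= threshold;
}
```

Even a harness this small turns "it seems fine" into a number that can be tracked across the weekly model updates the survey reports.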

Accuracy and Customization

Teams solve accuracy challenges through data strategy, not spending

Teams combine public datasets, web scraping, and customer data with RAG customization to deliver precise outputs without the overhead of custom model training.
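At its core, the RAG customization described here reduces to ranking stored document chunks against a query embedding and splicing the best matches into the prompt. A minimal TypeScript sketch, assuming embeddings are plain number arrays; a production system would obtain them from an embedding model and store them in a vector database:

```typescript
// Minimal retrieval step for RAG. All names are illustrative.
interface Chunk {
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank chunks by similarity to the query embedding; keep the top k.
function retrieveTopK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}

// Splice the retrieved context into the prompt sent to the model.
function buildPrompt(question: string, context: Chunk[]): string {
  const sources = context.map((c) => `- ${c.text}`).join("\n");
  return `Answer using only these sources:\n${sources}\n\nQuestion: ${question}`;
}
```

This is why 86% of teams can skip training entirely: accuracy improves by controlling what the model reads, not what it has memorized.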

86% Don't train their own models

Bar graph titled "Top technical challenges in building AI features". Bars: Accuracy / hallucinations: 60%, Latency / performance: 23%, Cost management: 23%.
Bar graph titled "Model customization strategy". Bars: RAG / Vector databases: 60%, No customization: 20%, Fine-tuning: 12%.
Donut chart showing that 70% of survey respondents use manual testing to evaluate model outputs.
Bar graph titled "Most used data sources to train/enhance AI models". Bars: Proprietary data: 48%, Customer data: 44%, Public datasets: 40%, Web scraping: 40%, Synthetic data: 23%.
Types of documents processed for RAG

60% Markdown and Text

60% PDFs

42% Web pages

41% Database records

22% Code repositories

10% Other
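Before any of these document types can be retrieved, they are typically split into overlapping chunks for embedding. A minimal sketch of that preprocessing step; the size and overlap values are illustrative defaults, not recommendations from the survey:

```typescript
// Split text into fixed-size chunks with overlap, so passages that
// straddle a boundary still appear intact in at least one chunk.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step back by `overlap` characters
  }
  return chunks;
}
```

Real pipelines often split on markdown headings or sentence boundaries instead of raw character counts, but the overlap idea is the same.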

Intro

Tools over hiring

Many teams dedicate a meaningful share of their tech budget (over 15%) to AI, but aren't planning to grow dedicated AI teams. They're finding ways to build sophisticated AI features by empowering existing teams with better tools and clear objectives.

Key Insights

  • Teams choose lean integration over specialized departments
  • Existing product teams drive AI innovation
  • Priorities are being balanced between building new features and scaling existing ones

Builder's Takeaway

AI development is entering a pragmatic phase

Building successful AI features doesn't necessarily require a specialized department. Focus on high-impact use cases while keeping team structure lean. The opportunity lies in finding where AI adds real value and executing efficiently with available resources.

"At BCG X, we are excited about the transformative potential of AI and the ecosystem of AI-powered tools reshaping creativity, digital solutions, and customer experiences. By embracing cutting-edge AI technologies we're empowering our teams to work smarter and faster."

Dr. Jan Ittner

Global Engineering Chapter Lead — BCG X

Bar graph titled "Top AI priorities for the next 6 months". Bars: Launching new use cases: 54%, Scaling existing solutions: 42%.
Bar graph titled "Budget allocation for AI (% of tech budget)". Bars: 5-15% budget: 37%, <5% budget: 27%, 16-30% budget: 24%.
Bar graph titled "AI team structure". Bars: No dedicated AI team: 45%, Other: 29%, Embedded in product teams: 27%.
Bar graph titled "AI leadership structure". Bars: No specific AI leadership: 57%, Part of tech leadership: 31%, Dedicated AI executives: 12%.

Intro

The AI market has found its sweet spot between hype and real impact

Teams think current AI tools are overhyped, but they expect AI to significantly impact their industry within 12 months. They're excited about the future but grounded in the present.

Key Insights

  • Teams believe in AI's future while staying realistic about current tools
  • Open source and fine-tuning are proving useful, but aren't game-changers yet
  • Everyone's preparing for major advancements in the next year

Builder's Takeaway

Build with what works now, but design for what's coming

Big changes are ahead, even if we're not there yet. Be aware of the current limitations and challenges, while remaining optimistic about the transformative potential of AI.

Survey respondents gave an average rating of 6.5 out of 10 when asked if "Open source models are production-ready"

Survey respondents gave an average rating of 5.3 out of 10 when asked if "Fine-tuning provides meaningful improvements"

Survey respondents gave an average rating of 6.4 out of 10 when asked if "Current AI tools are overhyped"

Survey respondents gave an average rating of 7.7 out of 10 when asked if "AI will impact my industry within 12 months"

Conclusion

The AI landscape is evolving fast

There is strong belief in AI’s potential. While OpenAI is the leading provider, developers are actively testing alternatives and focusing on real-world value. Priorities are shifting toward customer-facing features, but challenges like model accuracy and cost remain key concerns. Success requires careful evaluation, strategic planning, and a flexible implementation that can adapt as the market changes. We look forward to continued innovation this year and beyond.

Survey methodology

  • Objective: Understanding what people are building with AI and how they're building it
  • We shared two surveys, one in Q4 2024 and one in Q1 2025, to map how responses change over time
  • 656 application builders were surveyed, sourced from the Vercel community, X.com, and AI community newsletters

Vercel AI

Everything you need to get started with AI

Vercel provides you the tools and infrastructure to deploy secure, high-performance AI applications.

AI SDK

Build conversational streaming user interfaces in JavaScript and TypeScript.

v0

Transform prompts into working applications and user interfaces.

Marketplace

Discover, integrate, and manage third-party AI services directly within Vercel projects.

Templates

Discover templates you can use as a jumpstart for your Vercel projects.