GLM 4.7
GLM 4.7 is Z.ai's model, released December 22, 2025, with major improvements in coding, tool usage, and multi-step reasoning. It adopts a more natural conversational tone and shows stronger frontend development results.
import { streamText } from 'ai'

// Consume result.textStream to read the streamed response.
const result = streamText({
  model: 'zai/glm-4.7',
  prompt: 'Why is the sky blue?',
})

Frequently Asked Questions
What are the main improvements in GLM 4.7 over previous GLM models?
Coding, tool usage, and multi-step reasoning. It also uses a more natural conversational tone and shows improved frontend development results in this generation.
What is the difference between GLM-4.7, GLM-4.7-Flash, and GLM-4.7-FlashX?
GLM-4.7 is the full-scale variant with maximum capability. GLM-4.7-Flash is optimized for faster inference with reduced capability. GLM-4.7-FlashX provides the fastest inference tier in the generation. All share the same API surface.
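Because the variants share the same API surface, switching tiers is a one-line change to the model identifier. A minimal sketch; the -flash and -flashx identifier strings are assumptions inferred from the variant names above, so confirm the exact slugs against the model catalog:

```typescript
// Hypothetical identifier strings inferred from the variant names above;
// verify the exact slugs in the AI Gateway model list.
const MODELS = {
  full: 'zai/glm-4.7',          // maximum capability
  flash: 'zai/glm-4.7-flash',   // faster inference, reduced capability
  flashx: 'zai/glm-4.7-flashx', // fastest tier
} as const

// Pick a tier per request; the rest of the call is unchanged.
function modelFor(tier: keyof typeof MODELS): string {
  return MODELS[tier]
}
```

Everything else in the request (prompt, messages, tools) stays identical across tiers.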
Is GLM 4.7 good for frontend development?
Yes. Z.ai cites improved frontend development results as a key change in this model generation.
What is the context window for GLM 4.7?
204,800 tokens (204.8K).
How do I authenticate with GLM 4.7 through AI Gateway?
AI Gateway provides a unified API key. Configure it in your environment and use the model identifier to route requests. No separate Z.ai account is required, though BYOK is also supported.
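As a sketch of what that setup looks like: the unified key comes from the environment and the request only carries the model identifier, so no Z.ai credentials appear in application code. The env var name AI_GATEWAY_API_KEY is an assumption here; check your gateway configuration:

```typescript
// Assumed env var name for the gateway's unified key — confirm in your setup.
const apiKey = process.env.AI_GATEWAY_API_KEY ?? ''

// The request targets the model by identifier; the gateway resolves the
// upstream provider, so no provider-specific credentials are needed here.
const request = {
  model: 'zai/glm-4.7',
  messages: [{ role: 'user' as const, content: 'Why is the sky blue?' }],
}

const headers = { Authorization: `Bearer ${apiKey}` }
```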
How does GLM 4.7 handle multi-step agentic tasks?
Multi-step reasoning is a core improvement in this generation. The model maintains better coherence across extended tool-use sequences and planning steps compared to earlier GLM models.
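The shape of such a sequence can be sketched as a capped tool loop. The model and tool below are stand-ins, not a real API; an actual agent would call GLM 4.7 through the gateway on each turn:

```typescript
// Illustrative multi-step tool loop; fakeModel and getTime are stand-ins.
type ToolCall = { name: string; args: Record<string, unknown> }
type ModelTurn = { toolCall?: ToolCall; text?: string }

// Stand-in for one model turn: request a tool until its result is in context.
function fakeModel(history: string[]): ModelTurn {
  if (!history.some((m) => m.startsWith('tool:'))) {
    return { toolCall: { name: 'getTime', args: {} } }
  }
  return { text: 'Done.' }
}

const tools: Record<string, (args: Record<string, unknown>) => string> = {
  getTime: () => new Date().toISOString(),
}

const history: string[] = ['user: what time is it?']
let finalText = ''
for (let step = 0; step < 5; step++) { // cap the number of agent steps
  const turn = fakeModel(history)
  if (turn.toolCall) {
    // Execute the requested tool and feed its output back into context.
    const output = tools[turn.toolCall.name](turn.toolCall.args)
    history.push(`tool:${turn.toolCall.name} -> ${output}`)
  } else {
    finalText = turn.text ?? ''
    break
  }
}
```

Coherence across many such iterations, where each tool result changes the plan, is what the multi-step improvement targets.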
What providers serve GLM 4.7 through AI Gateway?
GLM 4.7 is available through the zai, novita, deepinfra, cerebras, and bedrock providers. AI Gateway handles routing and automatic retries across them.
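Conceptually, routing with retries behaves like ordered failover across that provider list. A synchronous sketch of the failover shape only; real gateway calls are network requests the gateway makes for you:

```typescript
// Providers from the list above, tried in order; the gateway does this
// transparently — this sketch only illustrates the failover pattern.
const providers = ['zai', 'novita', 'deepinfra', 'cerebras', 'bedrock']

function callWithFailover<T>(attempt: (provider: string) => T): T {
  let lastError: unknown
  for (const provider of providers) {
    try {
      return attempt(provider) // first healthy provider wins
    } catch (err) {
      lastError = err // fall through to the next provider in order
    }
  }
  throw lastError
}
```

For example, if the first provider throws, the call transparently lands on the second: `callWithFailover((p) => { if (p === 'zai') throw new Error('down'); return p })` returns `'novita'`.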