Zai.Glm 4.7 Flash
Zai.Glm 4.7 Flash is available via AWS Bedrock with a 200K context window and up to 128,000 output tokens. Pricing: $0.07/1M input tokens, $0.40/1M output tokens.
Zai.Glm 4.7 Flash Pricing & Specifications
What is Zai.Glm 4.7 Flash?
Zai.Glm 4.7 Flash is a large language model available through AWS Bedrock with a 200K context window and up to 128,000 output tokens. It costs $0.07 per 1M input tokens and $0.40 per 1M output tokens.
Capabilities
Text generation, function calling, and reasoning.
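Since the model supports function calling, a tool definition can be supplied alongside the request. The sketch below shows a tool definition in the shape used by the Bedrock Converse API's `toolConfig` parameter; the tool name, description, and schema are made-up placeholders for illustration, not part of this model's documentation.

```python
# Sketch of a tool definition for the Bedrock Converse API's toolConfig.
# The tool ("get_weather") and its schema are hypothetical examples.

def make_tool_config():
    return {
        "tools": [
            {
                "toolSpec": {
                    "name": "get_weather",  # hypothetical tool
                    "description": "Return current weather for a city.",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {
                                "city": {"type": "string"},
                            },
                            "required": ["city"],
                        }
                    },
                }
            }
        ]
    }
```

In practice this dictionary would be passed as the `toolConfig` argument to `converse()` on a `bedrock-runtime` boto3 client, assuming the model ID for this deployment.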
Zai.Glm 4.7 Flash Cost Examples
Short prompt (500 input tokens): $0.000035
Medium prompt (2K input tokens): $0.00014
Long output (4K output tokens): $0.0016
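The examples above follow directly from the published per-million-token rates. A minimal sketch of the arithmetic:

```python
# Cost arithmetic for Zai.Glm 4.7 Flash using the published rates.
INPUT_RATE = 0.07 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.40 / 1_000_000  # dollars per output token

def cost(input_tokens, output_tokens=0):
    """Estimated cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(cost(500))        # short prompt  -> ~$0.000035
print(cost(2_000))      # medium prompt -> ~$0.00014
print(cost(0, 4_000))   # long output   -> ~$0.0016
```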
Frequently Asked Questions
How much does Zai.Glm 4.7 Flash cost per token?
Zai.Glm 4.7 Flash costs $0.07 per 1M input tokens and $0.40 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.00027.
What is the context window for Zai.Glm 4.7 Flash?
Zai.Glm 4.7 Flash supports a context window of 200,000 tokens (200K). This determines the maximum combined length of your prompt and conversation history in a single API call.
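A rough pre-flight check can tell you whether a prompt plus conversation history is likely to fit the window. The sketch below uses the common 4-characters-per-token rule of thumb, which is an approximation, not this model's tokenizer; a real token counter should be used for anything billing-sensitive.

```python
# Rough check that prompt + history fits the 200K-token context window.
# The chars/4 ratio is a heuristic, not the model's actual tokenizer.

CONTEXT_WINDOW = 200_000

def estimate_tokens(text):
    return max(1, len(text) // 4)

def fits_context(messages, reserve_for_output=1_000):
    """messages: list of strings (prompt plus conversation history)."""
    used = sum(estimate_tokens(m) for m in messages)
    return used + reserve_for_output <= CONTEXT_WINDOW

print(fits_context(["Summarize this report.", "x" * 40_000]))  # True
```

Reserving headroom for the response (`reserve_for_output`) matters because the window bounds the prompt and history together in a single call.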
What is the maximum output length for Zai.Glm 4.7 Flash?
Zai.Glm 4.7 Flash can generate up to 128,000 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
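One way to stitch together outputs longer than the per-response limit is to feed each partial answer back and ask the model to continue until it stops for a reason other than hitting the token cap. The sketch below assumes a hypothetical `complete(messages)` wrapper around the API that returns the generated text and a stop reason (the `"max_tokens"` value mirrors the Bedrock Converse API's `stopReason`):

```python
# Sketch: concatenate long outputs across multiple calls.
# `complete` is a hypothetical API wrapper, not a real library function:
# it takes a message list and returns (text, stop_reason).

def generate_long(prompt, complete, max_calls=4):
    """Call the model repeatedly, feeding partial output back,
    until it stops for a reason other than the output-token limit."""
    messages = [{"role": "user", "content": prompt}]
    chunks = []
    for _ in range(max_calls):
        text, stop_reason = complete(messages)
        chunks.append(text)
        if stop_reason != "max_tokens":
            break
        # Feed the partial answer back and ask the model to continue.
        messages.append({"role": "assistant", "content": text})
        messages.append({"role": "user", "content": "Continue."})
    return "".join(chunks)
```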
Is Zai.Glm 4.7 Flash good for coding tasks?
Yes. Zai.Glm 4.7 Flash supports function calling and reasoning, capabilities well suited to coding tasks such as code generation, debugging, and refactoring.