Kimi K2 Thinking 251104
Kimi K2 Thinking 251104 is available via Volcengine with a 229K context window and up to 32,768 output tokens. Pricing: $0.000000/1M input tokens, $0.000000/1M output tokens.
Kimi K2 Thinking 251104 Pricing & Specifications
What is Kimi K2 Thinking 251104?
Kimi K2 Thinking 251104 is a large language model available via Volcengine with a 229K (229,376-token) context window and up to 32,768 output tokens. Pricing is listed at $0.000000 per 1M input tokens and $0.000000 per 1M output tokens.
Capabilities
Text, function calling, and reasoning.
Kimi K2 Thinking 251104 Cost Examples
Short prompt (500 tokens): $0.000000
Medium prompt (2K tokens): $0.000000
Long output (4K tokens): $0.000000
Frequently Asked Questions
How much does Kimi K2 Thinking 251104 cost per token?
Kimi K2 Thinking 251104 costs $0.000 per 1M input tokens and $0.000 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000000.
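The per-request arithmetic above can be sketched as a small helper. Since the page lists $0.000000 rates, the $0.60/$2.50 figures in the example are placeholder values for illustration only, not this model's actual pricing:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Dollar cost of one request, given per-1M-token rates."""
    cost = (input_tokens / 1_000_000) * input_rate_per_m
    cost += (output_tokens / 1_000_000) * output_rate_per_m
    return round(cost, 6)

# Typical request from the FAQ: 1,000 input tokens, 500 output tokens,
# at hypothetical rates of $0.60 / $2.50 per 1M tokens.
print(estimate_cost(1_000, 500, 0.60, 2.50))  # → 0.00185
```

Swap in the rates shown on your provider's pricing page to get real estimates.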
What is the context window for Kimi K2 Thinking 251104?
Kimi K2 Thinking 251104 supports a context window of 229,376 tokens (229K). This determines the maximum combined length of your prompt and conversation history in a single API call.
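A minimal pre-flight check against the 229,376-token window might look like the sketch below. It assumes the output budget is reserved inside the context window; whether providers count output tokens this way varies by API, so treat it as a conservative bound:

```python
CONTEXT_WINDOW = 229_376   # 229K, per the spec above
MAX_OUTPUT = 32_768        # maximum response length

def fits_in_context(prompt_tokens: int, history_tokens: int,
                    reserved_output: int = MAX_OUTPUT) -> bool:
    """Conservatively check that prompt + history + reserved output fit."""
    return prompt_tokens + history_tokens + reserved_output <= CONTEXT_WINDOW

print(fits_in_context(100_000, 50_000))  # → True
print(fits_in_context(200_000, 50_000))  # → False
```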
What is the maximum output length for Kimi K2 Thinking 251104?
Kimi K2 Thinking 251104 can generate up to 32,768 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
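The multi-call concatenation approach can be sketched generically. `generate` here is a hypothetical callable standing in for your API client, not a real Volcengine SDK function; the continuation strategy (feeding prior output back as context) is one simple option among several:

```python
def generate_long(prompt: str, generate, max_calls: int = 4) -> str:
    """Concatenate up to `max_calls` completions into one long output.

    `generate` is a placeholder: a function taking the accumulated text
    and returning the next chunk (at most 32,768 tokens' worth).
    """
    parts = []
    text = prompt
    for _ in range(max_calls):
        chunk = generate(text)           # one API call (hypothetical client)
        parts.append(chunk)
        text = prompt + "".join(parts)   # feed prior output back as context
        if not chunk:                    # model signalled it is finished
            break
    return "".join(parts)
```

In practice you would also cap `text` against the context window before each call, using the same budget check as in the previous answer.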
Is Kimi K2 Thinking 251104 good for coding tasks?
Yes. Kimi K2 Thinking 251104 supports text generation, function calling, and reasoning, capabilities well suited to coding tasks such as code generation, debugging, and refactoring.