
Mistral Small 3.2 24B Instruct 2506

Mistral Small 3.2 24B Instruct 2506 is available via Ovhcloud with a 128K context window and up to 128,000 output tokens. Pricing: $0.09/1M input tokens, $0.28/1M output tokens.

Mistral Small 3.2 24B Instruct 2506 Pricing & Specifications

Input Price: $0.09 per 1M tokens
Output Price: $0.28 per 1M tokens
Context Window: 128,000 tokens (128K)
Max Output: 128,000 tokens
Provider: Ovhcloud

What is Mistral Small 3.2 24B Instruct 2506?

Mistral Small 3.2 24B Instruct 2506 is a 24-billion-parameter instruction-tuned language model from Mistral AI, available via Ovhcloud with a 128K context window and up to 128,000 output tokens. It costs $0.09 per 1M input tokens and $0.28 per 1M output tokens.

Capabilities

text, vision, function calling, JSON mode

Mistral Small 3.2 24B Instruct 2506 Cost Examples

Short prompt (500 input tokens): $0.000045
Medium prompt (2K input tokens): $0.00018
Long output (4K output tokens): $0.00112
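The examples above follow directly from the listed per-million-token rates. A minimal sketch of the arithmetic (the helper name is illustrative, not part of any API):

```python
# Published Ovhcloud rates for Mistral Small 3.2 24B Instruct 2506,
# converted to USD per single token.
INPUT_RATE = 0.09 / 1_000_000
OUTPUT_RATE = 0.28 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
```

Plugging in the examples: `estimate_cost(500)` gives $0.000045, `estimate_cost(2000)` gives $0.00018, and `estimate_cost(0, 4000)` gives $0.00112.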


Similar Models to Mistral Small 3.2 24B Instruct 2506

- Qwen3 32B (Ovhcloud): $0.080/1M input, 32K context
- Gpt Oss 120b (Ovhcloud): $0.080/1M input, 131K context
- Llama 3.1 8B Instruct (Ovhcloud): $0.10/1M input, 131K context
- Mistral 7B Instruct V0.3 (Ovhcloud): $0.10/1M input, 127K context

Frequently Asked Questions

How much does Mistral Small 3.2 24B Instruct 2506 cost per token?
Mistral Small 3.2 24B Instruct 2506 costs $0.09 per 1M input tokens and $0.28 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.00023.
What is the context window for Mistral Small 3.2 24B Instruct 2506?
Mistral Small 3.2 24B Instruct 2506 supports a context window of 128,000 tokens (128K). This determines the maximum combined length of your prompt and conversation history in a single API call.
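The context limit can be enforced client-side by trimming old conversation turns before each call. A minimal sketch, assuming a crude 4-characters-per-token heuristic in place of the model's real tokenizer (the helper is illustrative):

```python
CONTEXT_WINDOW = 128_000  # context limit for Mistral Small 3.2 24B Instruct 2506

def trim_history(messages, max_output=1_000, count_tokens=None):
    """Drop the oldest messages until prompt + reserved output fits the window.

    `count_tokens` stands in for a real tokenizer; the default is a rough
    4-characters-per-token approximation, not the model's actual tokenizer.
    """
    if count_tokens is None:
        count_tokens = lambda text: max(1, len(text) // 4)
    budget = CONTEXT_WINDOW - max_output
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > budget:
        kept.pop(0)  # discard the oldest turn first
    return kept
```

Reserving `max_output` tokens up front matters because the window bounds prompt and response combined.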
What is the maximum output length for Mistral Small 3.2 24B Instruct 2506?
Mistral Small 3.2 24B Instruct 2506 can generate up to 128,000 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
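The multi-call pattern described above can be sketched as a simple loop. `generate` here is a hypothetical stand-in for the completion API (a real implementation would call the Ovhcloud endpoint); the stub just echoes a marker so the loop is runnable:

```python
def generate(prompt: str, max_tokens: int) -> str:
    """Hypothetical stand-in for a completion API call."""
    return f"[chunk after prompt of {len(prompt)} chars]"

def generate_long(prompt: str, rounds: int = 3) -> str:
    """Request output across several calls, feeding each response back in
    as context, then concatenate the pieces."""
    parts = []
    context = prompt
    for _ in range(rounds):
        piece = generate(context, max_tokens=128_000)
        parts.append(piece)
        context += piece  # the next call continues from the combined text
    return "".join(parts)
```

Because each continuation call resends the accumulated context as input, input-token costs grow with every round; trimming earlier context (as in the history-trimming approach above) keeps each call under the 128K window.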
Is Mistral Small 3.2 24B Instruct 2506 good for coding tasks?
Yes. Mistral Small 3.2 24B Instruct 2506 supports text generation, function calling, and JSON mode, which suit coding tasks such as code generation, debugging, and refactoring.