Mistral 7B Instruct V0.3
Mistral 7B Instruct V0.3 is available via OVHcloud with a 127K context window and up to 127,000 output tokens. Pricing: $0.10/1M input tokens, $0.10/1M output tokens.
Mistral 7B Instruct V0.3 Pricing & Specifications
What is Mistral 7B Instruct V0.3?
Mistral 7B Instruct V0.3 is a large language model by Mistral AI, available via OVHcloud with a 127K context window and up to 127,000 output tokens. It costs $0.10 per 1M input tokens and $0.10 per 1M output tokens.
Capabilities
text, function calling, JSON mode
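On most hosts these capabilities are exposed through an OpenAI-compatible chat API. The sketch below builds a JSON-mode request body; the model ID and field names are assumptions based on that common convention, not confirmed by this page.

```python
import json

def build_json_mode_request(prompt: str) -> dict:
    """Sketch of a JSON-mode request body for an OpenAI-style
    chat completions API (model ID below is illustrative)."""
    return {
        "model": "mistral-7b-instruct-v0.3",
        "messages": [{"role": "user", "content": prompt}],
        # "JSON mode" constrains the model to emit valid JSON
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_request("List three colors as a JSON array.")
print(json.dumps(payload, indent=2))
```

Function calling works the same way: add a `tools` array describing the callable functions to the same payload.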
Mistral 7B Instruct V0.3 Cost Examples
Short prompt (500 tokens)
$0.000050
Medium prompt (2K tokens)
$0.00020
Long output (4K tokens)
$0.00040
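The examples above follow directly from the listed rates. A small helper (rates hard-coded from this page) reproduces them:

```python
# Per-token rates from this page: $0.10 per 1M tokens, input and output.
INPUT_RATE = 0.10 / 1_000_000
OUTPUT_RATE = 0.10 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Dollar cost of one API call at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"{request_cost(500):.6f}")       # short prompt  -> 0.000050
print(f"{request_cost(2_000):.5f}")     # medium prompt -> 0.00020
print(f"{request_cost(0, 4_000):.5f}")  # long output   -> 0.00040
```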
Frequently Asked Questions
How much does Mistral 7B Instruct V0.3 cost per token?
Mistral 7B Instruct V0.3 costs $0.10 per 1M input tokens and $0.10 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000150.
What is the context window for Mistral 7B Instruct V0.3?
Mistral 7B Instruct V0.3 supports a context window of 127,000 tokens (127K). This determines the maximum combined length of your prompt and conversation history in a single API call.
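A rough pre-flight check that prompt plus history fits in the window can look like this (a sketch: the chars/4 heuristic is a common approximation for English text, not the model's real tokenizer, and the output reserve is illustrative):

```python
CONTEXT_WINDOW = 127_000  # tokens, per this page

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Use the model's actual tokenizer for exact counts.
    return max(1, len(text) // 4)

def fits_in_context(history: list[str], prompt: str,
                    reserve_for_output: int = 1_000) -> bool:
    """True if history + prompt + reserved output tokens fit the window."""
    used = sum(estimate_tokens(m) for m in history) + estimate_tokens(prompt)
    return used + reserve_for_output <= CONTEXT_WINDOW

print(fits_in_context(["hello " * 100], "Summarize the above."))  # -> True
```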
What is the maximum output length for Mistral 7B Instruct V0.3?
Mistral 7B Instruct V0.3 can generate up to 127,000 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
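That multi-call pattern can be sketched as a loop that feeds each chunk back as context and asks the model to continue (`complete()` is a hypothetical stand-in for a real API call; the continuation prompt and tail size are illustrative):

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for one real API call; returns one response chunk.
    return "lorem "

def generate_long(prompt: str, max_calls: int = 3) -> str:
    """Concatenate several responses into one long output."""
    parts: list[str] = []
    context = prompt
    for _ in range(max_calls):
        chunk = complete(context)
        parts.append(chunk)
        # Feed the tail of the output so far back in and ask to continue.
        context = (prompt + "".join(parts)[-2_000:]
                   + "\nContinue from where you left off.")
    return "".join(parts)
```

In practice you would also stop early when a response ends cleanly (e.g. the API reports a natural stop rather than a length cutoff).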
Is Mistral 7B Instruct V0.3 good for coding tasks?
Yes. Mistral 7B Instruct V0.3 supports code generation, debugging, and refactoring, and its function calling and JSON mode capabilities suit structured coding workflows such as tool use and emitting machine-readable output.