Llava V1.6 Mistral 7b Hf
Llava V1.6 Mistral 7b Hf is available via OVHcloud with a 32K context window and up to 32,000 output tokens. Pricing: $0.29 per 1M input tokens and $0.29 per 1M output tokens.
Llava V1.6 Mistral 7b Hf Pricing & Specifications
What is Llava V1.6 Mistral 7b Hf?
Llava V1.6 Mistral 7b Hf is a vision-language model served by OVHcloud, with a 32K context window and up to 32,000 output tokens. It costs $0.29 per 1M input tokens and $0.29 per 1M output tokens.
Capabilities
Text, vision, JSON mode
Llava V1.6 Mistral 7b Hf Cost Examples
Short prompt (500 tokens): $0.000145
Medium prompt (2K tokens): $0.00058
Long output (4K tokens): $0.00116
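The example costs above follow directly from the per-token rates. A minimal sketch of the arithmetic, with the rates hardcoded from this page (the function name `request_cost` is illustrative, not part of any API):

```python
# Per-1M-token rates for Llava V1.6 Mistral 7b Hf, taken from this page.
INPUT_RATE = 0.29   # USD per 1M input tokens
OUTPUT_RATE = 0.29  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the USD cost of a single request at the rates above."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# The three examples above:
short = request_cost(500)          # short prompt
medium = request_cost(2_000)       # medium prompt
long_out = request_cost(0, 4_000)  # long output
```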
Count tokens for Llava V1.6 Mistral 7b Hf
Paste your prompt to see exact token counts and API cost estimates.
Open Token Counter

Similar Models to Llava V1.6 Mistral 7b Hf
Frequently Asked Questions
How much does Llava V1.6 Mistral 7b Hf cost per token?
Llava V1.6 Mistral 7b Hf costs $0.29 per 1M input tokens and $0.29 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.000435.
What is the context window for Llava V1.6 Mistral 7b Hf?
Llava V1.6 Mistral 7b Hf supports a context window of 32,000 tokens (32K). This determines the maximum combined length of your prompt and conversation history in a single API call.
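Before sending a request, it can help to check that the prompt plus the expected response fits inside that window. A rough sketch, assuming the common ~4-characters-per-token heuristic for English text (exact counts require the model's own tokenizer; `fits_in_context` is a hypothetical helper, not an API call):

```python
CONTEXT_WINDOW = 32_000  # tokens, per this page

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # For exact counts, use the model's actual tokenizer instead.
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = 1_000) -> bool:
    """Check that the prompt plus a reserved output budget fits the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW
```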
What is the maximum output length for Llava V1.6 Mistral 7b Hf?
Llava V1.6 Mistral 7b Hf can generate up to 32,000 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
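The multiple-calls-and-concatenate pattern can be sketched as a loop that re-sends the accumulated text and asks for a continuation until the model stops on its own. This is a hedged sketch: `complete` below is a placeholder stub standing in for the actual API client, and its `truncated` flag stands in for whatever the API reports as a length-limited finish reason.

```python
def complete(prompt: str) -> tuple[str, bool]:
    """Placeholder for a real API call; returns (text, truncated).
    'truncated' would come from the API's finish reason.
    Stubbed here so the sketch runs."""
    return "chunk", False

def generate_long(prompt: str, max_rounds: int = 4) -> str:
    """Request continuations until the model stops on its own,
    concatenating each response onto the prompt for the next call."""
    parts: list[str] = []
    for _ in range(max_rounds):
        text, truncated = complete(prompt + "".join(parts))
        parts.append(text)
        if not truncated:
            break
    return "".join(parts)
```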
Is Llava V1.6 Mistral 7b Hf good for coding tasks?
Llava V1.6 Mistral 7b Hf can handle basic coding tasks, but there are models specifically optimized for code generation that may perform better on complex programming problems.