Meta.Llama 3.2 90b Vision Instruct
Meta.Llama 3.2 90b Vision Instruct is available via OCI (Oracle Cloud Infrastructure) with a 128K context window and up to 4,000 output tokens. Pricing: $2.00/1M input tokens, $2.00/1M output tokens.
Meta.Llama 3.2 90b Vision Instruct Pricing & Specifications
What is Meta.Llama 3.2 90b Vision Instruct?
Meta.Llama 3.2 90b Vision Instruct is a multimodal large language model from Meta, available via OCI (Oracle Cloud Infrastructure) with a 128K context window and up to 4,000 output tokens. It costs $2.00 per 1M input tokens and $2.00 per 1M output tokens.
Capabilities
Text, vision, function calling
Meta.Llama 3.2 90b Vision Instruct Cost Examples
Short prompt (500 tokens)
$0.001000
Medium prompt (2K tokens)
$0.004000
Long output (4K tokens)
$0.008000
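The example costs above follow directly from the per-token rates. A minimal sketch of the arithmetic in Python (the rates come from this page; the function name is illustrative, not part of any OCI SDK):

```python
# Cost estimator for Meta.Llama 3.2 90b Vision Instruct on OCI,
# using the listed rate of $2.00 per 1M tokens for both input and output.
INPUT_RATE = 2.00 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated USD cost for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Reproducing the examples above:
print(f"{estimate_cost(500):.6f}")       # short prompt  -> 0.001000
print(f"{estimate_cost(2_000):.6f}")     # medium prompt -> 0.004000
print(f"{estimate_cost(0, 4_000):.6f}")  # long output   -> 0.008000
```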
Frequently Asked Questions
How much does Meta.Llama 3.2 90b Vision Instruct cost per token?
Meta.Llama 3.2 90b Vision Instruct costs $2.00 per 1M input tokens and $2.00 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.003000.
What is the context window for Meta.Llama 3.2 90b Vision Instruct?
Meta.Llama 3.2 90b Vision Instruct supports a context window of 128,000 tokens (128K). This determines the maximum combined length of your prompt and conversation history in a single API call.
What is the maximum output length for Meta.Llama 3.2 90b Vision Instruct?
Meta.Llama 3.2 90b Vision Instruct can generate up to 4,000 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
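The call-and-concatenate approach can be sketched as follows. This is an assumption-laden illustration: `generate_long` and the continuation prompt are hypothetical, and the `generate` callable stands in for whatever OCI client wrapper you actually use.

```python
MAX_OUTPUT_TOKENS = 4_000  # per-response cap listed for this model

def generate_long(generate, prompt: str, max_calls: int = 3) -> str:
    """Chain several capped responses into one longer output.

    `generate` is any callable taking (prompt, max_tokens) and returning
    the model's text; plug in your own OCI API wrapper here.
    """
    parts = []
    for _ in range(max_calls):
        text = generate(prompt, MAX_OUTPUT_TOKENS)
        if not text:  # model had nothing more to add
            break
        parts.append(text)
        # Feed the output back so the model continues where it stopped.
        prompt = prompt + text + "\n\nContinue from where you left off."
    return "".join(parts)
```

Note that each continuation call resends the accumulated text as input, so chained requests consume input tokens quickly against the 128K context window.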
Is Meta.Llama 3.2 90b Vision Instruct good for coding tasks?
Yes. With text understanding, vision, and function calling, Meta.Llama 3.2 90b Vision Instruct is well-suited to coding tasks such as code generation, debugging, and refactoring.