Codellama 70b Instruct
Codellama 70b Instruct is available via Perplexity with a 16K context window and up to 16,384 output tokens. Pricing: $0.70/1M input tokens, $2.80/1M output tokens.
Codellama 70b Instruct Pricing & Specifications
What is Codellama 70b Instruct?
Codellama 70b Instruct is a large language model from Meta's Code Llama family, served via Perplexity, with a 16K context window and up to 16,384 output tokens. It costs $0.70 per 1M input tokens and $2.80 per 1M output tokens.
Capabilities
text
Codellama 70b Instruct Cost Examples
- Short prompt (500 input tokens): $0.000350
- Medium prompt (2K input tokens): $0.00140
- Long output (4K output tokens): $0.01120
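The per-request costs above follow directly from the per-token rates. A minimal sketch of that arithmetic in Python (the rates come from the pricing listed on this page; the function name is illustrative):

```python
# Cost estimator for Codellama 70b Instruct via Perplexity,
# using the per-million-token rates quoted above.

INPUT_RATE = 0.70 / 1_000_000   # dollars per input token
OUTPUT_RATE = 2.80 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Reproduce the examples above:
print(f"{estimate_cost(500):.6f}")       # short prompt  -> 0.000350
print(f"{estimate_cost(2_000):.5f}")     # medium prompt -> 0.00140
print(f"{estimate_cost(0, 4_000):.5f}")  # long output   -> 0.01120
```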
Count tokens for Codellama 70b Instruct
Paste your prompt to see exact token counts and API cost estimates.
Frequently Asked Questions
How much does Codellama 70b Instruct cost per token?
Codellama 70b Instruct costs $0.70 per 1M input tokens and $2.80 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.002100.
What is the context window for Codellama 70b Instruct?
Codellama 70b Instruct supports a context window of 16,384 tokens (16K). This determines the maximum combined length of your prompt and conversation history in a single API call.
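A simple way to respect that limit is to check prompt plus history against the 16,384-token window before calling the API. The sketch below uses a rough character-based heuristic in place of a real tokenizer, and the function names and output reserve are assumptions for illustration:

```python
# Guard against exceeding the 16,384-token context window.
CONTEXT_WINDOW = 16_384

def count_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); in practice,
    # use the model's actual tokenizer for exact counts.
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, history: list[str],
                    reserve_output: int = 1_024) -> bool:
    """True if prompt + history still leave `reserve_output` tokens
    of headroom for the model's reply."""
    used = count_tokens(prompt) + sum(count_tokens(m) for m in history)
    return used + reserve_output <= CONTEXT_WINDOW
```

If the check fails, the usual remedies are trimming or summarizing the oldest turns of the conversation history.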
What is the maximum output length for Codellama 70b Instruct?
Codellama 70b Instruct can generate up to 16,384 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
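The multi-call approach can be sketched as a simple continuation loop. Everything below is a hypothetical illustration: `generate` is a placeholder for the real Perplexity API call, and the continuation prompt and tail-window size are assumptions, not part of the actual API:

```python
# Hypothetical sketch: stitch long outputs together across several calls.

def generate(prompt: str, max_tokens: int = 16_384) -> str:
    # Placeholder only; a real version would call the Perplexity API here.
    raise NotImplementedError

def generate_long(prompt: str, calls: int = 3, api=generate) -> str:
    """Call the model `calls` times, feeding recent output back in,
    and concatenate the resulting chunks."""
    parts = []
    context = prompt
    for _ in range(calls):
        chunk = api(context)
        if not chunk:  # stop early if the model produced nothing
            break
        parts.append(chunk)
        # Feed the tail of the output so far back as a continuation cue
        # (the 2,000-character window is an arbitrary choice).
        context = prompt + "".join(parts)[-2_000:] + "\n(continue)"
    return "".join(parts)
```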
Is Codellama 70b Instruct good for coding tasks?
Yes. Codellama 70b Instruct belongs to Meta's Code Llama family, which is fine-tuned specifically for code generation and code-focused instruction following, so coding tasks are its core strength.