
Codellama 34b Instruct

Codellama 34b Instruct is available via Perplexity with a 16K context window and up to 16,384 output tokens. Pricing: $0.35 per 1M input tokens, $1.40 per 1M output tokens.

Codellama 34b Instruct Pricing & Specifications

Input Price: $0.35 per 1M tokens
Output Price: $1.40 per 1M tokens
Context Window: 16,384 tokens (16K)
Max Output: 16,384 tokens
Provider: Perplexity

What is Codellama 34b Instruct?

Codellama 34b Instruct is a code-specialized large language model from Meta's Code Llama family, served via Perplexity with a 16K context window (16,384 tokens) and up to 16,384 output tokens. It costs $0.35 per 1M input tokens and $1.40 per 1M output tokens.

Capabilities

text

Codellama 34b Instruct Cost Examples

Short prompt (500 input tokens): $0.000175
Medium prompt (2K input tokens): $0.00070
Long output (4K output tokens): $0.00560
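The figures above follow directly from the per-token rates. A minimal Python sketch of the arithmetic (the rates are copied from the pricing table; the helper name is illustrative):

```python
# Estimate Codellama 34b Instruct API costs from token counts.
# Rates match the pricing table above; adjust if Perplexity changes them.
INPUT_RATE = 0.35 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.40 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int = 0) -> float:
    """Return the estimated cost in USD for one API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(f"${estimate_cost(500):.6f}")       # short prompt, input only
print(f"${estimate_cost(2_000):.5f}")     # medium prompt, input only
print(f"${estimate_cost(0, 4_000):.5f}")  # long output, output only
```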

Count tokens for Codellama 34b Instruct

Paste your prompt to see exact token counts and API cost estimates.

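If you only need a ballpark figure before reaching for the model's real tokenizer, a common rule of thumb is roughly 4 characters per token for English text. A sketch of that heuristic (an approximation only, not the model's actual tokenizer):

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    Use the model's real tokenizer for exact counts and billing."""
    return max(1, len(text) // 4)

prompt = "Write a function that reverses a linked list in Python."
print(approx_tokens(prompt))
```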

Similar Models to Codellama 34b Instruct

Llama 3.1 8b Instruct (Perplexity): $0.20/1M input, 131K context
Sonar Medium Chat (Perplexity): $0.60/1M input, 16K context
Mistral 7b Instruct (Perplexity): $0.070/1M input, 4K context
Mixtral 8x7b Instruct (Perplexity): $0.070/1M input, 4K context

Frequently Asked Questions

How much does Codellama 34b Instruct cost per token?
Codellama 34b Instruct costs $0.35 per 1M input tokens and $1.40 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.001050.
What is the context window for Codellama 34b Instruct?
Codellama 34b Instruct supports a context window of 16,384 tokens (16K). This determines the maximum combined length of your prompt and conversation history in a single API call.
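One practical consequence: long conversations must be trimmed to fit the window. A sketch of dropping the oldest turns until the history fits, leaving headroom for the reply (the 4-characters-per-token estimate is a rough assumption; use the real tokenizer in production):

```python
CONTEXT_WINDOW = 16_384  # tokens, from the spec table above

def fit_history(messages, reply_budget=1_024):
    """Drop oldest turns until the estimated token total fits the window."""
    def ntokens(msg):
        return max(1, len(msg) // 4)  # rough heuristic, not exact
    budget = CONTEXT_WINDOW - reply_budget
    kept = list(messages)
    while kept and sum(ntokens(m) for m in kept) > budget:
        kept.pop(0)  # drop the oldest turn first
    return kept
```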
What is the maximum output length for Codellama 34b Instruct?
Codellama 34b Instruct can generate up to 16,384 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
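The multi-call approach can be sketched as a loop that feeds the tail of the text generated so far back in as continuation context. Here `complete(prompt, max_tokens)` is a hypothetical stand-in for your actual API client call (for example, a chat completion against Perplexity's API), not a real library function:

```python
def generate_long(complete, prompt: str, rounds: int = 3,
                  max_tokens: int = 16_384) -> str:
    """Stitch together a long output across multiple API calls.
    `complete` is a caller-supplied function: (context, max_tokens) -> text."""
    parts = []
    context = prompt
    for _ in range(rounds):
        chunk = complete(context, max_tokens=max_tokens)
        if not chunk:  # model signalled it is done
            break
        parts.append(chunk)
        # Feed the tail of the output so far back in as continuation context.
        context = (prompt + "".join(parts)[-8_000:]
                   + "\n\nContinue from where you left off.")
    return "".join(parts)
```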
Is Codellama 34b Instruct good for coding tasks?
Codellama 34b Instruct can handle basic coding tasks, but there are models specifically optimized for code generation that may perform better on complex programming problems.