OpenAI gpt-oss-safeguard-120b
OpenAI gpt-oss-safeguard-120b is available via Bedrock Mantle with a 131K (131,072-token) context window and up to 65,536 output tokens. Pricing: $0.15/1M input tokens, $0.60/1M output tokens.
OpenAI gpt-oss-safeguard-120b Pricing & Specifications
What is OpenAI gpt-oss-safeguard-120b?
OpenAI gpt-oss-safeguard-120b is a large language model by OpenAI, available via Bedrock Mantle with a 131K context window and up to 65,536 output tokens. It costs $0.15 per 1M input tokens and $0.60 per 1M output tokens.
Capabilities
Text, function calling, reasoning, JSON mode
OpenAI gpt-oss-safeguard-120b Cost Examples
Short prompt (500 input tokens): $0.000075
Medium prompt (2K input tokens): $0.00030
Long output (4K output tokens): $0.00240
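The per-request figures above follow directly from the listed rates. A minimal sketch of the arithmetic, assuming the listed pricing of $0.15 per 1M input tokens and $0.60 per 1M output tokens:

```python
# Cost estimate for gpt-oss-safeguard-120b at the rates listed above.
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

print(request_cost(500, 0))     # short prompt, ~$0.000075
print(request_cost(2_000, 0))   # medium prompt, ~$0.00030
print(request_cost(0, 4_000))   # long output, ~$0.00240
```

The same function reproduces the FAQ estimate below: 1,000 input tokens plus a 500-token response comes to roughly $0.00045.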
Frequently Asked Questions
How much does OpenAI gpt-oss-safeguard-120b cost per token?
OpenAI gpt-oss-safeguard-120b costs $0.15 per 1M input tokens and $0.60 per 1M output tokens. For a typical 1,000-token request with a 500-token response, that works out to roughly $0.00045.
What is the context window for OpenAI gpt-oss-safeguard-120b?
OpenAI gpt-oss-safeguard-120b supports a context window of 131,072 tokens (131K). This determines the maximum combined length of your prompt and conversation history in a single API call.
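A quick sketch of a pre-flight budget check, assuming (as is common) that prompt tokens and requested output tokens share the 131,072-token window:

```python
# Budget check for gpt-oss-safeguard-120b; limits taken from the specs above.
CONTEXT_WINDOW = 131_072  # max combined tokens per call (assumed shared budget)
MAX_OUTPUT = 65_536       # cap on generated tokens per response

def fits(prompt_tokens: int, requested_output_tokens: int) -> bool:
    """True if the prompt plus the requested output fits within the limits."""
    return (requested_output_tokens <= MAX_OUTPUT
            and prompt_tokens + requested_output_tokens <= CONTEXT_WINDOW)

print(fits(100_000, 31_072))  # exactly fills the window
print(fits(100_000, 40_000))  # exceeds the 131,072-token window
```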
What is the maximum output length for OpenAI gpt-oss-safeguard-120b?
OpenAI gpt-oss-safeguard-120b can generate up to 65,536 tokens in a single response. If you need longer outputs, you can make multiple API calls and concatenate the results.
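The multi-call approach above can be sketched as a simple loop. `call_model` here is a hypothetical stand-in for a real Bedrock invocation (stubbed so the sketch is runnable), and feeding prior output back as context is one possible continuation strategy, not a documented API feature:

```python
def call_model(prompt: str) -> tuple[str, bool]:
    """Stub for a real API call; returns (text_chunk, is_done)."""
    call_model.n += 1                      # count invocations
    return f"chunk{call_model.n} ", call_model.n >= 3
call_model.n = 0

def generate_long(prompt: str) -> str:
    """Keep calling the model, appending each chunk, until it signals done."""
    parts: list[str] = []
    done = False
    while not done:
        # Include prior output in the prompt so the model can continue it.
        chunk, done = call_model(prompt + "".join(parts))
        parts.append(chunk)
    return "".join(parts)

result = generate_long("Write a long report: ")
# result == "chunk1 chunk2 chunk3 "
```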
Is OpenAI gpt-oss-safeguard-120b good for coding tasks?
OpenAI gpt-oss-safeguard-120b supports function calling, reasoning, and JSON mode, capabilities well suited to coding tasks such as code generation, debugging, and refactoring.