
Token Counter

Count tokens for any LLM model. Exact token counts for OpenAI models via tiktoken, character-based estimates for Claude, Gemini, Llama, and Mistral. See cost per API call.

Free · No Signup · No Server Uploads · Zero Tracking

Using the o200k_base encoding via js-tiktoken for an exact token count. Pricing: $2.50 per 1M input tokens, $10.00 per 1M output tokens.

Embed code
<iframe src="https://tokencalc.dev/embed/token-counter" width="100%" height="600" frameborder="0" title="Token Counter - tokencalc"></iframe>
<p style="font-size:12px;text-align:center;margin-top:4px;">
  <a href="https://tokencalc.dev/tools/token-counter" target="_blank" rel="noopener">Powered by tokencalc</a>
</p>

How to Use Token Counter

  1. Enter your text

    Paste or type the text you want to count tokens for in the input area.

  2. Select a model

    Choose the LLM you plan to use from the dropdown. OpenAI models use exact tiktoken encoding.

  3. Read the results

    View the token count, word count, character count, and estimated API cost for your text.

  4. Copy or adjust

    Copy the stats to your clipboard, or try different models to compare token counts.
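The non-OpenAI stats above come down to simple string math. A minimal sketch, assuming the chars/4 heuristic the tool describes for estimated models (the function name is illustrative, not the tool's actual code):

```javascript
// Compute the stats the counter displays for a block of text.
// The token figure uses the chars/4 heuristic applied to non-OpenAI
// models; OpenAI models get exact counts from tiktoken instead.
function textStats(text) {
  const characters = text.length;
  const trimmed = text.trim();
  const words = trimmed === "" ? 0 : trimmed.split(/\s+/).length;
  const tokens = Math.ceil(characters / 4); // rough estimate for English
  return { tokens, words, characters };
}

console.log(textStats("Count tokens for any LLM model."));
// → { tokens: 8, words: 6, characters: 31 }
```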

Frequently Asked Questions

How accurate are the token counts?

For OpenAI models (GPT-4o, GPT-4 Turbo, o1, o3-mini), we use the official tiktoken encodings for exact counts. For Claude, Gemini, Llama, and Mistral models, we estimate tokens as the character count divided by 4, which is approximately 97% accurate for English text.

Is my text uploaded to a server?

No. All token counting happens entirely in your browser using JavaScript. Your text never leaves your device.

What is a token?

A token is a chunk of text that LLMs process. Tokens can be as short as one character or as long as a full word. On average, one token is about 4 characters or 0.75 words in English.
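Those rules of thumb invert easily. A quick sketch of the conversions (heuristics only, not exact tokenizer output):

```javascript
// Rough English-text conversions: 1 token ≈ 4 characters ≈ 0.75 words.
const tokensFromChars = (chars) => chars / 4;
const tokensFromWords = (words) => words / 0.75;
const wordsFromTokens = (tokens) => tokens * 0.75;

console.log(tokensFromChars(400)); // → 100
console.log(tokensFromWords(75));  // → 100
console.log(wordsFromTokens(100)); // → 75
```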

Why do different models give different token counts?

Different models use different tokenizer vocabularies (called encodings). GPT-4o uses the o200k_base encoding while GPT-4 Turbo uses cl100k_base. These break text into tokens differently, so the same text can yield different counts.

How is the cost calculated?

Cost is calculated as (token_count / 1,000,000) * price_per_million_tokens. We show both input and output cost since most providers charge different rates for each.
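In code, using the GPT-4o pricing quoted above ($2.50/1M input, $10/1M output); the function name is illustrative:

```javascript
// cost = (tokenCount / 1,000,000) * pricePerMillionTokens
function apiCost(tokenCount, pricePerMillion) {
  return (tokenCount / 1_000_000) * pricePerMillion;
}

const INPUT_PRICE = 2.5; // $ per 1M input tokens
const OUTPUT_PRICE = 10; // $ per 1M output tokens

console.log(apiCost(50_000, INPUT_PRICE));  // → 0.125
console.log(apiCost(50_000, OUTPUT_PRICE)); // → 0.5
```

A 50,000-token prompt therefore costs about 12.5 cents to send, and generating 50,000 tokens back would cost about 50 cents.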