tcount is a simple CLI tool written in Rust that counts OpenAI-model tokens in a given text and estimates cost from a built-in pricing table (last updated 2025-05-08). It uses the tiktoken_rust library for byte-pair-encoding (BPE) tokenisation and clap for argument parsing.
- Count tokens for any text passed as an argument, via file, or through stdin.
- Support for model aliases: latest → gpt-4.1, latest-mini → gpt-4.1-mini, latest-nano → gpt-4.1-nano.
- Fallback to explicit BPE encoding if automatic model-to-tokeniser mapping fails.
- Cost estimation based on per-model pricing for input and output tokens.
- Customisable output-token estimate ratio.
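Alias resolution amounts to a small lookup over the names listed above. A minimal sketch (the function name is illustrative, not tcount's actual code; unknown names fall through so the tokeniser mapping, or the explicit-BPE fallback, can handle them):

```rust
/// Map a README-documented alias to its concrete model name.
/// Unknown names pass through unchanged.
fn resolve_alias(model: &str) -> &str {
    match model {
        "latest" => "gpt-4.1",
        "latest-mini" => "gpt-4.1-mini",
        "latest-nano" => "gpt-4.1-nano",
        other => other,
    }
}

fn main() {
    assert_eq!(resolve_alias("latest"), "gpt-4.1");
    assert_eq!(resolve_alias("gpt-4.1-nano"), "gpt-4.1-nano");
    println!("alias resolution ok");
}
```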
| Model | Input ($/1M) | Cached Input ($/1M) | Output ($/1M) |
|---|---|---|---|
| gpt-4.1 | 2.00 | 0.50 | 8.00 |
| gpt-4.1-mini | 0.40 | 0.10 | 1.60 |
| gpt-4.1-nano | 0.10 | 0.025 | 0.40 |
| openai-o3 | 10.00 | 2.50 | 40.00 |
| openai-o4-mini | 1.10 | 0.275 | 4.40 |
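The cost estimate is plain arithmetic over this table: token count divided by one million, times the per-million rate, with output tokens estimated as input tokens times the output ratio. A minimal sketch of that formula (the struct and function names are illustrative, not tcount's actual code):

```rust
/// Per-million-token prices, as in the table above.
struct Pricing {
    input: f64,
    output: f64,
}

/// Estimated total cost in dollars: input tokens are counted,
/// output tokens are estimated as input_tokens * ratio.
fn estimate_cost(p: &Pricing, input_tokens: u64, ratio: f64) -> f64 {
    let output_tokens = (input_tokens as f64 * ratio).round();
    input_tokens as f64 / 1e6 * p.input + output_tokens / 1e6 * p.output
}

fn main() {
    // gpt-4.1: $2.00 input, $8.00 output per 1M tokens.
    let gpt41 = Pricing { input: 2.00, output: 8.00 };
    // 4 input tokens at ratio 1.0 → $0.000008 + $0.000032 = $0.000040.
    let cost = estimate_cost(&gpt41, 4, 1.0);
    assert!((cost - 0.000040).abs() < 1e-12);
    println!("total estimated cost: ${cost:.6}");
}
```

The same arithmetic reproduces the worked examples below, e.g. 12 345 gpt-4.1-mini input tokens at ratio 0.5 give a total of roughly $0.014815.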
Install with Cargo:

```
cargo install --path .
```

Ensure your Cargo bin directory (usually ~/.cargo/bin) is on your PATH:

```
export PATH="$HOME/.cargo/bin:$PATH"
```

Or build from source and install manually:

```
git clone https://github.com/yourusername/tcount.git
cd tcount
cargo build --release
sudo cp target/release/tcount /usr/local/bin/
sudo chmod 755 /usr/local/bin/tcount
```

```
USAGE:
    tcount [OPTIONS] [--file <FILE>] [<TEXT>]

ARGS:
    <TEXT>    Direct text to encode; conflicts with --file

OPTIONS:
    -m, --model <MODEL>                Model name or alias (default: "latest")
    -f, --file <FILE>                  Read input from a file
    -r, --estimate-output-ratio <R>    Estimate output tokens as a ratio of input (default: 1.0)
    -l, --list                         List supported encodings and aliases, then exit
    -h, --help                         Print help information
    -V, --version                      Print version information
```
- Count tokens and estimate cost for a short string (default model = gpt-4.1, output ratio = 1.0):

```
tcount "Hello, world!"
```

```
Model: gpt-4.1
Pricing date: 2025-05-08
Input tokens: 4
Estimated output tokens: 4
Cost for input tokens: $0.000008
Cost for output tokens: $0.000032
Total estimated cost: $0.000040
```
- Count tokens in a file, specifying a model and a custom output ratio:

```
tcount --file essay.txt --model gpt-4.1-mini --estimate-output-ratio 0.5
```

```
Model: gpt-4.1-mini
Pricing date: 2025-05-08
Input tokens: 12 345
Estimated output tokens: 6 173
Cost for input tokens: $0.004938
Cost for output tokens: $0.009877
Total estimated cost: $0.014815
```
- Read from stdin and use a reasoning model:

```
cat transcript.txt | tcount --model openai-o4-mini --estimate-output-ratio 0.2
```

```
Model: openai-o4-mini
Pricing date: 2025-05-08
Input tokens: 5 000
Estimated output tokens: 1 000
Cost for input tokens: $0.005500
Cost for output tokens: $0.004400
Total estimated cost: $0.009900
```
- List all supported encodings and aliases:

```
tcount --list
```

```
Supported encodings for tiktoken_rust:
cl100k_base
p50k_base
p50k_edit
…

Aliased models:
latest → gpt-4.1
latest-mini → gpt-4.1-mini
latest-nano → gpt-4.1-nano
```
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Open a pull request with a clear description of your changes.
This project is licensed under the MIT Licence. See LICENSE for details.