tokenr Ruby SDK

Automatic LLM cost tracking in a few lines of code.

Track costs from OpenAI, Anthropic, and other LLM providers with minimal code changes. Get real-time visibility into spending by agent, feature, team, or any dimension you need.

Features

  • Minimal setup — configure once, wrap your client, you're done
  • Async by default — batches requests in a background thread; never adds latency
  • Multi-provider — OpenAI and Anthropic today; manual tracking for anything else
  • Rich attribution — agent, feature, team, and custom tags per request
  • Production-ready — tracking failures are silent; your app always runs

Installation

# Gemfile
gem "tokenr-ruby"

Then run:

bundle install

Or install directly:

gem install tokenr-ruby

The gem is named tokenr-ruby on RubyGems. Once installed, you require "tokenr" as normal.

Quickstart

OpenAI

require "openai"
require "tokenr"

Tokenr.configure do |c|
  c.api_key  = ENV["TOKENR_TOKEN"]
  c.agent_id = "my-app"           # optional default
end

client  = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
tracked = Tokenr::Integrations::OpenAI.wrap(client)

response = tracked.chat(parameters: {
  model:    "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }]
})
# Cost is automatically tracked to Tokenr

Anthropic

require "anthropic"
require "tokenr"

Tokenr.configure do |c|
  c.api_key = ENV["TOKENR_TOKEN"]
end

client  = Anthropic::Client.new(api_key: ENV["ANTHROPIC_API_KEY"])
tracked = Tokenr::Integrations::Anthropic.wrap(client)

response = tracked.messages(
  model:      "claude-opus-4-5",
  max_tokens: 1024,
  messages:   [{ role: "user", content: "Hello!" }]
)
# Automatically tracked!

Configuration

Environment Variables

export TOKENR_TOKEN="your-token"

With TOKENR_TOKEN set, the SDK picks it up automatically; passing it explicitly is equivalent:

Tokenr.configure do |c|
  c.api_key = ENV["TOKENR_TOKEN"]  # optional when TOKENR_TOKEN is set
end

All Options

Tokenr.configure do |c|
  c.api_key        = ENV["TOKENR_TOKEN"]     # required
  c.agent_id       = "my-app"                # default agent ID for all requests
  c.team_id        = nil                     # default team ID
  c.default_tags   = { environment: "prod" } # merged into every request
  c.async          = true                    # send in background (recommended)
  c.batch_size     = 100                     # flush after this many queued events
  c.flush_interval = 5                       # flush every N seconds
end
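The batch_size and flush_interval options work together: queued events are flushed whenever either threshold is reached. A minimal sketch of that interaction, assuming a hypothetical Batcher class (illustrative only, not the gem's internals):

```ruby
# Illustrative sketch: flush when the queue reaches batch_size events,
# or when flush_interval seconds have passed since the last flush.
class Batcher
  def initialize(batch_size:, flush_interval:, &flush)
    @batch_size     = batch_size
    @flush_interval = flush_interval
    @flush          = flush
    @events         = []
    @last_flush     = Time.now
  end

  def push(event, now: Time.now)
    @events << event
    flush!(now) if @events.size >= @batch_size ||
                   now - @last_flush >= @flush_interval
  end

  def flush!(now = Time.now)
    return if @events.empty?
    @flush.call(@events)   # in the real SDK this would POST to the API
    @events     = []
    @last_flush = now
  end
end
```

The real SDK runs this loop on a background thread; the sketch only shows when a flush is triggered.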

Disable in Development

Tracking failures are silent, so the simplest way to disable tracking outside production is to leave TOKENR_TOKEN unset there. To also keep the background thread out of development, enable async only in production:

Tokenr.configure do |c|
  c.api_key = ENV["TOKENR_TOKEN"]  # unset in development => nothing is recorded
  c.async   = ENV["RAILS_ENV"] == "production"
end

Advanced Usage

Track by Agent

# Option 1: default at configure time
Tokenr.configure { |c| c.agent_id = "support-bot" }

# Option 2: per-wrapper
tracked = Tokenr::Integrations::OpenAI.wrap(client, agent_id: "sales-bot")

Track by Feature

tracked = Tokenr::Integrations::OpenAI.wrap(client,
  agent_id:     "support-bot",
  feature_name: "ticket-summary"
)

Multi-Tenant Tracking

# Wrap with a team_id to roll up costs per customer/team
def ai_client_for(team)
  Tokenr::Integrations::OpenAI.wrap(
    base_client,
    agent_id: "shared-bot",
    tags:     { team_id: team.id, plan: team.plan }
  )
end

Custom Tags

tracked = Tokenr::Integrations::Anthropic.wrap(client,
  tags: { customer_id: "cust_123", language: "es" }
)

Manual Tracking

For providers without a built-in integration, or when you want explicit control:

Tokenr.track(
  provider:      "cohere",
  model:         "command-r-plus",
  input_tokens:  1200,
  output_tokens: 400,
  agent_id:      "research-bot",
  feature_name:  "summarization",
  latency_ms:    320
)

Querying Costs

# Costs for the last 7 days
# (7.days.ago requires ActiveSupport; use Time.now - 7 * 86_400 outside Rails)
Tokenr.costs(start_date: 7.days.ago.iso8601, end_date: Time.now.iso8601)

# Grouped by agent
Tokenr.client.get_costs_by_agent(limit: 20)

# Time-series
Tokenr.client.get_timeseries(interval: "day")

How It Works

  1. Tokenr::Integrations::OpenAI.wrap(client) returns a thin wrapper around your existing client
  2. After each call the wrapper reads token counts from the response usage field
  3. Events are pushed onto an in-process queue and flushed to Tokenr in the background
  4. If tracking fails for any reason, the exception is swallowed — your app is unaffected
  5. On process exit, at_exit flushes any remaining queued events
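The wrapping in steps 1–2 can be sketched with Ruby's SimpleDelegator. The TrackedClient class and queue handling below are assumptions for illustration, not the gem's actual internals:

```ruby
require "delegate"

# Illustrative wrapper: forwards every call to the real client, and after a
# chat call records token counts from the response's usage field.
class TrackedClient < SimpleDelegator
  def initialize(client, queue)
    super(client)
    @queue = queue
  end

  def chat(parameters:)
    response = __getobj__.chat(parameters: parameters)
    begin
      usage = response["usage"] || {}
      @queue << {
        model:         parameters[:model],
        input_tokens:  usage["prompt_tokens"],
        output_tokens: usage["completion_tokens"]
      }
    rescue StandardError
      # tracking errors are swallowed; the response is returned regardless
    end
    response
  end
end
```

Because the wrapper delegates everything else unchanged, the rest of your code never needs to know tracking is happening.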

Supported Providers

Provider  | Auto-Tracking | Manual Tracking
----------|---------------|----------------
OpenAI    | Yes           | Yes
Anthropic | Yes           | Yes
Cohere    | Coming soon   | Yes
Custom    | No            | Yes

Getting Your API Token

  1. Sign up at tokenr.co
  2. Go to API Tokens and create a token
  3. Copy it — shown only once

export TOKENR_TOKEN="your-token-here"

Security

This SDK is open source so you can audit exactly what data is sent and when. The short version:

  • Only token counts, model names, and your attribution metadata are transmitted
  • No prompt content or response content ever leaves your application
  • All requests use HTTPS
  • Tracking runs on a background thread and cannot block your main thread
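As an illustration of the first two points, a tracked event might look like the hash below. The exact field names are an assumption, not the gem's documented wire format; the point is what is present (counts, model, metadata) and what is absent (no prompt or response text):

```ruby
# Hypothetical event shape — field names are illustrative.
event = {
  provider:      "openai",
  model:         "gpt-4o",
  input_tokens:  1200,
  output_tokens: 400,
  agent_id:      "support-bot",
  tags:          { environment: "prod" }
}
# Note: no :messages and no :content key — prompt and completion text
# are never part of the payload.
```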

License

MIT — see LICENSE.txt
