chitresh99/Richter

Richter

Richter is an LLM benchmarking tool.

Features

- Test multiple LLM models simultaneously
- Measure response times, token usage, and success rates
- Support for concurrent and sequential execution
- Export results to JSON or CSV format
- Real-time progress tracking with ETA
- Graceful error handling and recovery

Prerequisites

- Go installed (the tool is run with `go run .`)
- An OpenRouter API key

Quick Start

1. Set your API key:

   ```sh
   export OPENROUTER_API_KEY="your-api-key-here"
   ```

2. Test connectivity:

   ```sh
   go run . --test
   ```

3. Run a basic benchmark:

   ```sh
   go run .
   ```

Usage Examples

```sh
# Simple benchmark with default model
go run .

# Multiple models
go run . --models "openai/gpt-oss-20b:free,mistralai/mistral-small-3.2-24b-instruct:free"

# Custom prompts
go run . --prompts "Hello,How are you?,Tell me a joke"

# Multiple iterations for reliability
go run . --iterations 3

# Concurrent testing
go run . --concurrent 3 --iterations 2

# Export to CSV
go run . --export csv --output my_results.csv
```

About

A CLI tool for benchmarking LLMs.
