# benchgecko-go

Go client for BenchGecko, the AI model data platform. Look up benchmarks, compare models head-to-head, explore provider pricing, and estimate inference costs with an idiomatic Go API and zero dependencies.

BenchGecko tracks 414 models across 55 providers and 40 benchmarks, giving developers and teams the data they need to pick the right model for every task.

## Installation

```sh
go get github.com/BenchGecko/benchgecko-go
```

## Quick Start

```go
package main

import (
    "fmt"

    bg "github.com/BenchGecko/benchgecko-go"
)

func main() {
    // Look up a model
    model, ok := bg.GetModel("claude-3-5-sonnet")
    if ok {
        fmt.Println(model.Name)               // Claude 3.5 Sonnet
        fmt.Println(model.Provider)           // Anthropic
        fmt.Println(model.Benchmarks["mmlu"]) // 88.7
    }

    // Estimate cost for 4,000 input and 1,000 output tokens
    cost, err := bg.EstimateCost("gpt-4o", 4000, 1000)
    if err == nil {
        fmt.Printf("Total: $%.4f\n", cost.TotalCost) // Total: $0.0200
    }
}
```

## API Reference

### `GetModel(slug) (Model, bool)`

Returns the full data struct for a model, including benchmark scores and pricing. The boolean indicates whether the model was found.

```go
m, ok := bg.GetModel("gpt-4o")
if ok {
    fmt.Printf("%s (%s) - %d token context\n", m.Name, m.Provider, m.ContextWindow)
    fmt.Printf("MMLU: %.1f, HumanEval: %.1f\n", m.Benchmarks["mmlu"], m.Benchmarks["humaneval"])
}
```

### `CompareModels(slugA, slugB) (Comparison, error)`

Side-by-side comparison across every tracked benchmark, plus a pricing cost ratio. Useful for building comparison tables or automated model selection.

```go
cmp, err := bg.CompareModels("gpt-4o", "claude-3-5-sonnet")
if err == nil {
    he := cmp.Benchmarks["humaneval"]
    fmt.Printf("HumanEval: %.1f vs %.1f (winner: %s)\n", *he.A, *he.B, he.Winner)
    fmt.Printf("Cheaper: %s (ratio: %.3f)\n", cmp.Pricing.CheaperModel, cmp.Pricing.CostRatio)
}
```

### `GetPricing(provider) ([]ProviderPricing, error)`

Lists every model from a provider with input/output pricing per million tokens and context window size.

```go
items, err := bg.GetPricing("anthropic")
if err == nil {
    for _, p := range items {
        fmt.Printf("%s: $%.2f/M in, $%.2f/M out\n", p.Name, p.InputPricePer1M, p.OutputPricePer1M)
    }
}
// Claude 3.5 Sonnet: $3.00/M in, $15.00/M out
// Claude 3 Haiku: $0.25/M in, $1.25/M out
```

### `ListBenchmarks() []Benchmark`

Returns metadata for all tracked benchmarks: name, full name, description, and scoring scale.

```go
for _, b := range bg.ListBenchmarks() {
    fmt.Printf("%s (%s): %s\n", b.Name, b.FullName, b.Description)
}
```

### `EstimateCost(model, inputTokens, outputTokens) (CostEstimate, error)`

Calculates the USD cost of a single inference call, broken down into input and output token costs.

```go
est, err := bg.EstimateCost("deepseek-v3", 10000, 2000)
if err == nil {
    fmt.Printf("Input: $%.4f  Output: $%.4f  Total: $%.4f\n",
        est.InputCost, est.OutputCost, est.TotalCost)
}
```
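The estimate itself is plain per-million-token arithmetic. A minimal self-contained sketch of that calculation (the `estimate` helper and the $2.50/M input, $10.00/M output rates used for `gpt-4o` are illustrative assumptions, not values read from the library):

```go
package main

import "fmt"

// estimate reproduces the per-million-token arithmetic behind a cost
// estimate. Rates are USD per 1M tokens.
func estimate(inPer1M, outPer1M float64, inputTokens, outputTokens int) (in, out, total float64) {
    in = float64(inputTokens) * inPer1M / 1_000_000
    out = float64(outputTokens) * outPer1M / 1_000_000
    return in, out, in + out
}

func main() {
    // Assumed gpt-4o rates: $2.50/M input, $10.00/M output.
    in, out, total := estimate(2.50, 10.00, 4000, 1000)
    fmt.Printf("Input: $%.4f  Output: $%.4f  Total: $%.4f\n", in, out, total)
    // Input: $0.0100  Output: $0.0100  Total: $0.0200
}
```

With those assumed rates, 4,000 input and 1,000 output tokens come to $0.0200, matching the Quick Start figure.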

### `ListModels()` / `ListProviders()`

Convenience helpers that return sorted slices of all available model slugs and provider keys.

```go
fmt.Println(bg.ListModels())
// [claude-3-5-sonnet claude-3-haiku command-r-plus deepseek-v3 ...]

fmt.Println(bg.ListProviders())
// [anthropic cohere deepseek google meta mistral openai]
```
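These helpers combine naturally with per-model lookups for simple automated selection, e.g. the cheapest model that clears a benchmark threshold. A self-contained sketch of the pattern (the `candidate` struct and inline two-model snapshot are hypothetical stand-ins; real code would range over `bg.ListModels()` and call `bg.GetModel` for each slug, and the Claude 3 Haiku MMLU score shown is invented for illustration):

```go
package main

import "fmt"

// candidate mirrors the fields a selection loop would read per model.
type candidate struct {
    Slug            string
    MMLU            float64
    InputPricePer1M float64
}

// cheapestAbove returns the slug with the lowest input price among
// candidates scoring at least minScore on MMLU.
func cheapestAbove(models []candidate, minScore float64) (string, bool) {
    best, found := "", false
    var bestPrice float64
    for _, m := range models {
        if m.MMLU < minScore {
            continue
        }
        if !found || m.InputPricePer1M < bestPrice {
            best, bestPrice, found = m.Slug, m.InputPricePer1M, true
        }
    }
    return best, found
}

func main() {
    // Hypothetical snapshot (Haiku's MMLU score here is made up).
    models := []candidate{
        {"claude-3-5-sonnet", 88.7, 3.00},
        {"claude-3-haiku", 75.2, 0.25},
    }
    if slug, ok := cheapestAbove(models, 80); ok {
        fmt.Println(slug) // claude-3-5-sonnet
    }
}
```

Haiku is cheaper but misses the 80-point cutoff, so the loop falls back to the cheapest qualifying model.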

## Data Coverage

The bundled snapshot covers the most-used models from OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, and Cohere. For the full catalogue of 414 models, 55 providers, and 40 benchmarks, visit benchgecko.ai.

Pricing data and benchmark scores are updated with each module release. For real-time pricing, check the pricing page.

## Requirements

Go 1.21 or later. No external dependencies.

## License

MIT
