
commonbaseapp/benchmarks


benchmarks

This repo contains a performance analysis comparing the Commonbase API to OpenAI's API. It includes:

  • a script to generate live benchmarks comparing Commonbase performance to OpenAI
  • benchmarks we ran on Sep 14, 2023
  • a Python notebook analyzing the recorded benchmarks.

Install dependencies:

bun install

If you don't have bun installed yet, see: Install Bun

Setup .env

Copy the .env.example file to .env and fill in the values.
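A filled-in `.env` might look like the fragment below. The values are placeholders, and the exact variable names should match what `.env.example` defines — `OPENAI_API_KEY` in particular is an assumption based on the steps that follow:

```shell
# .env — replace the placeholders with your own values
COMMONBASE_API_KEY=...
COMMONBASE_PROJECT_ID=...
OPENAI_API_KEY=...
```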

Get COMMONBASE_API_KEY

Sign up on Commonbase and copy the API key from the onboarding flow.

Get COMMONBASE_PROJECT_ID

Copy the Project ID from Settings in the web app.

Get OPENAI_API_KEY

Create a new OpenAI API key on the OpenAI API keys page.

Run the benchmark

bun providers.ts > providers-$(date +%s).csv
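The command above runs the actual benchmark script and redirects its CSV output to a timestamped file. As a rough illustration of the kind of per-request latency sampling such a script performs, here is a minimal sketch — the helper name, the demo call, and the CSV columns are illustrative assumptions, not the repo's code:

```typescript
// Minimal sketch of per-request latency sampling.
// Not the actual providers.ts code; names here are illustrative.
async function timed<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = performance.now();
  const result = await fn(); // e.g. one chat-completion request
  const ms = performance.now() - start;
  return { result, ms };
}

// Usage: wrap each provider call and emit one CSV row per sample.
async function demo() {
  const { result, ms } = await timed(
    () => new Promise<string>((resolve) => setTimeout(() => resolve("ok"), 50)),
  );
  console.log("provider,model,latency_ms");
  console.log(`example,gpt-3.5-turbo,${ms.toFixed(1)}`);
  return { result, ms };
}
```

Redirecting stdout, as in the command above, is what turns the logged rows into a CSV file.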

Analysis

The results of our benchmarks and the analysis of those results can be found in the analysis folder. We ran two different benchmarks for each of the gpt-4 and gpt-3.5-turbo models.

  1. Single token completion (max_tokens=1)
  2. 128 tokens completion (max_tokens=128)

The prompts used can be found in the providers.ts file.
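The notebook does the full analysis, but a mean latency per provider and model can also be computed directly from a recorded CSV. A sketch, assuming columns named `provider`, `model`, and `latency_ms` — the actual column names in the generated files may differ, so check the header row first:

```typescript
// Compute mean latency per (provider, model) from a benchmark CSV string.
// Column names are assumptions; check the header row of your CSV.
function meanLatency(csv: string): Map<string, number> {
  const [header, ...rows] = csv.trim().split("\n");
  const cols = header.split(",");
  const iProvider = cols.indexOf("provider");
  const iModel = cols.indexOf("model");
  const iLatency = cols.indexOf("latency_ms");

  // Accumulate sum and count per provider/model pair.
  const sums = new Map<string, { total: number; n: number }>();
  for (const row of rows) {
    const f = row.split(",");
    const key = `${f[iProvider]}/${f[iModel]}`;
    const entry = sums.get(key) ?? { total: 0, n: 0 };
    entry.total += Number(f[iLatency]);
    entry.n += 1;
    sums.set(key, entry);
  }

  const means = new Map<string, number>();
  for (const [key, { total, n }] of sums) means.set(key, total / n);
  return means;
}
```

Run with `bun` after loading a recorded `providers-*.csv` file as a string.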
