
Zeno 🤝 OpenAI Evals

Use Zeno to visualize the results of OpenAI Evals.

[Video: openai-evals-zeno.mov, an example of using zeno-evals to explore the results of an OpenAI eval on multiple-choice medicine questions (MedMCQA)]

Usage

pip install zeno-evals

Run an evaluation following the OpenAI Evals instructions. This will produce a cache file in /tmp/evallogs/.

Pass this file to the zeno-evals command:

zeno-evals /tmp/evallogs/my_eval_cache.jsonl

Example

A single example looking at US tort law questions:

zeno-evals ./examples/example.jsonl

An example comparing two models:

zeno-evals ./examples/crossword-turbo.jsonl --second-results-file ./examples/crossword-turbo-0301.jsonl

Lastly, we can pass a file of additional Zeno functions to provide more context for the results:

pip install wordfreq
zeno-evals ./examples/crossword-turbo.jsonl --second-results-file ./examples/crossword-turbo-0301.jsonl --functions_file ./examples/crossword_fns.py
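The functions file is a regular Python module of Zeno functions. Below is a minimal sketch of what such a file might contain, assuming the zenoml @distill API (the exact import names and return type depend on your installed Zeno version) and using wordfreq to score how rare each model answer is. The function and column names here are illustrative assumptions, not taken from examples/crossword_fns.py.

```python
# my_fns.py -- a hypothetical functions file, sketched against the zenoml
# @distill API; check your installed Zeno version for the exact signature.
from wordfreq import zipf_frequency
from zeno import DistillReturn, ZenoOptions, distill


@distill
def answer_rarity(df, ops: ZenoOptions):
    """Zipf frequency of each model answer; lower values mean rarer words."""
    # ops.output_column is assumed to hold the model's answer text.
    rarity = df[ops.output_column].apply(
        lambda text: zipf_frequency(str(text).strip(), "en")
    )
    return DistillReturn(distill_output=rarity)
```

Save a file like this alongside your results and point --functions_file at it, as in the command above; the derived column then becomes available for filtering and charting in the Zeno UI.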
