
Evaluation benchmarks (lm-eval-harness) #2

Open
justheuristic opened this issue Jun 23, 2022 · 0 comments
justheuristic commented Jun 23, 2022

Thanks for the awesome work! (and especially for choosing to make it freely available)

If you have time, please also consider running the evaluation benchmarks from lm-eval-harness:
https://github.com/EleutherAI/lm-evaluation-harness

[Despite it containing a ton of different benchmarks, you only need to implement one model interface; the harness then runs all the benchmarks for you.]

It is a more-or-less standard tool for benchmarking how well your model performs on a range of tasks (generation, common sense, math, etc.).

There is a huge number of tasks, so if you want to pick an initial set, consider the ones that GPT-J reports here: https://huggingface.co/EleutherAI/gpt-j-6B#evaluation-results
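
For reference, a minimal sketch of what running a few of those GPT-J tasks could look like, using the harness's built-in Hugging Face adapter. This is only illustrative: the checkpoint below is a placeholder, and task/argument names may differ slightly between harness versions.

```python
# Minimal sketch (assumes lm-eval-harness is installed, e.g. from the repo above):
# evaluate a Hugging Face causal LM on a few tasks from the GPT-J model card.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="gpt2",                                 # the harness's generic HF causal-LM adapter
    model_args="pretrained=EleutherAI/gpt-j-6B",  # placeholder checkpoint; swap in this project's model
    tasks=["lambada", "hellaswag", "winogrande", "piqa"],
    num_fewshot=0,
    batch_size=1,
)
print(results["results"])
```

For a model that is not a plain Hugging Face checkpoint, the same thing should work by subclassing the harness's LM base class and implementing its log-likelihood and generation methods, which is the single interface mentioned above.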

justheuristic changed the title from "Report evaluation benchmarks (lm-eval-harness)" to "Evaluation benchmarks (lm-eval-harness)" on Jun 23, 2022.