We currently output evaluation results as table artifacts, which gives us all the data but not an easy at-a-glance view of evals.
We'd like to add these results to the table summary, which
https://github.com/EleutherAI/lm-evaluation-harness/blob/40f2d19fc1c2b2c39313ad62697009272a758f3d/lm_eval/logging_utils.py#L205
https://github.com/EleutherAI/lm-evaluation-harness/blob/40f2d19fc1c2b2c39313ad62697009272a758f3d/lm_eval/logging_utils.py#L327
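One possible shape for this (a hypothetical sketch, not the harness's actual implementation): flatten the nested results dict that the harness produces into scalar keys, which could then be pushed into a W&B run summary via the standard `run.summary.update(...)` call. The `flatten_results` name and the exact dict shape assumed here are illustrative.

```python
def flatten_results(results: dict) -> dict:
    """Flatten a results dict shaped like {"results": {task: {metric: value}}}
    into flat "task/metric" keys, keeping only numeric values so they can be
    logged as run-summary scalars."""
    summary = {}
    for task, metrics in results.get("results", {}).items():
        for metric, value in metrics.items():
            if isinstance(value, (int, float)):
                summary[f"{task}/{metric}"] = value
    return summary

# Usage sketch (assumes an active wandb run object):
#   run.summary.update(flatten_results(results))
```

Flattening to scalar summary keys is what makes the numbers show up directly on the run page, rather than only inside a table artifact that has to be opened to inspect.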
veekaybee