performance benchmark #17
Hi, do you mean adding the performance results directly to the table? For now, we have listed some external resources under the Evals on open LLMs section, which cover the performance of the models on various benchmarks. Do you think this is enough?
Yes, that would give a rough idea of which model to choose among the many available. I will have a look at the Evals, thanks. Besides performance, the GPU/hardware requirements would also be an interesting benchmark for evaluating a solution: if I propose an LLM-based solution, what would the minimum hardware requirement be for training, fine-tuning, and inference? So far, many models are coming and going, but I haven't found any concrete data, for example, "model X needs at least Y GB of GPU RAM for inference."
@touhi99 Great points you are mentioning, thanks for that! Here are some remarks from my side to keep the discussion going and find a suitable spot to add your requested information:

- Adding eval results right inside the table
- GPU memory requirements

EDIT: This is a little naive, as one also needs to account for gradients, optimizer states, and activations on top of the model weights. Taking the above into account, we can get a very naive estimate for fine-tuning a model like the 7B one above.
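A back-of-the-envelope estimate like the one described above can be sketched in a few lines of Python. The byte counts here are assumptions (fp16 weights, Adam with two fp32 moment buffers, activations ignored), not figures from this thread:

```python
# Naive GPU memory estimates for LLMs (illustrative sketch only).
# Assumptions: fp16 weights (2 bytes/param); fine-tuning with Adam adds
# fp16 gradients (2 bytes) and two fp32 optimizer states (4 + 4 bytes).
# Activation memory is ignored; it depends on batch size and sequence length.

def inference_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM needed just to hold the weights for inference."""
    return n_params * bytes_per_param / 1024**3

def finetune_gib(n_params: float) -> float:
    """Weights (fp16) + gradients (fp16) + Adam moments (2x fp32)."""
    return n_params * (2 + 2 + 4 + 4) / 1024**3

n = 7e9  # a 7B-parameter model
print(f"inference:   ~{inference_gib(n):.0f} GiB")  # ~13 GiB
print(f"fine-tuning: ~{finetune_gib(n):.0f} GiB")   # ~78 GiB
```

Under these assumptions, a 7B model needs roughly 13 GiB just to load in fp16, and several times that for full fine-tuning, which is why quantization and parameter-efficient methods matter in practice.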
For anyone interested in this topic: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
Hi,
is there a possibility to add a performance benchmark for the open-source LLMs?