
How long does a single LLM's tuning take? #262

Open
alphahumancoder opened this issue Apr 8, 2024 · 3 comments

Comments

@alphahumancoder

No description provided.

@alphahumancoder
Author

I mean the 7B and 13B models.

@wuxibin89
Collaborator

It depends on your dataset and GPUs; we have some benchmarks for reference:
https://github.com/OpenLLMAI/OpenRLHF/tree/wuxibin/benchmark/benchmark
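For a rough sense of scale, here is a back-of-envelope sketch (not from this repo) using the common ~6 · params · tokens training-FLOPs approximation. The `estimate_hours` helper is hypothetical, and the GPU throughput, MFU, and dataset sizes are illustrative assumptions; PPO-style RLHF will take longer than plain SFT because it also runs generation and several models.

```python
# Back-of-envelope SFT time estimate using the common ~6 * params * tokens
# FLOPs approximation. All numbers below are illustrative assumptions, not
# measurements from the OpenRLHF benchmarks linked above.

def estimate_hours(params_b: float, tokens_b: float,
                   num_gpus: int, gpu_tflops: float, mfu: float = 0.4) -> float:
    """Rough wall-clock hours for one pass over the data.

    params_b   -- model size in billions of parameters (e.g. 7 or 13)
    tokens_b   -- dataset size in billions of tokens
    gpu_tflops -- peak BF16 throughput per GPU (e.g. ~312 for an A100)
    mfu        -- assumed model FLOPs utilization (0.3-0.5 is typical)
    """
    total_flops = 6 * params_b * 1e9 * tokens_b * 1e9
    cluster_flops_per_s = num_gpus * gpu_tflops * 1e12 * mfu
    return total_flops / cluster_flops_per_s / 3600

# Example: a 7B model over 1B tokens on 8x A100s at 40% MFU -> ~12 hours.
print(f"{estimate_hours(7, 1, 8, 312):.1f} h")
```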

@hijkzzz
Collaborator

hijkzzz commented Apr 18, 2024

It depends on your dataset and GPUs; we have some benchmarks for reference: https://github.com/OpenLLMAI/OpenRLHF/tree/wuxibin/benchmark/benchmark

The performance data in this branch is not optimized; we will further tune the configs for the official technical report.
