Detailed Training setting #26

Closed

xiuzbl opened this issue Jul 11, 2023 · 1 comment

Comments

xiuzbl commented Jul 11, 2023

Hi, could you provide the detailed hyperparameters you used when training llama-13b? For example, how many and what kind of GPUs did you use, and what were the gradient accumulation steps and batch size per GPU? Moreover, when I directly use your DeepSpeed config to deepspeed-initialize a llama-7b on an 80 GB A100, the server reports a CUDA OOM error.

Looking forward to your reply.

Thank you so much!

xiuzbl closed this as completed on Jul 13, 2023
fahadh4ilyas commented

Are you still getting OOM when fine-tuning? I kept getting it because of the size of the optimizer states.
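
For context on why even a 7B model can OOM a single 80 GB A100: with standard mixed-precision Adam, each parameter costs roughly 16 bytes (2 for the fp16 weights, 2 for the fp16 gradients, and 12 for the fp32 master copy plus the two Adam moments), so 7B parameters need on the order of 112 GB before activations. Below is a minimal sketch of one common workaround, assuming ZeRO stage 3 with CPU offload; this is not the repository's actual config, and the model path, batch size, and learning rate are placeholder assumptions to tune.

```python
# A minimal sketch, not the repository's actual training script or config.
# ZeRO stage 3 partitions parameters, gradients, and optimizer states across
# ranks, and the offload settings move optimizer states and parameters to
# CPU memory, trading speed for GPU memory headroom.
import deepspeed
from transformers import AutoModelForCausalLM

ds_config = {
    "train_micro_batch_size_per_gpu": 1,   # assumed; raise if memory allows
    "gradient_accumulation_steps": 16,     # assumed; keeps the effective batch size up
    "fp16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},  # assumed learning rate
    "zero_optimization": {
        "stage": 3,  # partition params, grads, and optimizer states
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
}

model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # hypothetical checkpoint path

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler)
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```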
