
RM training loss becomes NaN after the first training step. #288

Open
lixsh6 opened this issue May 11, 2024 · 1 comment

Comments


lixsh6 commented May 11, 2024

I used a large model (> 170B parameters) as my reward model. At the very beginning the loss is normal, but after one training step it becomes NaN. This did not happen when I trained the RM from a smaller base model (e.g., 30B). Do you have any suggestions?
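For debugging, a common first step is to guard the training loop so a non-finite loss is logged and the update skipped instead of silently corrupting the weights. Below is a minimal stdlib sketch of that guard; `guarded_step` and the toy loss values are illustrative, not part of any repo — in a real PyTorch loop the check would be `torch.isfinite(loss).all()` before `optimizer.step()`.

```python
import math

def guarded_step(step, loss, apply_update):
    """Skip the optimizer update when the loss is NaN or Inf.

    Hypothetical helper for illustration: `apply_update` stands in for
    `optimizer.step()` in an actual training loop.
    """
    if not math.isfinite(loss):
        print(f"step {step}: non-finite loss {loss}, skipping update")
        return False
    apply_update()
    return True

# Toy losses resembling a run that diverges after the first step.
losses = [0.693, float("nan"), float("nan")]
applied = [guarded_step(i, l, lambda: None) for i, l in enumerate(losses)]
print(applied)  # → [True, False, False]
```

Logging the first step at which the loss (or a gradient norm) goes non-finite helps distinguish an fp16 overflow from a bad batch or learning-rate issue.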

hijkzzz (Collaborator) commented May 11, 2024

We don't have access to such a big model; it may be related to DeepSpeed?
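If DeepSpeed is the culprit, fp16 loss-scale overflow is a frequent cause of NaNs at very large scale. A hedged config sketch (not a confirmed fix for this issue): switching mixed precision to bf16, which has the same dynamic range as fp32 and needs no loss scaling, plus gradient clipping. Both keys are standard DeepSpeed config options; the surrounding values are placeholders.

```json
{
  "bf16": { "enabled": true },
  "gradient_clipping": 1.0
}
```

If the hardware requires fp16, raising `initial_scale_power` or inspecting the overflow messages DeepSpeed prints when the loss scaler backs off would be the next thing to check.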
