
discuss the solutions to Not fully recovering spikes #32

Closed

pengzhangzhi opened this issue Mar 6, 2024 · 7 comments

@pengzhangzhi commented Mar 6, 2024

Loss spikes that don't fully recover are quite common when training models at scale. Here is one from my own run:

[image: training loss curve with a spike that does not fully recover]

It's worth discussing the causes and potential solutions. My general guesses are:

  • toxic data points at certain iterations
  • the learning rate being too high

Everyone is welcome to share their experience dealing with such spikes.

Reference:

https://github.com/stas00/ml-engineering/blob/master/training/instabilities/training-loss-patterns.md

@stas00 (Owner) commented Mar 6, 2024

Am I understanding correctly that you'd like to keep this issue open for some time and invite a community discussion?


Usually in this kind of situation we would roll back to a checkpoint right before the spike (some 10-100 steps back) and then either try new data (if bad data was the trigger) or lower the lr. Observe the grad norm while you're at it, since most likely your gradients are spiking as well.

Publicly available LLM/VLM training logbooks are a good source for learning how others dealt with such problems.
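
As an illustration only (not from this thread), here is a minimal PyTorch-style sketch of logging the gradient norm at every step so a spike can be traced back to the exact iteration; `clip_grad_norm_` conveniently returns the pre-clipping total norm. The function and the assumption of an HF-style model that returns `.loss` are hypothetical.

```python
import torch

def train_step(model, batch, optimizer, max_grad_norm=1.0):
    # Hypothetical step assuming a HF-style model whose forward() returns an object with .loss
    loss = model(**batch).loss
    loss.backward()
    # clip_grad_norm_ returns the total grad norm *before* clipping -- log it to spot spikes
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item(), grad_norm.item()
```

If the logged grad norm jumps at the same steps where the loss spikes, that supports the recipe above: roll back to a checkpoint before the spike, then try different data or a lower lr.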

@pengzhangzhi (Author) commented Mar 6, 2024

Yeah! Or we could close it once we have systematic solutions / debugging guidance for this problem. I feel like the issue section is a great place for people to talk about their experiences training models at scale.

Thanks for the guidance! Will share my experiences later :)

@pengzhangzhi (Author)

My solutions are:

  1. increase the batch size
  2. reduce the lr

However, I think the problem is somewhere else. I am sampling training data by sequence length, i.e., a batch of sequences with similar lengths is sampled together. So the loss spikes when training switches to batches of longer sequences:

[image: training loss curve showing the spikes]
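
A minimal sketch (not part of the original comment) of one way to soften that transition when batching by length: bucket the indices by length so each batch stays homogeneous, but shuffle the order of the batches so long-sequence batches are spread across the epoch instead of arriving as one contiguous block. All names here are illustrative.

```python
import random

def length_bucketed_batches(lengths, batch_size, seed=0):
    """Group sample indices into batches of similar length, then shuffle
    the batch order so long-sequence batches don't all arrive at once."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    random.Random(seed).shuffle(batches)  # within-batch length homogeneity is preserved
    return batches

# Example: 1000 sequences of random length, batches of 32
batches = length_bucketed_batches([random.randint(10, 2048) for _ in range(1000)], 32)
```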

@pengzhangzhi (Author) commented Mar 7, 2024

Hi @stas00, a random question: what is your principle for tuning the learning rate? The larger the better? My intuition is to use the largest lr possible so that the learning phase converges faster.

@stas00 (Owner) commented Mar 8, 2024

@pengzhangzhi, it is probably better if we use the GitHub Discussions feature for what you intend and leave Issues for what they are.

I have just enabled it here: https://github.com/stas00/ml-engineering/discussions - so you can start a discussion there, move your comments so far, and then we can let other people know that they can join and share their insights.

What do you think?

I have never used that feature, so hopefully it's easy to use - please let me know if you run into any issues.

@pengzhangzhi (Author)

Thanks!! @stas00

@stas00 (Owner) commented Mar 8, 2024

But you're starting a discussion, yes? We can discuss your questions there and invite others to contribute. That'd be much more productive than just you and me talking.
