Discuss the solutions to not fully recovering spikes #32
Comments
Am I understanding correctly that you'd like to keep this issue open for some time and invite a community discussion? Usually in this kind of situation we would roll back to a checkpoint right before the spike (but some 10-100 steps back) and then either try new data (if bad data was the trigger) or lower the lr. And observe the grad norm while you're at it, since most likely your grads are spiking. Publicly available LLM/VLM training logbooks are a good source for learning how others dealt with such problems.
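A minimal sketch of that recipe in PyTorch, just to make it concrete: watch the global grad norm, and when it spikes, roll back to the last checkpoint and lower the lr. The `save_checkpoint` / `load_checkpoint` / `get_batch` helpers are hypothetical placeholders (not from this thread), and the spike threshold and EMA constants are arbitrary choices:

```python
import torch

def grad_global_norm(model: torch.nn.Module) -> float:
    """Global L2 norm over all parameter gradients."""
    norms = [p.grad.norm(2) for p in model.parameters() if p.grad is not None]
    if not norms:
        return 0.0
    return torch.norm(torch.stack(norms), 2).item()

def train_with_rollback(model, optimizer, get_batch, loss_fn,
                        total_steps=10_000, ckpt_every=100,
                        spike_factor=3.0, lr_shrink=0.5):
    """Roll back to a pre-spike checkpoint and lower the lr when the
    grad norm jumps well above its running average (a sketch, not a
    definitive implementation)."""
    running_norm = None
    last_ckpt_step = 0
    step = 0
    while step < total_steps:
        optimizer.zero_grad()
        x, y = get_batch(step)  # hypothetical data source
        loss = loss_fn(model(x), y)
        loss.backward()
        gnorm = grad_global_norm(model)

        # Detect a spike: grad norm far above its exponential moving average.
        if running_norm is not None and gnorm > spike_factor * running_norm:
            # Roll back to a checkpoint taken some steps before the spike...
            load_checkpoint(model, optimizer, last_ckpt_step)  # hypothetical helper
            step = last_ckpt_step
            # ...and lower the lr (alternatively, one could advance the data
            # sampler to skip the suspect batches instead). On repeated
            # spikes the lr keeps shrinking, so this cannot loop forever
            # in practice, but a retry cap would be prudent.
            for group in optimizer.param_groups:
                group["lr"] *= lr_shrink
            continue

        # Update the EMA only on healthy steps so spikes don't pollute it.
        running_norm = gnorm if running_norm is None else 0.99 * running_norm + 0.01 * gnorm
        optimizer.step()
        step += 1

        if step % ckpt_every == 0:
            save_checkpoint(model, optimizer, step)  # hypothetical helper
            last_ckpt_step = step
```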
yeah! Or close it when we have systematic solutions / debugging guidance for this problem. I feel like the issue section is a great place for people to talk about their experiences dealing with model training at scale. Thanks for the guidance! Will share my experiences later :)
Hi @stas00, a random question: what is your principle for tuning the learning rate? The larger the better? My intuition is to use the largest lr so that the learning phase converges faster.
@pengzhangzhi, it is probably better if we use the GitHub Discussions feature for what you intend and leave Issues for what they are. I have just enabled it here: https://github.com/stas00/ml-engineering/discussions - you can start a discussion there, move your comments over, and then we can let other people know that they can join and share their insights. What do you think? I have never used that feature, so hopefully it's easy to use - please let me know if you run into any issues.
Thanks!! @stas00 |
but you're starting a discussion, yes? We can discuss your questions there then, and invite others to contribute. That'd be much more productive than just you and me talking.
Spikes that don't fully recover are quite common in model training at scale. For example, here is what I'm seeing:
(attached screenshot: training loss curve showing spikes that never fully recover)
It's worth discussing the causes and potential solutions to that.
I have some general guesses about the causes, and I welcome everyone to share your experiences dealing with the spikes...
Reference:
https://github.com/stas00/ml-engineering/blob/master/training/instabilities/training-loss-patterns.md