Hypergradient Descent #6
Comments
I did the combination and can share it upon request.
I would be interested :-)
@uyekt This code combines techniques from three papers (including the one from this repository): https://arxiv.org/abs/1806.06763. In my opinion it would be great if the partial parameter could also be updated on the fly, similarly to what has been done with the learning rate here, since previous research also suggests switching from Adam to SGD during training to improve generalization: https://arxiv.org/abs/1712.07628
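For reference, here is a minimal sketch of the partially adaptive (Padam) update from the paper linked above, to illustrate what the partial parameter p controls: p = 1/2 gives an Adam/AMSGrad-style step, while p = 0 reduces to SGD with momentum. The helper name, lazy state handling, and the omission of bias correction are simplifications of mine, and adapting p during training (as suggested above) is only an idea, not something the paper implements.

```python
# Illustrative sketch of the Padam update (https://arxiv.org/abs/1806.06763).
# Bias correction is omitted and state handling is simplified; this is not
# the paper's reference implementation.
import torch

@torch.no_grad()
def padam_step(param, grad, state, lr=0.1, betas=(0.9, 0.999), p=0.125, eps=1e-8):
    if not state:  # lazily create the moment buffers on the first call
        state['m'] = torch.zeros_like(param)
        state['v'] = torch.zeros_like(param)
        state['v_max'] = torch.zeros_like(param)
    m, v, v_max = state['m'], state['v'], state['v_max']
    m.mul_(betas[0]).add_(grad, alpha=1 - betas[0])            # first moment estimate
    v.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])  # second moment estimate
    torch.maximum(v_max, v, out=v_max)                         # AMSGrad-style running max
    denom = v_max.add(eps).pow(p)                              # v_max ** p: p=0.5 ~ Adam, p=0 ~ SGD
    param.addcdiv_(m, denom, value=-lr)                        # theta <- theta - lr * m / denom
```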
@akaniklaus @uyekt I've done some experiments with hypergradient descent, and for me it behaved pretty much like a simple linear learning-rate decay with a small hypergrad_lr value. With large hypergrad_lr values it just added redundant stochasticity to the training process. Was your experience different, and if so, what problem/data/batch size/network architecture did you try it with?
@mpyrozhok Hello, it is quite normal that it behaves similarly to a decayed learning-rate schedule. I believe the main benefit is that you do not have to re-optimize the max and min learning rates of such a decaying schedule each time you change something, e.g. batch size, architecture, etc. This matters especially when you are hyperparameter tuning, since one would otherwise need to re-optimize both the initial and minimum learning rate of a scheduler for each configuration, so updating it online saves a lot of resources.

Furthermore, I generally first make a few trial runs to decide on an initial learning rate (I start with a low learning rate, let it increase, and use the peak value at which it starts to drop again as the initial learning rate of my actual run).

As for hypergrad_lr, the paper suggests 1e-5 and 1e-4. I found that the default value in the code (1e-8) is too low and causes the learning rate to adapt too slowly, whereas a higher value sometimes causes divergence away from the minimum (and occasionally even a negative learning rate!).

@mpyrozhok Do you have any idea how we could change the code to use the gradient of the validation loss? Maybe warm restarts could also be implemented; what would be the best way to do that automatically?
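For context, the learning-rate update behind hypergrad_lr is just a scalar rule; the minimal sketch below (assuming the SGD variant of the method, with an illustrative function name) shows why a large hypergrad_lr can produce a negative learning rate: the rule adds hypergrad_lr times the dot product of consecutive gradients, which is negative whenever successive gradients point in opposing directions.

```python
# Minimal sketch of the learning-rate rule from hypergradient descent
# (the method behind https://github.com/gbaydin/hypergradient-descent).
# `grads` and `prev_grads` are the parameter gradients from the current
# and previous steps.
import torch

def hypergradient_lr_update(lr, hypergrad_lr, grads, prev_grads):
    # alpha_t = alpha_{t-1} + hypergrad_lr * (g_t . g_{t-1})
    h = sum(torch.sum(g * pg).item() for g, pg in zip(grads, prev_grads))
    return lr + hypergrad_lr * h  # can drop below zero when h < 0 and hypergrad_lr is large
```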
Thank you for sharing this. Would it be possible to also integrate the Hypergradient Descent technique into your AdamW implementation? It reduces the need to hand-tune the initial learning rate. https://github.com/gbaydin/hypergradient-descent
I have also read a lot of criticism of AMSGrad and have not yet been able to get any improvement with that variant. Could you share your thoughts on that? FYI, two other techniques I am currently experimenting with are Padam and QHAdam.
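To make the request concrete, here is a rough sketch, not this repository's code and not an existing API, of how the hypergradient learning-rate rule might be wired into an AdamW-style step: the dot product of the current gradients with the previous Adam update direction adjusts the learning rate, and the decoupled weight decay then uses the freshly adapted rate. The function name, state layout, and default values are placeholders.

```python
# Rough sketch only: AdamW-style step with an online (hypergradient)
# learning-rate update. Names and state layout are illustrative.
import torch

@torch.no_grad()
def adamw_hd_step(params, state, lr=1e-3, hypergrad_lr=1e-4, betas=(0.9, 0.999),
                  eps=1e-8, weight_decay=1e-2):
    state.setdefault('lr', lr)                 # `lr` only seeds the very first step
    state['step'] = state.get('step', 0) + 1
    t = state['step']

    # 1) Hypergradient: dot product of the current gradients with the previous
    #    Adam update direction, used to adjust the learning rate online.
    h = 0.0
    for i, p in enumerate(params):
        if p.grad is not None and ('dir', i) in state:
            h += torch.sum(p.grad * state[('dir', i)]).item()
    state['lr'] += hypergrad_lr * h
    cur_lr = state['lr']

    # 2) AdamW step (decoupled weight decay) using the freshly updated rate.
    for i, p in enumerate(params):
        if p.grad is None:
            continue
        m = state.setdefault(('m', i), torch.zeros_like(p))
        v = state.setdefault(('v', i), torch.zeros_like(p))
        m.mul_(betas[0]).add_(p.grad, alpha=1 - betas[0])
        v.mul_(betas[1]).addcmul_(p.grad, p.grad, value=1 - betas[1])
        m_hat = m / (1 - betas[0] ** t)
        v_hat = v / (1 - betas[1] ** t)
        direction = m_hat / (v_hat.sqrt() + eps)
        state[('dir', i)] = direction          # remembered for the next hypergradient
        p.mul_(1 - cur_lr * weight_decay)      # decoupled weight decay (AdamW)
        p.add_(direction, alpha=-cur_lr)
```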