
0.1.0 changes for ranger_adabelief #19

Closed
bratao opened this issue Oct 26, 2020 · 6 comments
@bratao commented Oct 26, 2020

Hi @juntang-zhuang , super excited to try the new improvements.
I saw that you did not update the ranger version. Do you plan to add the improvements there too?

@juntang-zhuang (Owner) commented Oct 26, 2020

Hi @bratao, thanks for asking. I currently don't have a plan for big changes to the ranger version, only fixes for some small errors. I don't have much experience with the ranger version; also, decoupled weight decay and rectification are turned on by default in ranger (these two defaults are modified in adabelief-pytorch), except that eps is set to 1e-5 in ranger. Do you have a feeling for what a good default value of eps in ranger-adabelief would be? Or any other ideas for potential improvements?

@dvolgyes

Hi,

I would recommend setting the default value of 'weight decoupling' to True.
Adam originally had coupled weight decay, but after this paper,
https://arxiv.org/abs/1711.05101, PyTorch, for example, introduced AdamW.
There is no supporting evidence for any case where coupled weight
decay outperforms the decoupled one, so even the PyTorch default
changed to decoupled. Actually, they did not just change the default;
they completely removed the option for the old one, making it backward incompatible.
(So yes, Adam and AdamW are now almost the same in PyTorch,
but in Adam the default decay is 0.)

If you know any counterexamples where coupling helps, please share a link;
otherwise I strongly recommend the default be set to True.
(And obviously, for comparison, the same weight decay scheme should be used.)
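
To make the distinction concrete, here is a minimal sketch of the two schemes as plain SGD-style updates (the helper names are illustrative, not from any library):

import torch

def coupled_step(p: torch.Tensor, lr: float, wd: float) -> None:
    # Coupled (classic L2 penalty): the decay term is folded into the
    # gradient, so in adaptive methods it also passes through the
    # moment estimates and per-parameter rescaling.
    grad = p.grad + wd * p.detach()
    p.data -= lr * grad

def decoupled_step(p: torch.Tensor, lr: float, wd: float) -> None:
    # Decoupled (AdamW, arXiv:1711.05101): shrink the weights directly,
    # independently of the gradient-based part of the update.
    p.data.mul_(1 - lr * wd)
    p.data -= lr * p.grad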

@juntang-zhuang (Owner)

@dvolgyes Thanks for the feedback; in fact, for all the latest implementations the default for weight_decouple is True. BTW, since you mentioned that decoupled decay is enabled in PyTorch's Adam, I checked the source code, and it seems Adam has not changed much; am I missing something? Could you point to where decoupled weight decay is enabled in PyTorch's Adam? Thanks a lot.
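
For reference, a minimal usage sketch of how the flags are exposed (the argument names follow the adabelief-pytorch README; please check them against the version you have installed):

import torch
from adabelief_pytorch import AdaBelief

model = torch.nn.Linear(10, 2)

# weight_decouple=True gives AdamW-style decoupled decay; rectify=True
# enables the RAdam-style rectification mentioned above.
optimizer = AdaBelief(
    model.parameters(),
    lr=1e-3,
    eps=1e-8,
    weight_decay=1e-4,
    weight_decouple=True,
    rectify=True,
)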

@dvolgyes

Hi,

It depends on how you look at it. Compare the documentation of Adam between versions 1.5.1 and 1.6:
https://pytorch.org/docs/1.5.1/optim.html?highlight=adam#torch.optim.Adam
https://pytorch.org/docs/1.6.0/optim.html?highlight=adam#torch.optim.Adam

The old one was:
"Implements Adam algorithm.
It has been proposed in Adam: A Method for Stochastic Optimization."

The new one:
"Implements Adam algorithm.
It has been proposed in Adam: A Method for Stochastic Optimization. The implementation of the L2 penalty follows changes proposed in Decoupled Weight Decay Regularization."

I did not go into the details of checking how they implemented it, but I should have.
It turns out the documentation is quite likely wrong: it suggests they updated the
implementation, but they didn't, and they still use the old version without weight decoupling.
pytorch/pytorch#42843

I still hold that I haven't seen any paper where weight decoupling shows worse performance than the non-decoupled
version, but I take back my PyTorch suggestion; I was misled by the documentation.

So my current view:

  • Adam uses coupled weight decay
  • the Adam documentation is wrong
  • weight decoupling seems to be beneficial across the board; I haven't seen counter-evidence

@juntang-zhuang (Owner) commented Nov 26, 2020

@dvolgyes Thanks a lot. It does seem weird: the "source code" page for Adam in PyTorch 1.6 has

if group['weight_decay'] != 0:
    grad = grad.add(p, alpha=group['weight_decay'])

Then it's not decoupled weight decay: the decay is added into the gradient itself. Quite weird; I guess the documentation page is wrong. But thanks for the suggestion, the default weight_decouple is turned on.
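
For contrast, a decoupled (AdamW-style) step would shrink the parameter directly rather than folding the decay into grad; a sketch of what that branch would look like (not the actual PyTorch source):

if group['weight_decay'] != 0:
    # decay applied to the weights themselves, outside the moment updates
    p.data.mul_(1 - group['lr'] * group['weight_decay'])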

@dvolgyes

"Code never lies, comments sometimes do." -- Ron Jeffries
:)
Even worse, the bug was discovered before 1.7, but it may be that even 1.7.1 will be released without the fix.
Anyway, your code was always correct, but I am glad that I could convince you to change the default.
