Different usage of eps between "A quick look at the algorithm" and the code #32

Closed
tatsuhiko-inoue opened this issue Dec 9, 2020 · 10 comments

@tatsuhiko-inoue

Hi

I have a question.

In "A quick look at the algorithm" in README.md, eps is added to shat_t.
But in the code, eps is added to s_t(exp_avg_var) instead of shat_t.

Also only the code for pytorch, if amsgrad is True, eps is added to max_exp_avg_var instead of exp_arg_var(s_t) or shat_t.

Which behavior is correct?

pytorch code
tensorflow code
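
To make the difference concrete, here is a minimal scalar sketch of the two placements as I understand them (my own simplification for illustration, not the library code; the momentum update and weight decay are omitted):

# My own simplification of the two eps placements, for illustration only.
def step_readme(s_prev, g, m, beta2, eps, t):
    # README "A quick look at the algorithm": eps is added to shat_t only.
    s = beta2 * s_prev + (1 - beta2) * (g - m) ** 2
    shat = s / (1 - beta2 ** t) + eps
    return s, shat   # s, the state kept for the next step, stays eps-free

def step_code(s_prev, g, m, beta2, eps, t):
    # PyTorch code: eps is added to s_t (exp_avg_var) itself.
    s = beta2 * s_prev + (1 - beta2) * (g - m) ** 2 + eps
    shat = s / (1 - beta2 ** t)
    return s, shat   # eps is carried into s_{t-1} of the next step

print(step_readme(0.0, 1.0, 1.0, 0.999, 1e-8, 1))  # the stored state stays 0.0
print(step_code(0.0, 1.0, 1.0, 0.999, 1e-8, 1))    # the stored state already contains eps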

@juntang-zhuang
Owner

juntang-zhuang commented Dec 9, 2020

Hi, thanks a lot for the comment. Please refer to the PyTorch code whenever there's a difference, because it's what I used for the experiments.

In the Algorithm part, when eps is added to s_t, it eventually passes into \hat{s_t}, which is used in the denominator of the update. I would take the code to be correct for now. (Also note that in the algorithm, eps is added directly to s_t before bias correction, which matches the code. It might be worth thinking about whether it is more precise to add eps before or after bias correction.)
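
As a rough numeric illustration of that last point (my own example, not the library code): adding eps to s_t before the division by (1 - beta2^t) is equivalent to adding eps / (1 - beta2^t) to \hat{s_t}, which is much larger than eps itself in the early steps.

# Rough illustration of the "before vs after bias correction" point above.
beta2, eps = 0.999, 1e-8
for t in (1, 10, 1000):
    bias_correction2 = 1 - beta2 ** t
    # eps added before bias correction shows up in shat_t scaled by 1/bias_correction2
    print(t, eps / bias_correction2)
# prints roughly 1e-05 at t=1, 1e-06 at t=10, and 1.6e-08 at t=1000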

Similarly for max_exp_avg_var: eps is added to max_exp_avg_var because it is the denominator, though adding it to exp_avg_var (s_t) would also eventually pass into the denominator (I have not tested that yet). Also note that in our experiments we found amsgrad never helps for AdaBelief, so I never carefully tested amsgrad in the following experiments.

PS: the implementation in TensorFlow is different from PyTorch, even for Adam. That's why they use different default eps values.

@tatsuhiko-inoue
Author

Thank you for your reply.

My point is about whether eps is carried into s_{t-1} of the next update, not about bias correction.

In "A quick look at the algorithm", eps is added to shat_t instead of s_t, so eps is not carried into s_{t-1} of the next update.
In contrast, in the PyTorch code, eps is added to exp_avg_var with the Tensor#add_ method (the in-place version).
Therefore, eps is carried into exp_avg_var of the next update.
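
The difference is just the in-place Tensor#add_ versus the out-of-place Tensor#add; a minimal illustration (not the optimizer code itself):

import torch

s = torch.zeros(1)   # stands in for exp_avg_var
eps = 1e-8
# out-of-place: the stored state is unchanged, eps only affects this step's value
denom = s + eps
print(s)             # still tensor([0.])
# in-place: eps is written back into the state and carried into the next step
s.add_(eps)
print(s)             # now contains eps (approximately 1e-08)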

Here is a script that sets eps to inf and checks whether eps has been added to exp_avg_*:

import sys
import torch
import math
from torch.optim import Adam
from adabelief_pytorch import AdaBelief

def test(opt_class, state_name, amsgrad=False):
    # A single scalar parameter with a constant gradient of 1.0.
    param = torch.nn.Parameter(torch.ones([]))
    # With eps=inf, any state that eps is added to becomes inf.
    optim = opt_class([param], lr=1, betas=(0.0, 0.999), eps=math.inf, amsgrad=amsgrad)

    print(f"with {opt_class.__name__}, amsgrad = {amsgrad}", file=sys.stderr)
    for i in range(3):
        param.grad = torch.ones([])
        optim.step()

        state = optim.state[param]
        msg = f"grad = {param.grad}, {state_name} = {state[state_name]}"
        if amsgrad:
            msg += f", max_{state_name} = {state['max_'+state_name]}"
        print(msg, file=sys.stderr)

test(Adam,      'exp_avg_sq',  amsgrad=False)
test(AdaBelief, 'exp_avg_var', amsgrad=False)

test(Adam,      'exp_avg_sq',  amsgrad=True)
test(AdaBelief, 'exp_avg_var', amsgrad=True)

The stderr output is shown below.

with Adam, amsgrad = False
grad = 1.0, exp_avg_sq = 0.0010000000474974513
grad = 1.0, exp_avg_sq = 0.0019990000873804092
grad = 1.0, exp_avg_sq = 0.0029970011673867702
with AdaBelief, amsgrad = False
grad = 1.0, exp_avg_var = inf
grad = 1.0, exp_avg_var = inf
grad = 1.0, exp_avg_var = inf
with Adam, amsgrad = True
grad = 1.0, exp_avg_sq = 0.0010000000474974513, max_exp_avg_sq = 0.0010000000474974513
grad = 1.0, exp_avg_sq = 0.0019990000873804092, max_exp_avg_sq = 0.0019990000873804092
grad = 1.0, exp_avg_sq = 0.0029970011673867702, max_exp_avg_sq = 0.0029970011673867702
with AdaBelief, amsgrad = True
grad = 1.0, exp_avg_var = 0.0, max_exp_avg_var = inf
grad = 1.0, exp_avg_var = 0.0, max_exp_avg_var = inf
grad = 1.0, exp_avg_var = 0.0, max_exp_avg_var = inf

The following states have eps added:

  • exp_avg_var in AdaBelief(amsgrad=False)
  • max_exp_avg_var in AdaBelief(amsgrad=True)

No eps has been added to the following states:

  • exp_avg_sq, max_exp_avg_sq in Adam
  • exp_avg_var in AdaBelief(amsgrad=True)

I used the following versions.

% pip freeze | grep torch==
adabelief-pytorch==0.1.0
torch==1.7.0

PS: I didn't know that the Adam implementation in TensorFlow is different from the one in PyTorch. Thank you.

@juntang-zhuang
Owner

juntang-zhuang commented Dec 10, 2020

Thanks for the clarification. Please take the code with the in-place operation as the correct one, since I have not tested the non-in-place version. This does not affect the theoretical analysis, though, since in theory we only assume the denominator is lower bounded, but I'm not sure what the difference is in practice.

@tatsuhiko-inoue
Author

Thank you for your reply.

I got it. I will use the in-place operation version.

However, I think that amsgrad in the in-place version is broken.
eps is added to max_exp_avg_var on every step in which max_exp_avg_var is not updated by the current gradient,
so max_exp_avg_var keeps increasing.
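
A rough sketch of the behaviour I mean (my own simplification, not the actual 0.1.0 code):

# My simplified model of the amsgrad path: eps is added in place to
# max_exp_avg_var on every step, even when the maximum itself does not change.
max_exp_avg_var, exp_avg_var, eps = 0.0, 0.0, 1e-8
for step in range(1, 4):
    max_exp_avg_var = max(max_exp_avg_var, exp_avg_var) + eps
    print(step, max_exp_avg_var)
# prints approximately 1e-08, 2e-08, 3e-08: the denominator state grows without bound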

@juntang-zhuang
Owner

Thanks for the suggestion. This could be the reason why amsgrad never helps in my experiments; I'll try to fix it and see if the new version of amsgrad helps.

@juntang-zhuang
Owner

@tatsuhiko-inoue We have updated the amsgrad version in adabelief-pytorch==0.2.0 and adabelief-tf==0.2.0. eps is now added to exp_avg_var with an in-place operation, and max_exp_avg_var is then taken as the element-wise maximum with exp_avg_var. It behaves similarly to the AdaBelief version without amsgrad. Hopefully this helps resolve the issue.
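
Roughly, the new ordering looks like this (a sketch of the description above, not the exact 0.2.0 source):

import torch

exp_avg_var = torch.zeros(3)       # s_t state
max_exp_avg_var = torch.zeros(3)
eps = 1e-8
# ... after the usual exponential moving average update of exp_avg_var ...
exp_avg_var.add_(eps)                                         # eps goes into exp_avg_var in place
torch.max(max_exp_avg_var, exp_avg_var, out=max_exp_avg_var)  # element-wise running maximum
denom = max_exp_avg_var.sqrt()                                # used as the denominator
# max_exp_avg_var no longer has eps added to it on every step, so it just tracks
# exp_avg_var and behaves like the non-amsgrad variant.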

@tatsuhiko-inoue
Author

I confirmed this fix. Thank you!

Will you change "A quick look at the algorithm" in README.md to match the PyTorch code?

@juntang-zhuang
Owner

Yeah, we will fix the readme and the paper.

@juntang-zhuang
Owner

Fixed.

@tatsuhiko-inoue
Author

I confirmed the updated README.md. Thank you!
