
warmup_lr is computed incorrectly in step_ReduceLROnPlateau #19

Open
LvJC opened this issue Oct 28, 2020 · 8 comments
LvJC commented Oct 28, 2020

I wonder whether you forgot to modify the line shown below in step_ReduceLROnPlateau:

warmup_lr = [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs]

- warmup_lr = [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs]
+ warmup_lr = self.get_lr()

Here are the details:

  1. When I use ReduceLROnPlateau as the after_scheduler of GradualWarmupScheduler, the warm-up fails. I read the learning rate via optim.param_groups[0]['lr']; when I read it via get_lr() instead, the value is correct.
  2. When I use StepLR as the after_scheduler, there is no exception and no error.

Therefore, I think the optimizer's learning rate is not being warmed up correctly.
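For context, the warm-up formula in question can be sketched stand-alone (a minimal, torch-free illustration, not the library's actual code; `FakeOptimizer` and `step_during_warmup` are hypothetical names). The crux of the report is that the warmed-up value must be written back into `optimizer.param_groups`, because that is where ReduceLROnPlateau and the user read the lr from:

```python
def warmup_lrs(base_lrs, multiplier, last_epoch, total_epoch):
    """Linear warm-up: ramp each base lr toward base_lr * multiplier."""
    return [base_lr * ((multiplier - 1.0) * last_epoch / total_epoch + 1.0)
            for base_lr in base_lrs]

class FakeOptimizer:
    """Stand-in for a torch.optim.Optimizer; only param_groups matters here."""
    def __init__(self, lrs):
        self.param_groups = [{'lr': lr} for lr in lrs]

def step_during_warmup(optimizer, base_lrs, multiplier, epoch, total_epoch):
    # Write the warmed-up value into optimizer.param_groups; a scheduler
    # that only computes the value internally leaves the optimizer unchanged.
    for group, lr in zip(optimizer.param_groups,
                         warmup_lrs(base_lrs, multiplier, epoch, total_epoch)):
        group['lr'] = lr

opt = FakeOptimizer([5e-5])
step_during_warmup(opt, [5e-5], multiplier=10.0, epoch=0, total_epoch=1)
# At epoch 0 the lr is still the base lr (5e-5); at epoch == total_epoch
# it reaches base_lr * multiplier (5e-4).
```

Note that with total_epoch=1 the warm-up only departs from the base lr at epoch 1, which is also consistent with the formula quoted above.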

@jonashaag

#18

@BCWang93

Hi, how do you use ReduceLROnPlateau as the after_scheduler? When I use it as the after_scheduler, the learning rate does not change during the warm-up in the first epoch.

@LvJC

LvJC commented Jan 19, 2021

> Hi, how do you use ReduceLROnPlateau as the after_scheduler? When I use it as the after_scheduler, the learning rate does not change during the warm-up in the first epoch.

https://github.com/LvJC/pytorch-gradual-warmup-lr

@BCWang93

> https://github.com/LvJC/pytorch-gradual-warmup-lr

Hi, I replaced this line:

- warmup_lr = [base_lr * ((self.multiplier - 1.) * self.last_epoch / self.total_epoch + 1.) for base_lr in self.base_lrs]
+ warmup_lr = self.get_lr()

I set the warm-up lr to 5e-5 over 1 epoch, and the after_scheduler's initial lr to 5e-4. But when I run it, the first epoch's lr is also 5e-4, so I think there is another error. Thanks!

@LvJC

LvJC commented Jan 19, 2021

> Hi, I replaced this line, but when I run it the first epoch's lr is also 5e-4, so I think there is another error.

You can check it again; I updated it just now. I think the main cause is the get_last_lr function.
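A toy model of the get_lr / get_last_lr distinction that may be at play (purely illustrative; `TinyScheduler` is a hypothetical name, not PyTorch's implementation): get_lr computes a fresh value, while get_last_lr returns a cached copy of whatever the last step actually applied. If a code path advances the epoch without refreshing that cache, get_last_lr keeps reporting a stale value:

```python
class TinyScheduler:
    """Toy model of a PyTorch-style scheduler's get_lr vs get_last_lr."""
    def __init__(self, base_lr, multiplier, total_epoch):
        self.base_lr = base_lr
        self.multiplier = multiplier
        self.total_epoch = total_epoch
        self.last_epoch = 0
        self._last_lr = [base_lr]   # cache of the last value actually applied

    def get_lr(self):
        # Freshly computed warm-up value for the current epoch.
        return [self.base_lr * ((self.multiplier - 1.0)
                                * self.last_epoch / self.total_epoch + 1.0)]

    def step(self, refresh_cache=True):
        self.last_epoch += 1
        if refresh_cache:           # a correct step() keeps the cache in sync
            self._last_lr = self.get_lr()

    def get_last_lr(self):
        return self._last_lr

sched = TinyScheduler(5e-5, multiplier=10.0, total_epoch=1)
sched.step(refresh_cache=False)     # buggy path: cache never refreshed
stale = sched.get_last_lr()[0]      # still 5e-5, while get_lr() says 5e-4
fresh = sched.get_lr()[0]
```

This matches the symptom above: the value the scheduler reports (or writes to the optimizer) lags one path behind the value it computes.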

@BCWang93

> I updated it just now. I think the main cause is the get_last_lr function.

Thanks! I will test it.

@BCWang93

> I updated it just now. I think the main cause is the get_last_lr function.

Hi, I tested this method, but it has the same problem: the first epoch's lr is also 5e-4. However, when I change the after_scheduler to a cosine scheduler, it runs correctly.

@IamSVP94

Hi there! It still does not work with ReduceLROnPlateau.
