Add support for ReduceLROnPlateau #98
Comments
I think we can directly adjust the lr in the optimizer, as Keras does, which means we don't need ReduceLROnPlateau as a metric. Specifically, perhaps we can consider adding a hook in …
@Ir1d good point. You can also adjust the LR in the current callback (optimizer_step), or in any of the other callbacks.
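For concreteness, a minimal sketch of adjusting the LR by hand from a hook. Only writing to `param_group["lr"]` is real PyTorch API here; the helper name, the decay factor, and where you call it are illustrative assumptions:

```python
import torch

# Hypothetical helper that any hook (e.g. optimizer_step) could call.
# Writing to param_group["lr"] is standard PyTorch; when to call this
# and by how much to decay are left to the user.
def reduce_lr(optimizer: torch.optim.Optimizer, factor: float = 0.1) -> None:
    for param_group in optimizer.param_groups:
        param_group["lr"] *= factor
```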
Closing, because this should be implicitly supported since we can pass a ReduceLROnPlateau object... nothing we need to do; this is standard PyTorch functionality.
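A sketch of what passing a ReduceLROnPlateau object from `configure_optimizers` looks like. Note the dict form with a `"monitor"` key is the API adopted in later Lightning releases, not something that existed when this comment was written, and the logged metric name `"val_loss"` is an assumption:

```python
import torch
import pytorch_lightning as pl
from torch.optim.lr_scheduler import ReduceLROnPlateau

class LitModel(pl.LightningModule):
    # training_step / validation_step that log "val_loss" are assumed.
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=10)
        # The dict form with a "monitor" key tells (later versions of)
        # Lightning which logged metric to feed to scheduler.step().
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
        }
```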
Please, I can't follow. Why is the issue closed? Do you suggest using some hooks like …
I’m not sure either how to pass the ReduceLROnPlateau object, as it needs the metric argument, as pointed out by @Ir1d. @williamFalcon, would it be possible for you to give an example of how to use this scheduler with PyTorch Lightning?
It feels rather dirty to me, but you can save the loss in your …
Edit: This will not work as intended, since the optimizer step is called after every batch.
We could modify the scheduler step to take in the loss when it needs it? I have to look at this more carefully, though.
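In hindsight, one way a trainer could tell which schedulers need the loss (the question raised in the reply below) is to dispatch on the scheduler's type. A sketch with a hypothetical helper, not actual Trainer code:

```python
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Hypothetical trainer-side helper: pass the monitored metric only to
# schedulers that need it. isinstance dispatch is one simple way to tell.
def step_scheduler(scheduler, monitored_metric=None):
    if isinstance(scheduler, ReduceLROnPlateau):
        scheduler.step(monitored_metric)  # ReduceLROnPlateau needs the metric
    else:
        scheduler.step()
```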
That would be the best solution indeed, but then you'd need a way to figure out which schedulers need the loss when calling step. I'm not sure how to do that at this moment. For now, I solved it as follows, which may help people until there is a better solution:
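(The original snippet did not survive extraction; below is a reconstruction based on the workaround described in the issue text at the bottom of this thread: create the scheduler in `__init__` and step it manually in `validation_end`. The hook name follows the Lightning API of that era, and the `"val_loss"` key is an assumption.)

```python
import torch
import pytorch_lightning as pl
from torch.optim.lr_scheduler import ReduceLROnPlateau

class MyModel(pl.LightningModule):
    # training_step / validation_step returning {"val_loss": ...} are assumed.
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)
        # Keep our own references so we can step the scheduler ourselves.
        self.optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        self.scheduler = ReduceLROnPlateau(self.optimizer, mode="min")

    def configure_optimizers(self):
        return self.optimizer

    def validation_end(self, outputs):
        val_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
        # Step manually, passing the metric that ReduceLROnPlateau requires.
        self.scheduler.step(val_loss)
        return {"val_loss": val_loss}
```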
Also, at this moment lr_scheduler.step() is called at the end of the epoch (https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L958), while some schedulers (e.g. https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.OneCycleLR) should be stepped at the end of each batch.
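In plain PyTorch the distinction looks like this (a toy loop; the model, data, and hyperparameters are placeholders):

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = OneCycleLR(optimizer, max_lr=0.1, total_steps=100)

for batch in range(100):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()  # per-batch step, as the OneCycleLR docs specify
```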
From the PT docs: …
Either way, @kvhooreb I added the …
Would you consider providing (or pointing me to) a simple but complete example of using ReduceLROnPlateau? Thanks, Lars
Yes, a simple example would be great please. |
Is your feature request related to a problem? Please describe.
As of now it does not seem possible to use ReduceLROnPlateau, since a metric has to be passed to the step method of the lr_scheduler.
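In plain PyTorch the requirement looks like this (toy model and a placeholder metric value, for illustration only):

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(10, 1)  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = ReduceLROnPlateau(optimizer, mode="min")

val_loss = 0.42  # placeholder for a metric computed on the validation set
# Unlike most schedulers, ReduceLROnPlateau.step() requires the metric:
scheduler.step(val_loss)
```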
Describe the solution you'd like
A possibility to use ReduceLROnPlateau on some or any of the metrics calculated during training or validation.
Describe alternatives you've considered
In my use case I want to perform the step based on a metric calculated on the validation set. As a workaround, I define the lr_scheduler in the init of the model and perform the step in the validation_end function.