keras ReduceLROnPlateau, is this a bug? #10924
Look at this mock example:
From https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/

The way I see this working, the common approach I've seen people take is to make the stop callback with patience=0 and a small min_delta.

PS: I don't know if it was on purpose, but you are not monitoring the same value in both callbacks. Hope I helped.
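The stopping rule described above (patience=0 with a small min_delta) can be sketched in plain Python, independent of Keras; the threshold and loss values here are purely illustrative:

```python
def should_stop(history, min_delta=1e-4):
    """Stop as soon as the monitored value fails to improve on the
    previous best by at least min_delta (i.e. patience = 0)."""
    best = float("inf")
    for epoch, value in enumerate(history):
        if best - value < min_delta:  # no meaningful improvement
            return epoch
        best = value
    return None  # never triggered

# Improvement at epoch 2 is only 5e-5, below min_delta, so we stop there.
print(should_stop([0.50, 0.30, 0.29995, 0.10]))  # → 2
```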
In your example, the learning rate is reduced when the loss is NOT improving. In my example, it is reduced even when val_loss is improving.
I've looked into this as well; in my case it seems that after the initial patience period has passed, the learning rate keeps being reduced every patience epochs. I've seen a similar issue that was marked closed sometime around 2016, but the problem seems to still be here. These are some of the outputs of my code (not attaching everything because I'm using a patience of 25, which I know is really high, so it would be a lot of text, but this shows the gist of it):

```
Epoch 00173: ReduceLROnPlateau reducing learning rate to 0.0006249999860301614.
Epoch 00190: val_loss improved from 0.00341 to 0.00341, saving model to ../data/comparison/2018_09_05_1532/Meta_inds_[1]_weights.hdf5
Epoch 00198: ReduceLROnPlateau reducing learning rate to 0.0003124999930150807.
Epoch 00213: val_loss improved from 0.00340 to 0.00340, saving model to ../data/comparison/2018_09_05_1532/Meta_inds_[1]_weights.hdf5
Epoch 00223: ReduceLROnPlateau reducing learning rate to 0.00015624999650754035.
```

Notice that ReduceLROnPlateau reduces the learning rate every 25 epochs (the patience parameter), regardless of improvement in loss.
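One plausible explanation for a log like this: "improvements" such as 0.00341 → 0.00340 are smaller than ReduceLROnPlateau's min_delta (documented default 1e-4), so they still update ModelCheckpoint's best value (which uses a plain comparison) but never reset the plateau counter, and the LR drops every patience epochs. A simplified plain-Python model of that bookkeeping (not Keras's actual code; no cooldown handling):

```python
def simulate_reduce_lr(val_losses, lr=0.01, factor=0.5, patience=3,
                       min_delta=1e-4):
    """Simplified sketch of ReduceLROnPlateau bookkeeping: an epoch only
    resets the wait counter when it beats the previous best by more
    than min_delta."""
    best = float("inf")
    wait = 0
    reduced_at = []
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:   # significant improvement
            best = loss
            wait = 0
        else:                         # "improvements" below min_delta
            wait += 1                 # still count toward the plateau
            if wait >= patience:
                lr *= factor
                reduced_at.append(epoch)
                wait = 0
    return reduced_at

# val_loss creeps down by 1e-5 per epoch: always "improving", but never
# by more than min_delta, so the LR is halved every `patience` epochs.
losses = [0.00350 - 1e-5 * i for i in range(10)]
print(simulate_reduce_lr(losses, patience=3))  # → [3, 6, 9]
```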
Any progress on this?
Try setting mode='min' explicitly.
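For context, a plain-Python sketch of how 'auto' mode is commonly described to pick the comparison direction from the metric's name (modeled on Keras's documented behavior, not its actual source). For val_loss, 'auto' should already behave like 'min', so setting mode='min' explicitly mainly rules out the name-based guess:

```python
def resolve_mode(mode, monitor):
    """'auto' maximizes accuracy-like metrics (name contains 'acc')
    and minimizes everything else, e.g. losses."""
    if mode in ("min", "max"):
        return mode
    return "max" if "acc" in monitor else "min"

print(resolve_mode("auto", "val_loss"))  # → min
print(resolve_mode("auto", "val_acc"))   # → max
```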
Having a similar issue again. Probably the numbers of epochs without improvement are summed together, even if they are not consecutive.
I am training a Keras Sequential model, and I want the learning rate to be reduced when training is not progressing.
I use the ReduceLROnPlateau callback.
After the first 2 epochs without progress, the learning rate is reduced as expected. But then it is reduced every 2 epochs, causing training to stop progressing.
Is that a Keras bug, or am I using the function the wrong way?
The code:
The output:
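The original code and output were not captured in this thread. A minimal sketch of the setup described above, assuming a toy regression model (the data, model shape, and hyperparameters other than patience=2 are illustrative):

```python
import numpy as np
from tensorflow import keras

# Toy stand-in data; the real model and data are not shown in the issue.
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Halve the LR after 2 epochs without significant val_loss improvement.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=2, min_lr=1e-6, verbose=1,
)

model.fit(x, y, validation_split=0.2, epochs=20, callbacks=[reduce_lr])
```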