
Added optional initial_lr parameter to LearningRateScheduleCallback and updated docs #1933

Merged: 4 commits, May 4, 2020

Conversation

tgaddair (Collaborator) commented on May 3, 2020

Fixes #1921.

Adds deprecation warnings that state the versions in which the deprecated usage will be removed.

Fixes docs for SyncBatchNorm.

Signed-off-by: Travis Addair taddair@uber.com
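For context, a minimal usage sketch of the new argument (not taken from the PR; the checkpoint path, data shapes, and scaling factor are illustrative assumptions, while the callback names follow Horovod's Keras API):

```python
import numpy as np
import horovod.tensorflow.keras as hvd

hvd.init()

# Base learning rate scaled by the number of workers (usual Horovod convention).
base_lr = 0.001 * hvd.size()

# Hypothetical checkpoint from an earlier run: the restored optimizer state may
# carry a learning rate that no longer matches base_lr, which is the behavior
# described in issue #1921.
model = hvd.load_model('checkpoint.h5')

# Placeholder data; shapes are assumptions for the sketch.
x_train = np.random.rand(256, 32).astype('float32')
y_train = np.random.randint(0, 10, size=(256,))

callbacks = [
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    # With the new argument, both callbacks compute their schedule from the
    # explicit initial_lr instead of whatever lr the checkpoint restored.
    hvd.callbacks.LearningRateWarmupCallback(initial_lr=base_lr, warmup_epochs=5, verbose=1),
    hvd.callbacks.LearningRateScheduleCallback(initial_lr=base_lr, multiplier=1e-1, start_epoch=30),
]

model.fit(x_train, y_train, epochs=60, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```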

…nd updated docs

Signed-off-by: Travis Addair <taddair@uber.com>
@tgaddair requested review from romerojosh and alsrgv on May 3, 2020 at 17:22
Signed-off-by: Travis Addair <taddair@uber.com>
@@ -147,10 +147,11 @@
hvd.callbacks.LearningRateWarmupCallback(warmup_epochs=args.warmup_epochs, verbose=verbose),

A reviewer (Member) commented on this line:

Pass initial_lr to LearningRateWarmupCallback as well?

tgaddair (Collaborator, Author) replied:

Good catch! Fixed.
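For reference, a sketch of what the corrected example line could look like after this exchange (args.warmup_epochs, verbose, and scaled_lr stand in for values defined elsewhere in the example script; they are assumptions here, not code from the diff):

```python
import argparse
import horovod.tensorflow.keras as hvd

hvd.init()

# Stand-ins for variables defined elsewhere in the example script.
args = argparse.Namespace(warmup_epochs=5)
verbose = 1 if hvd.rank() == 0 else 0
scaled_lr = 0.001 * hvd.size()

callbacks = [
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    # After the fix, the warmup callback also receives the explicit base rate:
    hvd.callbacks.LearningRateWarmupCallback(initial_lr=scaled_lr,
                                             warmup_epochs=args.warmup_epochs,
                                             verbose=verbose),
]
```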

Signed-off-by: Travis Addair <taddair@uber.com>
Signed-off-by: Travis Addair <taddair@uber.com>
@@ -99,9 +99,14 @@ def __init__(self, multiplier, start_epoch=0, end_epoch=None, staircase=True,
steps_per_epoch: The callback will attempt to autodetect number of batches per
epoch with Keras >= 2.0.0. Provide this value if you have an older
version of Keras.
initial_lr: Initial learning rate at the start of training.
A reviewer (Collaborator) commented on this line:

In the doc above for this callback, could we explicitly mention that when using a constant multiplier, it applies to initial_lr and not to the current learning rate? (This differs from the TF Keras 2.2 callback, which takes the current lr.)

The same reviewer added:

Same for the TF Keras file.

And then:

Never mind, I missed the class description at:

LearningRateScheduleCallback sets learning rate between epochs `start_epoch` and
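To make the semantics under discussion concrete, a small illustrative sketch (assumed behavior as described in the class docstring quoted above, not code from the PR): with a constant multiplier the Horovod callback anchors to initial_lr, whereas tf.keras's LearningRateScheduler passes the optimizer's current lr to its schedule function, so an equivalent schedule there can depend on it.

```python
# Illustrative arithmetic only; the values are made up.
initial_lr = 0.4      # base rate passed to LearningRateScheduleCallback
current_lr = 0.0375   # whatever lr happens to be restored from a checkpoint
multiplier = 1e-2     # constant multiplier applied from start_epoch onward

# Horovod-style behavior per the docstring above: anchored to initial_lr,
# so the result does not depend on current_lr.
horovod_lr = initial_lr * multiplier        # 0.004

# tf.keras.callbacks.LearningRateScheduler passes (epoch, lr) to the schedule
# function, so a schedule written against the current lr behaves differently.
keras_schedule = lambda epoch, lr: lr * multiplier
keras_lr = keras_schedule(30, current_lr)   # 0.000375

print(horovod_lr, keras_lr)
```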

nvcastet (Collaborator) left a review comment:

It looks good!


Labels: none yet

Development

Successfully merging this pull request may close these issues:

Keras LR callbacks have unintended behavior when resuming from checkpoint (#1921)

3 participants