print fix in lr_scheduler #68338
Conversation
cc @albanD should we fix CosineAnnealingWarmRestarts to use an integer epoch instead?

pytorch/torch/optim/lr_scheduler.py, line 1300 in 94b6fa6:

    self.last_epoch = math.floor(epoch)
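For context, a minimal standalone sketch (not the PyTorch source itself) of what that line does to a fractional epoch:

```python
import math

# CosineAnnealingWarmRestarts stores math.floor(epoch) as last_epoch,
# so a fractional epoch such as 2.75 is truncated to 2 when saved.
epoch = 2.75
last_epoch = math.floor(epoch)
print(last_epoch)  # → 2
```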
@mrshenli Float is used in the compute logic, so it wouldn't make sense to print it the same way. Maybe override
> Float is used in compute logic

From checking the code, I'm not sure where the value of epoch actually needs to be a float. It seems that we only ever increment it by 1, no?
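To illustrate the two ways the epoch can advance, here is a simplified, hypothetical sketch (TinyScheduler is not the real lr_scheduler code): with no argument the epoch is incremented by 1 and stays an integer, but a caller may pass an explicit float epoch.

```python
# Hypothetical, simplified sketch of a scheduler's step() paths.
class TinyScheduler:
    def __init__(self):
        self.last_epoch = -1

    def step(self, epoch=None):
        if epoch is None:
            epoch = self.last_epoch + 1  # integer increment path
        self.last_epoch = epoch          # float if the caller passed one

s = TinyScheduler()
s.step()
print(s.last_epoch)   # → 0
s.step(0.25)
print(s.last_epoch)   # → 0.25
```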
torch/optim/lr_scheduler.py (Outdated)
@@ -110,7 +110,7 @@ def print_lr(self, is_verbose, group, lr, epoch=None):
             print('Adjusting learning rate'
                   ' of group {} to {:.4e}.'.format(group, lr))
         else:
-            print('Epoch {:5d}: adjusting learning rate'
+            print('Epoch {:5f}: adjusting learning rate'
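The failure the diff fixes can be reproduced in plain Python (standalone sketch of the two format specs):

```python
# '{:5d}' requires an integer, so a float epoch raises ValueError.
try:
    'Epoch {:5d}: adjusting learning rate'.format(1.5)
except ValueError as e:
    print('old format fails:', e)

# '{:5f}' accepts the float.
print('Epoch {:5f}: adjusting learning rate'.format(1.5))
# → Epoch 1.500000: adjusting learning rate
```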
This would lead to many digits after the decimal point in all cases. The print would look a bit weird and unexpected, I think.
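The concern is easy to see directly; note that the {:g} spec below is only a hypothetical alternative for comparison, not something the PR uses:

```python
# With '{:5f}', even an integer epoch prints six decimal places.
print('Epoch {:5f}'.format(3))    # → Epoch 3.000000
# A '{:g}' spec (hypothetical alternative) drops trailing zeros.
print('Epoch {:g}'.format(3.0))   # → Epoch 3
print('Epoch {:g}'.format(2.75))  # → Epoch 2.75
```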
Cosine LR can restart every epoch, so intermediate values require scheduler.step(epoch + i / iters). Updated the code.
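As a standalone sketch of why fractional epochs arise: the PyTorch docs show stepping CosineAnnealingWarmRestarts per batch with epoch + i / iters. The annealing formula below is the one documented for that scheduler; the helper name and the eta values are illustrative.

```python
import math

def cosine_annealed_lr(eta_min, eta_max, t_cur, t_i):
    # eta_min + (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i)) / 2,
    # the formula CosineAnnealingWarmRestarts documents.
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i)) / 2

iters = 4
for epoch in range(2):
    for i in range(iters):
        # Per-batch stepping uses a float epoch: 0.0, 0.25, 0.5, ...
        t_cur = (epoch + i / iters) % 1.0  # position within a T_0 = 1 cycle
        lr = cosine_annealed_lr(0.0, 0.1, t_cur, 1.0)
```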
torch/optim/lr_scheduler.py (Outdated)
@@ -929,7 +931,7 @@ def _reduce_lr(self, epoch):
             if old_lr - new_lr > self.eps:
                 param_group['lr'] = new_lr
                 if self.verbose:
-                    print('Epoch {:5d}: reducing learning rate'
+                    print('Epoch {:5f}: reducing learning rate'
Thanks for the fix! This one needs to be updated the same way, right?
Done
Looks good! Can you rebase on top of master to make sure the CI runs, please?
I'm not familiar with quick rebasing; I can reopen the PR if needed.
Ok, let me do it then.
btw this is what I did:
It's what I had in mind, but the repos I've tried it on gave me a lot of trouble; I guess PyTorch is much better structured. Thanks!
@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
No worries. I'll land this now. |
To be clear, are contributions still credited? |
{:5d} fails for CosineAnnealingWarmRestarts, which has a float epoch.