Warm restart policy is available now #6130
Conversation
AutuanLiu
commented
Mar 30, 2018
- Please tell me if I violated any rules.
Can you also add tests for this in
@pytorchbot test this please
ok
@pytorchbot test this please
@pytorchbot test this please
@pytorchbot test this please
torch/optim/lr_scheduler.py
(1 + math.cos(math.pi * self.last_epoch / self.T_max)) / 2
if self.restart and self.last_epoch == self.T_max:
    self.last_epoch = 0
    self.T_max *= self.T_mult
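To make the proposed behaviour concrete, here is a small self-contained sketch (not code from this PR; base LR 0.05, eta_min=0, T_max=10 and T_mult=2 are assumed purely for illustration) that replays the same update rule: the learning rate anneals toward eta_min over one cycle, then restarts at the base value while the next cycle becomes T_mult times longer.

```python
import math

# Illustrative simulation of the restart rule above (assumed values only).
base_lr, eta_min, T_max, T_mult = 0.05, 0.0, 10, 2
last_epoch = 0
lrs = []
for _ in range(70):
    lr = eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * last_epoch / T_max)) / 2
    lrs.append(lr)
    last_epoch += 1
    if last_epoch == T_max:      # cycle finished: restart and stretch the next cycle
        last_epoch = 0
        T_max *= T_mult

print(lrs[:3])   # starts at the base LR and decays ...
print(min(lrs))  # ... down toward eta_min before each restart
```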
* Actually, the restart argument is redundant, because T_max will equal the number of training epochs when the warm restart policy is not used.
* If we want to apply the warm restart policy, we need to set T_max to be less than the number of training epochs.
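As a usage sketch of the point above (this follows the constructor signature proposed in this PR; `T_mult` and `restart` are not part of the released `torch.optim.lr_scheduler.CosineAnnealingLR`): training for 100 epochs with a smaller `T_max` and `restart=True` gives warm restarts, while `T_max=100` with `restart=False` reduces to plain cosine annealing.

```python
import torch
from torch import nn, optim
# Note: T_mult/restart are the arguments proposed in this PR, not the stock API.
from torch.optim.lr_scheduler import CosineAnnealingLR

model = nn.Linear(10, 2)
opt = optim.SGD(model.parameters(), lr=0.05)

epochs = 100
# Warm restarts: cycles of 10, 20, 40, ... epochs (T_mult=2).
scheduler = CosineAnnealingLR(opt, T_max=10, eta_min=0, T_mult=2, restart=True)
# Plain annealing would instead be CosineAnnealingLR(opt, T_max=epochs).

for epoch in range(epochs):
    # ... one epoch of training would go here ...
    opt.step()          # placeholder optimizer step
    scheduler.step()    # advance the schedule once per epoch
```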
torch/optim/lr_scheduler.py
Args:
    optimizer (Optimizer): Wrapped optimizer.
    T_max (int): Maximum number of iterations.
    eta_min (float): Minimum learning rate. Default: 0.
    T_mult (int): Multiplicative factor of T_max. Default: 2.
    restart (bool): If True, the warm restart policy will be used.
test/test_optim.py
single_targets = [eta_min + (0.05 - eta_min) * (1 + math.cos(math.pi * x / y)) / 2
                  for x, y in zip(T_cur, T_i)]
targets = [single_targets, list(map(lambda x: x * epochs, single_targets))]
scheduler = CosineAnnealingLR(self.opt, T_max=T_max, eta_min=eta_min, T_mult=T_mult, restart=True)
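For context on the expected values above, here is one hypothetical way (not the PR's actual test code; `epochs=70`, `T_max=10`, `T_mult=2` are assumptions) to build the per-epoch cycle position `T_cur` and cycle length `T_i` that the list comprehension consumes:

```python
# Hypothetical helper illustrating how T_cur / T_i could be constructed.
def cosine_restart_schedule(epochs, T_max, T_mult):
    T_cur, T_i = [], []
    t, length = 0, T_max
    for _ in range(epochs):
        T_cur.append(t)       # position within the current cycle
        T_i.append(length)    # length of the current cycle
        t += 1
        if t == length:       # cycle finished: restart, grow the next cycle
            t = 0
            length *= T_mult
    return T_cur, T_i

T_cur, T_i = cosine_restart_schedule(epochs=70, T_max=10, T_mult=2)
```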
@pytorchbot add to whitelist
@ssnl Do you think this is OK to merge now?
@ezyang it's not. See our discussion above, which hasn't concluded yet.
    self.cycle += 1
else:
    self.cycle = int(math.floor(math.log(epoch / self.T_max * (self.T_mult - 1) + 1, self.T_mult)))
epoch -= sum([self.T_max * self.T_mult ** x for x in range(self.cycle)])
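For readers following the arithmetic: the first n cycles together span T_max * (T_mult**n - 1) / (T_mult - 1) epochs (a geometric series), so the floor-of-log expression above is just the inverse of that sum, and the final subtraction leaves the offset of `epoch` inside its current cycle. A small sketch computing the same two quantities by direct scanning (illustrative only; T_max=10 and T_mult=2 are assumed):

```python
# Direct-scan equivalent of the closed-form cycle bookkeeping above.
T_max, T_mult = 10, 2

def locate(epoch):
    cycle, cycle_len, start = 0, T_max, 0
    while start + cycle_len <= epoch:
        start += cycle_len           # skip a completed cycle
        cycle_len *= T_mult
        cycle += 1
    return cycle, epoch - start      # (cycle index, offset within that cycle)

print(locate(9))    # (0, 9)  last epoch of the 10-epoch first cycle
print(locate(10))   # (1, 0)  second cycle (length 20) begins
print(locate(35))   # (2, 5)  third cycle (length 40), 5 epochs in
```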
@AutuanLiu Let us know if you have time to address @apaszke's and @ssnl's comments, thanks!
@yf225 I'm so sorry, I don't have time to address and figure out these comments.
Sorry for not providing the actual working code, but our implementation of restarts for TensorFlow might be useful as a reference:
Looking forward to seeing this function!
I gave it a shot at porting the TF one, but I am not sure if it is correct since I just started using PyTorch. Could you please give me your two cents?
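Since the actual port is not shown above, here is one possible minimal sketch of an SGDR-style scheduler built on `_LRScheduler` (the class name `CosineWithRestarts` and its bookkeeping are my own assumptions for illustration, not the commenter's port or the TensorFlow implementation). It is used like any other scheduler: call `scheduler.step()` once per epoch after `optimizer.step()`.

```python
import math
from torch.optim.lr_scheduler import _LRScheduler

class CosineWithRestarts(_LRScheduler):
    """Sketch: cosine annealing from base_lr to eta_min over T_max epochs,
    with each new cycle T_mult times longer than the previous one."""

    def __init__(self, optimizer, T_max, eta_min=0.0, T_mult=2, last_epoch=-1):
        self.T_max = T_max
        self.eta_min = eta_min
        self.T_mult = T_mult
        self._cycle_len = T_max      # length of the current cycle
        self._cycle_start = 0        # epoch at which the current cycle began
        super(CosineWithRestarts, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        t = self.last_epoch - self._cycle_start
        if t >= self._cycle_len:             # current cycle finished: restart
            self._cycle_start += self._cycle_len
            self._cycle_len *= self.T_mult
            t = self.last_epoch - self._cycle_start
        return [self.eta_min + (base_lr - self.eta_min) *
                (1 + math.cos(math.pi * t / self._cycle_len)) / 2
                for base_lr in self.base_lrs]
```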
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Hi @AutuanLiu! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!