
Lightning wrapped optimizers do not allow return values #5235

Closed
thoglu opened this issue Dec 22, 2020 · 7 comments · Fixed by #11711
Labels: feature (Is an improvement or enhancement), optimizer

Comments

@thoglu

thoglu commented Dec 22, 2020

This is both a bug report and a feature request:

Any optimizer's step function may return a value. However, no return values are currently handled by the optimizer.LightningOptimizer class in its step function, so optimizers whose step returns a value do not operate properly.

Bug fix: add a return value at the respective line, something like:

return_val = None
if make_optimizer_step:
    return_val = self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
# ...

return return_val

The __optimizer_step function must also pass the return value through.
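
A minimal sketch of the idea (illustrative only, not the actual Lightning internals; WrappedOptimizer is a hypothetical name): a wrapper whose step simply forwards whatever the wrapped optimizer's step returns.

import torch

class WrappedOptimizer:
    def __init__(self, optimizer: torch.optim.Optimizer):
        self._optimizer = optimizer

    def step(self, closure=None, **kwargs):
        # Forward the underlying optimizer's return value (e.g. the closure's
        # loss for torch.optim.Adam) instead of dropping it.
        if closure is None:
            return self._optimizer.step(**kwargs)
        return self._optimizer.step(closure=closure, **kwargs)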

cc @Borda @rohitgr7

@thoglu thoglu added bug Something isn't working help wanted Open to be worked on labels Dec 22, 2020
@github-actions
Contributor

Hi! Thanks for your contribution, great first issue!

@awaelchli awaelchli added priority: 1 Medium priority task good first issue Good for newcomers labels Dec 23, 2020
@awaelchli awaelchli self-assigned this Dec 23, 2020
@tchaton
Contributor

tchaton commented Dec 23, 2020

Dear @thoglu,

Thanks for reporting this bug :)
We will address it soon.

Best regards,
T.C

@ddrevicky
Contributor

ddrevicky commented Feb 13, 2021

Hi, I wanted to take a look at this but have a couple of questions.

  1. What would be the typical use case for returning a value from the step function? The basic torch optimizers return the result of computing the closure (if one is passed), e.g. Adam; see the sketch after this list.

    • But in PL, for example in NativeMixedPrecisionPlugin, the closure is called before optimizer.step and is not passed to step as an argument (I think the reason is that torch.cuda.amp does not support passing a closure to step). If one of the basic torch optimizers were used, it would probably return None in this case.

  2. Code-wise it might now be a little harder in PL to determine which value should be returned. Accelerator::optimizer_step has pre- and post-step hooks for precision plugins. In the case of NativeMixedPrecisionPlugin, optimizer.step is actually called in post_optimizer_step, while Accelerator::run_optimizer_step is not executed.
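
A quick way to see the behavior described in point 1 (a small sketch using stock PyTorch, outside of Lightning): the built-in optimizers return whatever the closure returns when one is passed, and None otherwise.

import torch

param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=0.1)

def closure():
    optimizer.zero_grad()
    loss = (param - 1.0).pow(2).sum()
    loss.backward()
    return loss

with_closure = optimizer.step(closure)   # returns the loss computed by the closure
without_closure = optimizer.step()       # returns None
print(with_closure, without_closure)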

@thoglu
Author

thoglu commented Feb 13, 2021

The use case (in my case, but I think more broadly) is to allow returning optimizer-specific logging values for debugging purposes. In Adam, for example, one could return the first and second moments and watch how they evolve over time, since these are the central quantities of the optimizer. The logging can then happen in the main training-loop function of PL.
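
A hedged sketch of that use case (DiagnosticAdam and the returned keys are illustrative, not part of Lightning or PyTorch): a custom step that returns diagnostic values, which the training loop could log once LightningOptimizer propagates them.

import torch

class DiagnosticAdam(torch.optim.Adam):
    def step(self, closure=None):
        loss = super().step(closure)
        # Example diagnostic: mean absolute first moment (exp_avg) across parameters.
        moments = [
            self.state[p]["exp_avg"].abs().mean()
            for group in self.param_groups
            for p in group["params"]
            if p in self.state and "exp_avg" in self.state[p]
        ]
        first_moment = torch.stack(moments).mean() if moments else None
        return {"loss": loss, "mean_abs_exp_avg": first_moment}

If LightningOptimizer.step passed this dictionary through, the training loop (or a manual-optimization training_step) could log it, for example with self.log.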

@stale

stale bot commented Mar 19, 2021

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!

@stale stale bot added the won't fix This will not be worked on label Mar 19, 2021
@awaelchli awaelchli removed the won't fix This will not be worked on label Mar 20, 2021
@stale

stale bot commented Apr 23, 2021

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!

@stale stale bot added the won't fix This will not be worked on label Apr 23, 2021
@awaelchli awaelchli added this to the v1.4 milestone Apr 23, 2021
@stale stale bot removed the won't fix This will not be worked on label Apr 23, 2021
@edenlightning edenlightning removed this from the v1.4 milestone Jul 1, 2021
@stale

stale bot commented Jul 31, 2021

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!

@stale stale bot added the won't fix This will not be worked on label Jul 31, 2021
@awaelchli awaelchli added this to the v1.5 milestone Aug 1, 2021
@stale stale bot removed the won't fix This will not be worked on label Aug 1, 2021
@awaelchli awaelchli modified the milestones: v1.5, 1.5.x Nov 4, 2021
@carmocca carmocca added feature Is an improvement or enhancement and removed bug Something isn't working labels Feb 3, 2022
@carmocca carmocca modified the milestones: 1.5.x, 1.6 Feb 3, 2022
@carmocca carmocca added optimizer and removed priority: 1 Medium priority task labels Feb 3, 2022
@carmocca carmocca removed help wanted Open to be worked on good first issue Good for newcomers labels Feb 3, 2022