Box constraints for optimizers #22281
Comments
Superset of #6564
Thanks for this superset of the box-constraint feature request on LBFGS (i.e. LBFGS-B), @vincentqb. The contributed #938 was closed for being stale and also not the correct solution. However, this feature would be very useful for some inverse image reconstruction problems (used in privacy analysis of DCNNs, for example). Should one attempt to resurrect the old #938 and address your concerns (correct implementation, alignment with the current state of the code, etc.) in a follow-up PR specific to LBFGS, or are you looking for contributions only toward the broader challenge of box constraints across all optimizers?
If there are no other volunteers, I would love to work on this after I am done with functions of a matrix. I used box-constrained Newton in my research, based on the method by Bertsekas, which I had to implement from scratch, so it should not take much time...
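For readers unfamiliar with the approach mentioned above, here is a much-simplified 1-D sketch of the projected-Newton idea: take an unconstrained Newton step, then project the iterate back onto the box. The objective and all names are illustrative only; Bertsekas's actual method additionally handles active-set identification and proper scaling, which this toy omits.

```python
def projected_newton_1d(grad, hess, x0, lo, hi, iters=20):
    """Minimize a smooth 1-D function over [lo, hi] by projected Newton steps.

    This is a pedagogical sketch, not Bertsekas's full algorithm.
    """
    x = x0
    for _ in range(iters):
        x = x - grad(x) / hess(x)   # unconstrained Newton step
        x = min(max(x, lo), hi)     # projection onto the box [lo, hi]
    return x

# f(x) = (x - 2)^2 has its unconstrained minimum at x = 2, outside [-1, 1];
# the projected iterates settle on the boundary, x = 1.
x_star = projected_newton_1d(lambda x: 2 * (x - 2), lambda x: 2.0,
                             x0=0.0, lo=-1.0, hi=1.0)
```

On a quadratic the Newton step lands exactly at the unconstrained minimizer, so the projection alone decides the constrained solution; for general objectives the step/project loop iterates until a fixed point on or inside the box.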
Has there been any progress on adding bounds to LBFGS (i.e. LBFGS-B) in PyTorch? It would be very useful.
🚀 Feature
Offer the option to set box constraints for optimizers.
Motivation
In some applications of reinforcement learning, parameters falling outside a given region correspond to inadmissible actions.
Alternatives
One can omit the constraints and hope that the optimal parameters found happen to lie within the bounds, but without any guarantees.
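Another common workaround (not an existing optimizer option) is to run the unconstrained update and then project the parameters back into the box after every step. A minimal pure-Python sketch with plain gradient descent, where the quadratic objective and the names `lo`/`hi` are illustrative; with `torch` the projection would typically be `p.clamp_(lo, hi)` under `torch.no_grad()` right after `optimizer.step()`:

```python
def projected_gd(grad, x0, lo, hi, lr=0.1, iters=200):
    """Projected gradient descent on [lo, hi]: step, then clamp."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)        # unconstrained gradient step
        x = min(max(x, lo), hi)     # projection onto [lo, hi]
    return x

# minimize (x - 2)^2 subject to x in [-1, 1]; the constrained optimum is 1
x_star = projected_gd(lambda x: 2 * (x - 2), x0=0.0, lo=-1.0, hi=1.0)
```

Unlike LBFGS-B, this simple projection does not exploit curvature information about the active constraints, but it does guarantee that every iterate is feasible.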
Additional context
This was suggested in #938 for torch.optim.lbfgs.
CC @bamos