Adding support for learning rate schedulers #16
gallego-posada started this conversation in Ideas
Replies: 1 comment
- Closed by #18
Context: Many current training pipelines involve tweaking the step size/learning rate throughout training to achieve good performance, for example through learning rate decay or warm-up schedules.
Proposal: Add capabilities to use PyTorch's LR schedulers along with Cooper.
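A minimal sketch of what this could look like on the primal side, assuming the primal optimizer is a plain `torch.optim.Optimizer` created by the user. Only standard PyTorch calls are used below; how Cooper consumes the optimizer and scheduler is deliberately left out, so none of this reflects a final API:

```python
import torch

# Hypothetical model; the scheduler only needs the optimizer, not the model.
model = torch.nn.Linear(10, 1)

# The primal optimizer is fully instantiated by the user up front,
# so a standard PyTorch scheduler can be attached to it directly.
primal_optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
primal_scheduler = torch.optim.lr_scheduler.StepLR(
    primal_optimizer, step_size=30, gamma=0.1
)

# The scheduler would then be stepped alongside whatever update Cooper
# performs on the primal parameters, e.g. once per epoch:
# primal_scheduler.step()
```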
Challenges:
- `primal_optimizer`: this should be straightforward, as it is "fully instantiated" by the user before creating the `ConstrainedOptimizer`.
- `dual_optimizer`: this is trickier, as we don't have access to the full optimizer before the Lagrange multipliers have been initialized (see the sketch after this list).
- Where should the call to `lr_scheduler.step()` happen?
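One possible way around the dual-side issue, sketched under the assumption that the dual optimizer is specified as a "partial" (class plus hyperparameters) and only instantiated once the multipliers exist: the scheduler could be handled the same way and instantiated lazily at that point. The names `dual_optimizer_class` and `dual_scheduler_class` below are hypothetical and not Cooper's API; only the `torch.optim` calls are real.

```python
import functools
import torch

# Dual optimizer and scheduler are specified as "partials": classes and
# hyperparameters are fixed now, but instantiation is deferred until the
# Lagrange multipliers have been created.
dual_optimizer_class = functools.partial(torch.optim.SGD, lr=1e-2)
dual_scheduler_class = functools.partial(
    torch.optim.lr_scheduler.ExponentialLR, gamma=0.99
)

# ... later, once the multipliers exist (conceptually, inside Cooper):
multipliers = [torch.zeros(5, requires_grad=True)]
dual_optimizer = dual_optimizer_class(multipliers)
dual_scheduler = dual_scheduler_class(dual_optimizer)

# As for where lr_scheduler.step() should happen, two options come to mind:
# the user calls it manually in the training loop (e.g. once per epoch), or
# the ConstrainedOptimizer triggers it internally as part of its own step().
for epoch in range(3):
    # ... primal/dual updates for the epoch would go here ...
    dual_scheduler.step()
```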