❓ Questions and Help
What is your question?
I want to set different learning-rate schedulers for different parameter groups within a single optimizer. The motivation is straightforward: for example, adding my own modules on top of a pre-trained network for fine-tuning.
What have you tried?
I have tried the following method:
def configure_optimizers(self):
    opt = torch.optim.SGD(
        [
            # Pre-trained backbone: 1/10 of the base learning rate.
            dict(params=self.net.net.parameters(), lr=0.1 * self.lr),
            # Newly added modules: base learning rate.
            dict(params=[*self.net.pool.parameters(),
                         *self.net.fc.parameters(),
                         *self.center_loss.parameters()]),
        ],
        lr=self.lr, momentum=0.98)
Since self.net.net is a pre-trained backbone, I want the learning rate of this group to be only 1/10 of the others'.
However, with this code the learning rates of the two parameter groups end up identical. For example, both are 1e-5 at the start of training, whereas I want them to be 1e-6 and 1e-5, respectively.
Since "automatic_optimization = False" still has bugs to be fixed, I am confused about what I should do.
When you set base_lr=1e-5 (the second argument) in torch.optim.lr_scheduler.CyclicLR, it overwrites the initial learning rate of all the optimizer's parameter groups, which is why they end up the same. Instead you should set base_lr=[1e-6, 1e-5], with one entry per parameter group.
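To illustrate the suggestion, here is a minimal sketch with two placeholder modules standing in for the backbone and the new layers (the module names and values are assumptions, not from this thread):

import torch

# Placeholder modules for illustration only.
backbone = torch.nn.Linear(10, 10)   # stands in for the pre-trained part
new_head = torch.nn.Linear(10, 2)    # stands in for the newly added layers

opt = torch.optim.SGD(
    [
        dict(params=backbone.parameters()),   # group 0
        dict(params=new_head.parameters()),   # group 1
    ],
    lr=1e-5, momentum=0.98)

# A scalar base_lr would overwrite both groups with the same initial lr;
# a list assigns a separate base_lr (and max_lr) to each parameter group.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=[1e-6, 1e-5], max_lr=[1e-5, 1e-4],
    step_size_up=1000, cycle_momentum=False)

Right after construction the two groups should report learning rates of 1e-6 and 1e-5 (their base values), and each then cycles toward its own max_lr.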
Thank you very much for your help.
I changed the code so that the program can run without errors.
However, the learning rate does not seem to be adjusted.
def configure_optimizers(self):
    opt = torch.optim.Adam([
        dict(params=self.net.net.parameters()),
        dict(params=[*self.net.pool.parameters(),
                     *self.net.fc.parameters(),
                     *self.center_loss.parameters()]),
    ])
    sdl = torch.optim.lr_scheduler.CyclicLR(
        opt, base_lr=[5e-5, 5e-5], max_lr=[5e-4, 5e-4],
        step_size_up=1000, step_size_down=500,
        mode='exp_range', gamma=0.5, cycle_momentum=False)
    return [opt], [dict(scheduler=sdl, interval='step')]
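One way to see what the scheduler is actually assigning is to step it a few times outside the Trainer and print the per-group learning rates. A minimal sketch with placeholder modules (not part of the original code) follows:

import torch

# Placeholder parameter groups for illustration only.
backbone = torch.nn.Linear(10, 10)
head = torch.nn.Linear(10, 2)

opt = torch.optim.Adam([dict(params=backbone.parameters()),
                        dict(params=head.parameters())])
sdl = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=[5e-5, 5e-5], max_lr=[5e-4, 5e-4],
    step_size_up=1000, step_size_down=500,
    mode='exp_range', gamma=0.5, cycle_momentum=False)

for step in range(5):
    opt.step()   # optimizer step first, then the scheduler
    sdl.step()
    print(step, [group['lr'] for group in opt.param_groups])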