Description
When using auto_lr_find: True in the TrainerConfig, the discovered 'suggested' learning rate is not set or updated in the Trainer.
BaseModel.train calls self._prepare_for_training() at L649, which in turn calls self._prepare_trainer, where the Trainer object is configured. This sets the learning_rate for all optimizers to TrainerConfig.learning_rate before the auto_lr_find block at L655.
The PL Tuner's lr_find method by default updates the learning rate hparam, but since the Trainer and the optimizer have already been configured, this change to hparams doesn't get reflected during training.
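For context, here is a minimal, self-contained sketch of what Tuner.lr_find does with its result (assuming PyTorch Lightning 2.x; ToyModule and the random data are placeholders, not pytorch_tabular code): it writes the suggestion back into model.hparams.learning_rate and nothing else. As described above, in pytorch_tabular the optimizer settings have already been fixed by the time this runs, so the hparam update alone doesn't reach training.

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.tuner import Tuner
from torch.utils.data import DataLoader, TensorDataset


class ToyModule(pl.LightningModule):
    def __init__(self, learning_rate=1e-3):
        super().__init__()
        self.save_hyperparameters()          # exposes hparams.learning_rate
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)


# 100 batches of toy data so the default num_training=100 steps can run
loader = DataLoader(TensorDataset(torch.randn(3200, 8), torch.randn(3200, 1)), batch_size=32)

model = ToyModule()
trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)

lr_finder = Tuner(trainer).lr_find(model, train_dataloaders=loader)  # update_attr=True by default

print(lr_finder.suggestion())           # suggested LR (may be None if no clear minimum)
print(model.hparams.learning_rate)      # hparam rewritten in place by lr_find
# Only the hparam is changed; any optimizer whose settings were fixed before
# this point still carries the original learning rate.
```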
So: is this intended behavior or a bug? If it is not intended, it seems to me that self._prepare_for_training() should be called again after auto_lr_find completes, somewhere around L676.
I am going to assume it's a bug and submit a PR as described above. If the intention is NOT to update the learning rate using the suggested LR, then I propose that the documentation be updated to clarify this and that update_attr=False be passed to Tuner.lr_find to suppress the log line saying that the learning rate has been updated.
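Concretely, and reusing the toy objects from the sketch above, the call I have in mind would look like the following (update_attr is the parameter name in the PL 2.x Tuner.lr_find signature; I have not checked older Trainer.tuner versions):

```python
# Keep the result available, but stop lr_find from rewriting hparams and from
# logging "Learning rate set to ...". Reuses trainer/model/loader from above.
lr_finder = Tuner(trainer).lr_find(model, train_dataloaders=loader, update_attr=False)
suggested_lr = lr_finder.suggestion()   # still available for reporting; may be None
print(suggested_lr)
```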
Edit:
I attempted a PR, but just calling _prepare_for_training() or _prepare_trainer() is not enough, since the optimizers are configured in TabularModel.prepare_model() and their parameters are not available to the train method as currently written. I'm not sure of the best approach to fixing this, if it is a bug, but I would be happy to submit a PR if @manujosephv or someone else has ideas about how best to update the optimizers to use the suggested LR.
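For what it's worth, in plain PyTorch terms re-applying a suggested LR to an optimizer that already exists is only a param_groups update; the open question is how TabularModel.train would get a handle on that optimizer. The optimizer variable below is a hypothetical handle, not an attribute I know pytorch_tabular exposes:

```python
# Sketch only: `optimizer` is a hypothetical handle on the already-built
# optimizer; obtaining that handle from TabularModel.train is the unsolved part.
suggested_lr = lr_finder.suggestion()
if suggested_lr is not None:
    model.hparams.learning_rate = suggested_lr    # keep hparams consistent
    for param_group in optimizer.param_groups:    # standard torch.optim API
        param_group["lr"] = suggested_lr
```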
Cheers