The trick of creating a manually incremented `global_step` property does not advance the trainer's `global_step`, which is what the `ModelCheckpoint` callback uses (Lightning-AI/pytorch-lightning#17167). A simple workaround is to step a dummy optimiser via `self.optimizers().step()` inside the `training_step` method.
`self.optimizers()` returns a wrapped version of the optimiser(s) defined in `configure_optimizers()`. Since the examples given effectively configure no real torch optimiser (the actual optimisation is done manually with whatever JAX optimiser), stepping it does nothing except increment the step counter.