
Global_step workaround #1

Open
saschafrey opened this issue Apr 9, 2024 · 0 comments

Comments

@saschafrey
The trick of creating a `global_step` property that is manually incremented does not increase the trainer's `global_step`, which is what the `ModelCheckpoint` callback reads (Lightning-AI/pytorch-lightning#17167). A simple workaround is to step a dummy optimiser by calling `self.optimizers().step()` inside the `training_step` method.

`self.optimizers()` returns a wrapped version of the optimiser(s) defined in `configure_optimizers()`. Since `configure_optimizers()` returns `None` in the examples given (the optimisation is done manually with a JAX optimiser), stepping it performs no parameter update and does nothing except increment the step counter.
