feat: add new callback hook on_before_optimizer_setup
#21272
Conversation
Pull Request Overview
This PR introduces a new callback hook `on_before_optimizer_setup` to provide a safe point between `configure_model()` and `configure_optimizers()`. The hook is designed to enable callbacks like `BaseFinetuning` to safely modify model parameters (such as freezing) before optimizers are created, particularly for LightningModules that instantiate submodules within `configure_model()`.
- Adds the new `on_before_optimizer_setup` hook to both the callback and module hook interfaces (sketched below)
- Updates the trainer execution flow to call this hook only during the fitting stage
- Provides comprehensive test coverage for the new hook's timing and behavior
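For reference, a minimal sketch of what the two hook definitions could look like, following Lightning's existing hook conventions (the exact signatures and docstrings in the PR may differ):

```python
import lightning.pytorch as pl


# In src/lightning/pytorch/callbacks/callback.py (sketch, not the exact diff)
class Callback:
    def on_before_optimizer_setup(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None:
        """Called after ``configure_model()`` and before ``configure_optimizers()``."""


# In src/lightning/pytorch/core/hooks.py (sketch, not the exact diff)
class ModelHooks:
    def on_before_optimizer_setup(self) -> None:
        """Called after ``configure_model()`` and before ``configure_optimizers()``."""
```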
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `src/lightning/pytorch/trainer/trainer.py` | Added hook invocation in the trainer's `_run` method during the fitting stage (see the sketch after this table) |
| `src/lightning/pytorch/core/hooks.py` | Defined the `on_before_optimizer_setup` hook for `LightningModule` |
| `src/lightning/pytorch/callbacks/callback.py` | Defined the `on_before_optimizer_setup` hook for the `Callback` interface |
| `tests/tests_pytorch/callbacks/test_callback_hooks.py` | Added comprehensive tests for hook timing and behavior |
| `docs/source-pytorch/common/hooks.rst` | Updated documentation to include the new hook in the execution order |
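The trainer-side wiring is likely along these lines (a hedged sketch only: `_call_callback_hooks` and `_call_lightning_module_hook` are the dispatch helpers Lightning uses for its other hooks, but the exact placement and guards inside `_run` may differ from the actual diff):

```python
# Sketch of the call site inside Trainer._run (assumed placement, not the real method).
from lightning.pytorch.trainer import call
from lightning.pytorch.trainer.states import TrainerFn


def _run_sketch(trainer):
    # ... configure_model() has already run at this point ...
    if trainer.state.fn == TrainerFn.FITTING:
        # Invoke the hook on all callbacks, then on the LightningModule itself.
        call._call_callback_hooks(trainer, "on_before_optimizer_setup")
        call._call_lightning_module_hook(trainer, "on_before_optimizer_setup")
    # ... optimizers are created next via configure_optimizers() ...
```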
LGTM!
What does this PR do?
Partially addresses #19658
This PR introduces a new callback hook `on_before_optimizer_setup` to provide a safe point between `configure_model()` and `configure_optimizers()`. It's primarily designed to fix the incompatibility between `BaseFinetuning` and LightningModules that define submodules inside `configure_model()` (e.g., FSDP, DeepSpeed, etc.).

⚙️ Hook order (new)
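The guaranteed ordering during `trainer.fit()` (only the hooks named in this PR are shown; surrounding hooks are omitted):

1. `configure_model()`
2. `on_before_optimizer_setup()` (new)
3. `configure_optimizers()`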
Example
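The original example did not survive this page capture; the following is a minimal illustrative sketch (the names `FreezeBackbone`, `LitModel`, and the `backbone`/`head` attributes are hypothetical, not from the PR):

```python
import torch
from torch import nn
import lightning.pytorch as pl


class FreezeBackbone(pl.Callback):
    """Illustrative callback: freeze part of the model before optimizers exist."""

    def on_before_optimizer_setup(self, trainer, pl_module):
        # Runs after configure_model(), so lazily created submodules exist,
        # but before configure_optimizers() collects trainable parameters.
        for param in pl_module.backbone.parameters():
            param.requires_grad = False


class LitModel(pl.LightningModule):
    def configure_model(self):
        # Submodules created here (common with FSDP/DeepSpeed) previously had
        # no safe hook point before optimizer creation.
        self.backbone = nn.Linear(32, 32)
        self.head = nn.Linear(32, 2)

    def configure_optimizers(self):
        # Only parameters still requiring grad are handed to the optimizer.
        params = [p for p in self.parameters() if p.requires_grad]
        return torch.optim.Adam(params, lr=1e-3)
```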
Use case
`BaseFinetuning`'s `freeze_before_training()` must execute before optimizers are created. Previously, there was no hook between `configure_model()` and `configure_optimizers()`, making it incompatible with modules that instantiate submodules in `configure_model()`.
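With the new hook in place, a `BaseFinetuning` subclass can rely on those submodules existing when its freezing logic runs. A hedged sketch (`MilestoneFinetuning` and the `backbone` attribute are illustrative):

```python
import lightning.pytorch as pl
from lightning.pytorch.callbacks import BaseFinetuning


class MilestoneFinetuning(BaseFinetuning):
    def freeze_before_training(self, pl_module: pl.LightningModule) -> None:
        # freeze_before_training is BaseFinetuning's existing extension point;
        # with on_before_optimizer_setup it can now run after configure_model()
        # has created pl_module.backbone, yet before configure_optimizers().
        self.freeze(pl_module.backbone)

    def finetune_function(self, pl_module, current_epoch, optimizer):
        # No gradual unfreezing in this sketch.
        pass
```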
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet list:
Reviewer checklist
📚 Documentation preview 📚: https://pytorch-lightning--21272.org.readthedocs.build/en/21272/