Feat: Add BackboneLambdaFinetunningCallback #5377

Merged
merged 23 commits into release/1.2-dev from feat/finetunning_callback
Jan 8, 2021

Conversation

tchaton
Contributor

@tchaton tchaton commented Jan 6, 2021

What does this PR do?

This PR adds a fine-tuning callback, a new callback hook on_before_accelerator_backend_setup, and updates the fine-tuning example to use the new callback. The previous example was broken and didn't converge.
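As a rough usage sketch (the constructor argument shown below is an assumption, not necessarily the exact API added in this PR), the callback is attached to the Trainer like any other callback and expects the LightningModule to expose a backbone submodule:

# Hypothetical usage sketch; `unfreeze_backbone_at_epoch` is an assumed argument name
# and the import path follows the files touched in this PR.
import torch.nn as nn
import pytorch_lightning as pl
from pytorch_lightning.callbacks import BackboneLambdaFinetunningCallback


class TransferLearningModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # The callback expects a `backbone` attribute on the LightningModule.
        self.backbone = nn.Sequential(nn.Linear(32, 32, bias=False), nn.BatchNorm1d(32), nn.ReLU())
        self.head = nn.Linear(32, 2)


finetuning_callback = BackboneLambdaFinetunningCallback(unfreeze_backbone_at_epoch=10)
trainer = pl.Trainer(callbacks=[finetuning_callback], max_epochs=20)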

[Screenshot: 2021-01-07 15:12:08]

Fixes # (issue) <- this links related issue to this PR

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Check that target branch and milestone match!

Did you have fun?

Make sure you had fun coding 🙃

@tchaton tchaton added this to the 1.2 milestone Jan 6, 2021
@tchaton tchaton self-assigned this Jan 6, 2021
@pep8speaks

pep8speaks commented Jan 6, 2021

Hello @tchaton! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-01-08 17:42:50 UTC

@tchaton tchaton added the design, feature, and callback labels Jan 6, 2021
@codecov

codecov bot commented Jan 6, 2021

Codecov Report

Merging #5377 (bceb0c1) into release/1.2-dev (0fc264a) will decrease coverage by 0%.
The diff coverage is 87%.

@@               Coverage Diff                @@
##           release/1.2-dev   #5377    +/-   ##
================================================
- Coverage               93%     93%    -0%     
================================================
  Files                  150     151     +1     
  Lines                10490   10592   +102     
================================================
+ Hits                  9719    9808    +89     
- Misses                 771     784    +13     

@justusschock
Member

Looks good to me; I would just also expose the BaseFineTuningCallback, as we also expose ProgressbarBase.
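For reference, a sketch of what exposing it in pytorch_lightning/callbacks/__init__.py could look like (the finetuning submodule path is an assumption, and the class names are taken from this thread rather than the PR's final spelling):

# Hypothetical sketch of pytorch_lightning/callbacks/__init__.py
from pytorch_lightning.callbacks.base import Callback
from pytorch_lightning.callbacks.finetuning import (
    BackboneLambdaFinetunningCallback,
    BaseFineTuningCallback,
)

__all__ = [
    "Callback",
    "BackboneLambdaFinetunningCallback",
    "BaseFineTuningCallback",
]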

tchaton and others added 2 commits January 7, 2021 15:22
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
def finetunning_function(self, pl_module: pl.LightningModule, epoch: int, optimizer: Optimizer, opt_idx: int):
    if epoch == self.milestones[0]:
        unfreeze_and_add_param_group(
            module=pl_module.feature_extractor[-5:],
Contributor

@SeanNaren SeanNaren Jan 7, 2021

Should we add some info about what is actually being unfrozen here?

Contributor

@SeanNaren SeanNaren left a comment

Aside from small spelling nits, everything looks good to me! I also like the new hook; I was wondering whether we could simplify the name to on_before_accelerator_setup, but either way, the more hooks the better!

class FinetunningBoringModel(BoringModel):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 32, bias=False), nn.BatchNorm1d(32), nn.ReLU())
Member

does it work without a backbone, with pure BoringModel?

Contributor Author

No, it expects the LightningModule to have a backbone submodule.

@@ -27,6 +27,10 @@ class Callback(abc.ABC):
    Subclass this class and override any of the relevant hooks
    """

    def on_before_accelerator_backend_setup(self, trainer, pl_module):
Contributor

do we need to add a new callback hook? can't it be done with the existing ones?

Contributor Author

Great question!

We have on_init_start and on_init_end in the Trainer __init__, but there we don't have access to the model.

Then we have on_fit_start in the fit function, but configure_optimizers has already been called during accelerator_backend.setup().

Therefore, I needed to introduce a new hook so I could freeze the model before the configure_optimizers call and then filter out parameters that don't require gradients with
    optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.parameters()), lr=self.lr)

Hope that answers your question.
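To make the ordering concrete, here is a minimal sketch (module, attribute, and hyperparameter names are illustrative assumptions, not the PR's actual test code) of a callback that uses the new hook to freeze a backbone before the optimizer is built:

# Minimal sketch; `backbone`, `head`, and `lr` are illustrative assumptions.
import pytorch_lightning as pl
from torch import nn, optim
from pytorch_lightning.callbacks import Callback


class FreezeBackbone(Callback):
    def on_before_accelerator_backend_setup(self, trainer, pl_module):
        # Runs before accelerator_backend.setup(), i.e. before configure_optimizers(),
        # so the frozen parameters never reach the optimizer built below.
        for param in pl_module.backbone.parameters():
            param.requires_grad = False


class Model(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 32)
        self.head = nn.Linear(32, 2)
        self.lr = 1e-3

    def configure_optimizers(self):
        # Only parameters that still require gradients are handed to the optimizer.
        return optim.Adam(filter(lambda p: p.requires_grad, self.parameters()), lr=self.lr)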

Contributor

In that case, I think on_before_configure_optimizers would be a better name. Easier for users to understand.

Can you also mention the added hook in the CHANGELOG?

@tchaton tchaton enabled auto-merge (squash) January 8, 2021 17:42
@tchaton tchaton merged commit 48718d7 into release/1.2-dev Jan 8, 2021
@tchaton tchaton deleted the feat/finetunning_callback branch January 8, 2021 21:33
7 participants