
[Feature] Add the frozen function for Swin Transformer model #574

Merged: 12 commits into open-mmlab:master on Dec 7, 2021

Conversation

fangxu622 (Contributor)

Motivation

This feature request adds the ability to freeze the Swin Transformer backbone by introducing a frozen (bool) parameter (a brief usage sketch follows below).
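
A minimal usage sketch, not part of this PR's diff: it assumes the mmcls build_backbone helper and the flag as originally proposed here (frozen=True); during review the option was unified with ResNet as frozen_stages (int), so the final merged API differs.

# Hypothetical usage sketch of the proposed option (the flag name follows the
# proposal in this PR; the merged version uses frozen_stages instead).
from mmcls.models import build_backbone

cfg = dict(
    type='SwinTransformer',
    arch='tiny',
    frozen=True,  # proposed flag: keep the backbone weights fixed
)
backbone = build_backbone(cfg)
# Depending on the implementation, freezing is applied at construction
# and/or re-applied when switching to train mode.
backbone.train()

# Frozen parameters should report requires_grad=False, so the optimizer
# never updates them.
frozen = sum(not p.requires_grad for p in backbone.parameters())
total = sum(1 for _ in backbone.parameters())
print(f'{frozen}/{total} parameters frozen')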

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that causes the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

mzr1996 (Member) commented Dec 1, 2021

Thanks for your contribution, we will review it soon.

CLAassistant commented Dec 1, 2021

CLA assistant check
All committers have signed the CLA.

Comment on lines 454 to 455:

if isinstance(m, nn.modules.LayerNorm):
    m.eval()

Member:

Does LayerNorm need to be switched to evaluation mode? Unlike BatchNorm, LayerNorm should behave the same in both training and eval mode.

Member:

No, we should check for _BatchNorm instead of LayerNorm, because eval() only affects BatchNorm.
Although SwinTransformer doesn't use BatchNorm by default, we support using norm_cfg to replace the norm type.
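
A short plain-PyTorch sketch of this distinction (illustrative, not from the PR): BatchNorm updates its running statistics on every forward pass in train mode even when its weights are frozen, while LayerNorm keeps no running statistics, so only _BatchNorm subclasses need to be switched to eval() when freezing.

# Sketch: why only BatchNorm needs eval() when layers are frozen.
import torch
import torch.nn as nn
from torch.nn.modules.batchnorm import _BatchNorm

bn = nn.BatchNorm1d(4)
ln = nn.LayerNorm(4)
x = torch.randn(8, 4)

# In train mode BatchNorm updates its running statistics on every forward
# pass, even if requires_grad is False for all of its parameters.
before = bn.running_mean.clone()
bn.train()
bn(x)
assert not torch.allclose(before, bn.running_mean)

# LayerNorm keeps no running statistics, so train()/eval() change nothing.
ln.train()
out_train = ln(x)
ln.eval()
out_eval = ln(x)
assert torch.allclose(out_train, out_eval)

# Hence a freezing helper only needs to switch _BatchNorm subclasses to eval:
for m in (bn, ln):
    if isinstance(m, _BatchNorm):
        m.eval()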

codecov bot commented Dec 3, 2021

Codecov Report

Merging #574 (cd67e14) into master (c090d3f) will increase coverage by 0.07%.
The diff coverage is 100.00%.

❗ Current head cd67e14 differs from pull request most recent head 592c724. Consider uploading reports for the commit 592c724 to get more accurate results

@@            Coverage Diff             @@
##           master     #574      +/-   ##
==========================================
+ Coverage   79.75%   79.83%   +0.07%     
==========================================
  Files         107      107              
  Lines        6120     6144      +24     
  Branches     1046     1056      +10     
==========================================
+ Hits         4881     4905      +24     
  Misses       1105     1105              
  Partials      134      134              
Flag        Coverage Δ
unittests   79.83% <100.00%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                               Coverage Δ
mmcls/models/backbones/swin_transformer.py   91.86% <100.00%> (+1.31%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@@ -291,6 +291,8 @@ def __init__(self,
use_abs_pos_embed=False,
auto_pad=False,
with_cp=False,
frozen=False,
Member:

Referring to ResNet, it's better to unify the API and use frozen_stages (int) to support freezing all stages up to a specific stage (see the sketch below).
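
A rough sketch of the ResNet-style convention being suggested (illustrative only; the class and attribute names such as patch_embed and stages are assumptions, not the PR's exact code):

# Sketch of ResNet-style freezing: freeze the patch embedding plus the first
# `frozen_stages` stages, and keep BatchNorm layers in eval mode if norm_eval.
import torch.nn as nn
from torch.nn.modules.batchnorm import _BatchNorm


class SwinLikeBackbone(nn.Module):
    """Minimal stand-in for a staged backbone with ResNet-style freezing."""

    def __init__(self, frozen_stages=-1, norm_eval=False):
        super().__init__()
        self.frozen_stages = frozen_stages
        self.norm_eval = norm_eval
        self.patch_embed = nn.Conv2d(3, 96, kernel_size=4, stride=4)
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Linear(96, 96), nn.LayerNorm(96))
             for _ in range(4)])

    def _freeze_stages(self):
        # frozen_stages = -1 freezes nothing; frozen_stages = i freezes the
        # patch embedding plus the first i stages.
        if self.frozen_stages >= 0:
            self.patch_embed.eval()
            for param in self.patch_embed.parameters():
                param.requires_grad = False
        for i in range(self.frozen_stages):
            stage = self.stages[i]
            stage.eval()
            for param in stage.parameters():
                param.requires_grad = False

    def train(self, mode=True):
        super().train(mode)
        self._freeze_stages()
        if mode and self.norm_eval:
            for m in self.modules():
                # Only BatchNorm has running statistics that eval() affects.
                if isinstance(m, _BatchNorm):
                    m.eval()
        return self

With frozen_stages=-1 nothing is frozen; frozen_stages=2, for example, keeps the patch embedding and the first two stages fixed, while norm_eval=True additionally keeps any BatchNorm statistics frozen during training.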


mzr1996 (Member) left a comment:

LGTM

@mzr1996 mzr1996 changed the title Add the frozen function for Swin Transformer model [Feature] Add the frozen function for Swin Transformer model Dec 7, 2021
@mzr1996 mzr1996 merged commit 0aa789f into open-mmlab:master Dec 7, 2021
mzr1996 added a commit to mzr1996/mmpretrain that referenced this pull request Nov 24, 2022
[Feature] Add the frozen function for Swin Transformer model (open-mmlab#574)

* Add the frozen function for Swin Transformer model

* add frozen parameter for swin transformer model

* add norm_eval parameter

* Delete =11.1

* Delete =418,driver

* delete _BatchNorm

* remove LayerNorm , add _BatchNorm

* unifying the style of frozen function refer ResNet

* Improve docs and add unit tests.

Co-authored-by: cxiang26 <cq.xiang@foxmail.com>
Co-authored-by: mzr1996 <mzr1996@163.com>