
[Fix] ZeroRedundancyOptimizer ambiguous error with param groups when pytorch < 1.12.0 #818

Merged
merged 7 commits into open-mmlab:main from fix/zero_optimizer_param_group on Dec 19, 2022

Conversation

@C1rN09 (Collaborator) commented on Dec 12, 2022

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just open the pull request and seek help from the maintainers.

Motivation

See issue #778

Modification

Add a check for param groups and the torch version, and give users a clearer error message, as sketched below.
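
A minimal sketch of the kind of check this PR adds, assuming the wrapper validates its `params` argument before delegating to `torch.distributed.optim.ZeroRedundancyOptimizer`; the helper name and the error message below are illustrative, not the exact merged code:

```python
import torch


def _check_param_groups_supported(params):
    """Illustrative check: ZeroRedundancyOptimizer only accepts multiple
    param groups (a list of dicts) starting from torch 1.12.0."""
    # torch.__version__ may carry a local suffix such as '1.11.0+cu113',
    # so compare only the first two numeric components.
    major, minor = (int(v) for v in torch.__version__.split('.')[:2])
    has_param_groups = isinstance(params, (list, tuple)) and any(
        isinstance(group, dict) for group in params)
    if has_param_groups and (major, minor) < (1, 12):
        raise TypeError(
            'ZeroRedundancyOptimizer does not support param groups (e.g. '
            'from paramwise_cfg) when torch < 1.12.0, but got torch '
            f'{torch.__version__}. Please upgrade PyTorch or remove the '
            'param-wise settings.')
```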

BC-breaking (Optional)

No

Use cases (Optional)

Same as before

Checklist

  1. Pre-commit or other linting tools are used to fix potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  3. If the modification could affect downstream projects, this PR should be tested with them, such as MMDet or MMCls.
  4. The documentation has been modified accordingly, e.g. docstrings or example tutorials.

@CLAassistant commented on Dec 12, 2022

CLA assistant check
All committers have signed the CLA.

@C1rN09 changed the title from "[Fix] zero_optimizer error with param groups when pytorch < 1.12.0" to "[Fix] ZeroRedundancyOptimizer ambiguous error with param groups when pytorch < 1.12.0" on Dec 12, 2022
@codecov bot commented on Dec 12, 2022

Codecov Report

Base: 78.66% // Head: 78.86% // Increases project coverage by +0.19% 🎉

Coverage data is based on head (f8c57b5) compared to base (fe26c65).
Patch coverage: 26.82% of modified lines in pull request are covered.

❗ The current head f8c57b5 differs from the pull request's most recent head 1228931. Consider uploading reports for commit 1228931 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #818      +/-   ##
==========================================
+ Coverage   78.66%   78.86%   +0.19%     
==========================================
  Files         128      128              
  Lines        9348     9368      +20     
  Branches     1848     1857       +9     
==========================================
+ Hits         7354     7388      +34     
+ Misses       1679     1666      -13     
+ Partials      315      314       -1     
Flag Coverage Δ
unittests 78.86% <26.82%> (+0.19%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmengine/optim/optimizer/default_constructor.py 98.23% <ø> (ø)
mmengine/optim/optimizer/zero_optimizer.py 52.17% <0.00%> (-4.97%) ⬇️
mmengine/registry/build_functions.py 80.20% <ø> (ø)
mmengine/registry/default_scope.py 100.00% <ø> (ø)
mmengine/visualization/vis_backend.py 84.49% <ø> (ø)
mmengine/visualization/visualizer.py 93.19% <ø> (ø)
mmengine/runner/checkpoint.py 52.07% <11.11%> (+8.11%) ⬆️
mmengine/model/weight_init.py 35.63% <33.33%> (+0.12%) ⬆️
mmengine/optim/optimizer/optimizer_wrapper.py 62.06% <71.42%> (-0.44%) ⬇️
mmengine/optim/optimizer/optimizer_wrapper_dict.py 98.50% <100.00%> (ø)
... and 1 more


@zhouzaida (Member) commented:
Please update the unit tests in `class TestZeroOptimizer(MultiProcessTestCase)`, e.g. as sketched below.
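
A hedged sketch of such a test. For brevity it uses a plain `unittest.TestCase` rather than `MultiProcessTestCase`, and it assumes the new check raises before any process group is required and that mmengine's wrapper takes an `optimizer_type` string; the test name, exception type, and message regex are illustrative:

```python
import unittest

import torch
import torch.nn as nn

from mmengine.optim import ZeroRedundancyOptimizer


def _torch_version():
    return tuple(int(v) for v in torch.__version__.split('.')[:2])


class TestZeroOptimizerParamGroups(unittest.TestCase):

    @unittest.skipIf(_torch_version() >= (1, 12),
                     'param groups are supported on torch >= 1.12.0')
    def test_param_groups_raise_before_1_12(self):
        model = nn.Linear(2, 2)
        param_groups = [
            dict(params=[model.weight], lr=0.01),
            dict(params=[model.bias], lr=0.1),
        ]
        # Expect a clear, descriptive error instead of the previous
        # ambiguous one.
        with self.assertRaisesRegex(TypeError, 'param groups'):
            ZeroRedundancyOptimizer(param_groups, optimizer_type='SGD')
```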

@nijkah (Contributor) previously approved these changes on Dec 13, 2022 and left a comment:

LGTM

@C1rN09 force-pushed the fix/zero_optimizer_param_group branch from 11a8770 to b2ac889 on December 13, 2022 at 07:47
@zhouzaida previously approved these changes on Dec 13, 2022

@HAOCHENYE (Collaborator) left a comment:

We should also test whether the overridden state_dict function works.
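
A hedged sketch of what such a test body might look like inside `TestZeroOptimizer`; it assumes the overridden `state_dict()` consolidates the sharded state internally (torch's base class requires a `consolidate_state_dict()` call first), and that a process group is already initialized by the test harness:

```python
import torch
import torch.nn as nn

from mmengine.optim import ZeroRedundancyOptimizer


def test_state_dict(self):
    # Assumed to run inside an initialized process group, e.g. set up by
    # MultiProcessTestCase; distributed setup details are omitted here.
    model = nn.Linear(2, 2)
    optim = ZeroRedundancyOptimizer(
        model.parameters(), optimizer_type='SGD', lr=0.01)
    model(torch.randn(4, 2)).sum().backward()
    optim.step()
    # The overridden state_dict() is expected to gather the shards so the
    # full optimizer state is available without an explicit
    # consolidate_state_dict() call.
    state_dict = optim.state_dict()
    self.assertIn('state', state_dict)
    self.assertIn('param_groups', state_dict)
```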

(Review thread on tests/test_optim/test_optimizer/test_optimizer.py: outdated, resolved)
@zhouzaida zhouzaida merged commit 4147e97 into open-mmlab:main Dec 19, 2022
RangiLyu pushed a commit to RangiLyu/mmengine that referenced this pull request Dec 19, 2022
[Fix] ZeroRedundancyOptimizer ambiguous error with param groups when pytorch < 1.12.0 (open-mmlab#818)

* fix zero_optimizer error with param groups when pytorch < 1.12.0

* add docstring

* fix docstring

* add unittest

* change ut to use a valid paramwise_cfg

* modify ut

* fix as comments
Successfully merging this pull request may close these issues.

[Bug] ZeroRedundancyOptimizer cannot support param-wise settings with torch.__version__ < 1.12.0