
[Improvement] Update NasMutator to build search_space in NAS #426

Merged
merged 19 commits into dev-1.x from gy/search_space on Feb 1, 2023

Conversation

@gaoyang07 (Contributor) commented Jan 6, 2023

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

Motivation

Please describe the motivation for this PR and the goal you want to achieve with it.

Modification

  1. unify the mutators for NAS algorithms into a single NasMutator (see the sketch after this list);
  2. treat ChannelMutator as pruning-specific;
  3. remove value_mutators & module_mutators;
  4. keep GroupMixin for NAS only;
  5. add unit tests for NasMutator;
  6. fix bugs.
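
For illustration, a minimal sketch of what the unified setup could look like in a config, assuming the NasMutator type name introduced by this PR; the algorithm name and other keys are placeholders in mmrazor config style, not taken verbatim from this diff:

# Hypothetical config sketch: a single NasMutator replaces the old
# value_mutators / module_mutators pair.
supernet = dict()  # placeholder: the supernet definition goes here
model = dict(
    type='mmrazor.SPOS',  # any NAS algorithm; name assumed
    architecture=supernet,
    mutator=dict(type='mmrazor.NasMutator'))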

BC-breaking (Optional)

Does the modification introduce changes that break backward compatibility for downstream repositories?
If so, please describe how compatibility is broken and how downstream projects should modify their code to stay compatible with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that caused the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification may affect downstream or other related projects, this PR should be tested with those projects, such as MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

@sunnyxiaohu self-requested a review January 9, 2023 10:44
fix_mutable = copied_model.search_subnet()
copied_model.set_subnet(copied_model.sample_subnet())

fix_mutable = export_fix_subnet(copied_model)[0]
Contributor:

Unify fix_mutable to subnet_dict.
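
For reference, a hedged sketch of the naming the comment asks for; the indexed return of export_fix_subnet follows the diff above, the rest is assumed:

copied_model.set_subnet(copied_model.sample_subnet())
# unified name: a plain dict describing the chosen subnet
subnet_dict = export_fix_subnet(copied_model)[0]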

@@ -39,7 +39,7 @@ def run(self):

def _save_fix_subnet(self):
    """Save model subnet config."""
    # TO DO: Modify export_fix_subnet's output. Might contain weight return
    # TODO: Modify export_fix_subnet's output. Might contain weight return
Contributor:

Delete this line (already done).


copied_model = copy.deepcopy(self)
fix_mutable = copied_model.search_subnet()
copied_model.set_subnet(copied_model.sample_subnet())
Contributor:

Unify fix_mutable to subnet_dict

therefore it currently supports NAS/Pruning algorithms with mutator(s).
"""

def _build_search_space(self, prefix=''):
Contributor:

Add a necessary hint for users that mutator.prepare_from_supernet() has to be called before _build_search_space().
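
A minimal sketch of such a hint, combining a docstring note with a runtime guard; the search_groups attribute and the error wording are assumptions, not the actual mmrazor code:

def _build_search_space(self, prefix=''):
    """Build the search space from the collected mutables.

    Note:
        ``mutator.prepare_from_supernet()`` must be called before this
        method; otherwise no mutables have been collected yet.
    """
    if not getattr(self, 'search_groups', None):
        raise RuntimeError(
            'Call `prepare_from_supernet()` before `_build_search_space()`.')
    ...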

def sample_subnet(self, kind='random') -> Dict:
    """Random sample subnet by mutator."""
    subnet = dict()
    for name, modules in self.search_space.items():
Contributor:

Unify 'name' and 'group_id' when traversing self.search_space for sample_subnet, set_subnet, max_subnet and so on.
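
A hedged sketch of the unified traversal the comment suggests, keying everything by group_id; sample_choice() and the structure of search_space are assumptions based on mmrazor's mutable interface:

from typing import Dict

def sample_subnet(self, kind='random') -> Dict:
    """Randomly sample a subnet, keyed by group_id."""
    subnet = dict()
    for group_id, mutables in self.search_space.items():
        # reuse the same group_id key in set_subnet, max_subnet, min_subnet
        subnet[group_id] = mutables[0].sample_choice()
    return subnet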

gaoyang07 and others added 6 commits January 16, 2023 16:58
1. unify mutators for NAS algorithms as the NasMutator;
2. regard ChannelMutator as pruning-specified;
3. remove value_mutators & module_mutators;
4. set GroupMixin only for NAS;
5. revert all changes in ChannelMutator.
@@ -6,7 +6,6 @@
type='sub_model',
cfg=dict(
cfg_path='mmcls::resnet/resnet50_8xb32_in1k.py', pretrained=False),
fix_subnet='configs/pruning/mmcls/dcff/fix_subnet.json',
Contributor:

revert

@@ -5,7 +5,6 @@
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
fix_subnet='configs/pruning/mmdet/dcff/fix_subnet.json',
Contributor:

revert

@@ -5,7 +5,6 @@
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
fix_subnet='configs/pruning/mmpose/dcff/fix_subnet.json',
Contributor:

revert

@@ -5,7 +5,6 @@
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
fix_subnet='configs/pruning/mmseg/dcff/fix_subnet.json',
Contributor:

revert

from mmrazor.registry import MODELS
from mmrazor.utils import ValidFixMutable
from ..base import BaseAlgorithm, LossResults

VALID_MUTATOR_TYPE = Union[BaseMutator, Dict]
Contributor:

Use the unified NasMutator for Autoformer.
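
A hedged sketch of what this would look like; the import path for NasMutator is an assumption based on this PR:

from typing import Dict, Union

from mmrazor.models.mutators import NasMutator  # assumed path

VALID_MUTATOR_TYPE = Union[NasMutator, Dict]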

from mmrazor.models.mutators import ChannelMutator

copied_model = copy.deepcopy(model)
if isinstance(model.mutator, ChannelMutator):
Contributor:

It seems that all conditions could be handled by _dynamic_to_static.
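
A hedged sketch of that simplification, assuming _dynamic_to_static is safe to call on models without dynamic ops; the helper name comes from this codebase, the no-op assumption does not:

import copy

copied_model = copy.deepcopy(model)
# no ChannelMutator isinstance check: _dynamic_to_static is assumed to
# be a no-op when the model contains no dynamic modules
_dynamic_to_static(copied_model)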

def set_choices(self, choices: Dict) -> None:
    """Set choices for each mutable in search space."""
    for name, mutables in self.search_groups.items():
        if name not in choices:
Contributor:

Aliases have already been handled in GroupMixin when building the search_space.
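
A hedged sketch of set_choices once aliases are folded into the group keys at build time; the current_choice setter is assumed from mmrazor's mutable interface:

from typing import Dict

def set_choices(self, choices: Dict) -> None:
    """Set a choice for each group in the search space."""
    for name, mutables in self.search_groups.items():
        # aliases were already resolved by GroupMixin while building
        # search_space, so `name` can be looked up directly
        choice = choices[name]
        for mutable in mutables:
            mutable.current_choice = choice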

mutable_expand_ratio2 = copy.deepcopy(mutable_expand_ratio)
mutable_expand_ratio2.alias += '_se'

derived_se_channels = mutable_expand_ratio2 * mutable_in_channels
Contributor:

What are these changes for?

Contributor (Author):

To avoid duplicated aliases when it comes to fine_grained_mode.
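
In other words, a hedged sketch of the collision being avoided; the grouping behaviour described in the comment is assumed:

import copy

mutable_expand_ratio2 = copy.deepcopy(mutable_expand_ratio)
# without a distinct alias, both copies would fall into the same
# search-space group in fine_grained_mode and be sampled together
mutable_expand_ratio2.alias += '_se'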

@gaoyang07 changed the title from [Improvement] Unify search_groups in mutators as search_space to [Improvement] Update NasMutator to build search_space in NAS on Feb 1, 2023
@@ -129,8 +130,7 @@ def __init__(self,
self.predictor_cfg = predictor_cfg
if self.predictor_cfg is not None:
    self.predictor_cfg['score_key'] = self.score_key
    self.predictor_cfg['search_groups'] = \
        self.model.mutator.search_groups
    self.predictor_cfg['search_groups'] = self.model.search_space
Contributor:

revert

copied_model = copy.deepcopy(model)
if hasattr(model, 'mutator') and \
        isinstance(model.mutator, ChannelMutator):
    _dynamic_to_static(copied_model)
Contributor:

Could the two conditions be summarized by _dynamic_to_static?

# NOTE: with `ModuleMutable` as mutable, the keys in
# self.mutator.arch_params must contain the prefix `module`.
# See `prepare_from_supernet` in `NasMutator` for details.
probs = F.softmax(self.mutator.arch_params['module_' + str(k)],
Contributor:

Replace the fixed string 'module_' with mutator.search_groups.items()[0].mutable_prefix.
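
A hedged sketch of that replacement; note dict.items() is not subscriptable, so the sketch uses next(iter(...)); the mutable_prefix attribute is taken from the comment, everything else is assumed:

import torch.nn.functional as F

# derive the prefix from the first search group instead of hard-coding it
first_group = next(iter(self.mutator.search_groups.values()))
prefix = first_group[0].mutable_prefix  # e.g. 'module'
probs = F.softmax(self.mutator.arch_params[f'{prefix}_{k}'], dim=-1)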

# NOTE: with `ModuleMutable` as mutable, the keys in
# self.mutator.arch_params must contain the prefix `module`.
# See `prepare_from_supernet` in `NasMutator` for details.
self.mutator.arch_params['module_' + str(k)].grad.data.mul_(
Contributor:

Replace the fixed string 'module_' with mutator.search_groups.items()[0].mutable_prefix, as above.


codecov bot commented Feb 1, 2023

Codecov Report

Base: 80.24% // Head: 79.97% // Decreases project coverage by 0.27% ⚠️

Coverage data is based on head (9a8b0d8) compared to base (1c47009).
Patch coverage: 88.03% of modified lines in pull request are covered.

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x     #426      +/-   ##
===========================================
- Coverage    80.24%   79.97%   -0.27%     
===========================================
  Files          256      251       -5     
  Lines        12748    12821      +73     
  Branches      1943     1989      +46     
===========================================
+ Hits         10229    10254      +25     
- Misses        2113     2156      +43     
- Partials       406      411       +5     
Flag Coverage Δ
unittests 79.97% <88.03%> (-0.27%) ⬇️

Flags with carried-forward coverage won't be shown.

Impacted Files Coverage Δ
mmrazor/engine/hooks/estimate_resources_hook.py 36.17% <0.00%> (-0.79%) ⬇️
mmrazor/engine/runner/iteprune_val_loop.py 34.48% <0.00%> (-1.24%) ⬇️
mmrazor/engine/runner/subnet_val_loop.py 23.63% <0.00%> (ø)
...zor/models/algorithms/pruning/slimmable_network.py 96.33% <ø> (ø)
...mrazor/models/distillers/configurable_distiller.py 93.47% <ø> (ø)
mmrazor/engine/hooks/dump_subnet_hook.py 35.00% <25.00%> (-0.30%) ⬇️
...bles/mutable_channel/units/mutable_channel_unit.py 91.09% <66.66%> (-0.52%) ⬇️
mmrazor/models/algorithms/nas/dsnas.py 85.00% <71.87%> (-0.17%) ⬇️
mmrazor/models/algorithms/nas/bignas.py 37.22% <73.91%> (-1.83%) ⬇️
...r/models/mutables/mutable_module/mutable_module.py 94.28% <77.77%> (-5.72%) ⬇️
... and 33 more

☔ View full report at Codecov.

@sunnyxiaohu merged commit a27952d into dev-1.x on Feb 1, 2023
@sunnyxiaohu deleted the gy/search_space branch February 1, 2023 14:51
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* fix ci

* add nvidia key

* remote torch

* recover pytorch
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* docs(docs/zh_cn): add doc and link checker

* docs(REAME): update

* docs(docs/zh_cn): update

* docs(benchmark): update table

* docs(zh_cn/benchmark): update link

* CI(docs): update link check

* ci(doc): update checker

* docs(zh_cn): update

* style(ci): remove useless para

* style(ci): update

* docs(zh_cn): update

* docs(benchmark.md): fix mobilnet link error

* docs(zh_cn/do_regression_test.md): rebase

* docs(docs/zh_cn): add doc and link checker

* Update README_zh-CN.md

* Update README_zh-CN.md

* Update index.rst

* Update check-doc-link.yml

* [Fix] Fix ci (open-mmlab#426)

* fix ci

* add nvidia key

* remote torch

* recover pytorch

* ci(codecov): ignore ci

* docs(zh_cn): add get_started.md

* docs(zh_cn): fix review advice

* docs(readthedocs): update

* docs(zh_CN): update

* docs(zh_CN): revert

* fix(docs): review advices

* fix(docs): review advices

* fix(docs): review

Co-authored-by: q.yao <streetyao@live.com>
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* refactor(onnx2ncnn.cpp): split it to shape_inference, pass and utils

* refactor(onnx2ncnn.cpp): split it to shape_inference, pass and utils

* refactor(onnx2ncnn.cpp): split code

* refactor(net_module.cpp): fix build error

* ci(test_onnx2ncnn.py): add generate model adn run

* ci(onnx2ncnn): add ncnn backend

* ci(test_onnx2ncnn): add converted onnx model`

* ci(onnx2ncnn): fix ncnn tar

* ci(backed-ncnn): simplify dependency install

* ci(onnx2ncnn): fix apt install

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* Update backend-ncnn.yml

* fix(ci): add include algorithm

* Update build.yml

* parent aa85760
author q.yao <streetyao@live.com> 1651287879 +0800
committer tpoisonooo <khj.application@aliyun.com> 1652169959 +0800

[Fix] Fix ci (open-mmlab#426)

* fix ci

* add nvidia key

* remote torch

* recover pytorch

refactor(onnx2ncnn.cpp): split it to shape_inference, pass and utils

* fix(onnx2ncnn): review

* fix(onnx2ncnn): build error

Co-authored-by: q.yao <streetyao@live.com>