[Improvement] Update NasMutator to build search_space in NAS #426
Conversation
fix_mutable = copied_model.search_subnet()
copied_model.set_subnet(copied_model.sample_subnet())
fix_mutable = export_fix_subnet(copied_model)[0]
Unify `fix_mutable` to `subnet_dict`.
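The diff above replaces the `search_subnet`/`sample_subnet` pair with a single `export_fix_subnet` call. A minimal, self-contained sketch of that unified flow; `ToySupernet` and the stand-in `export_fix_subnet` are illustrative only, not mmrazor's real implementations (the real helper also slices weights, which is why the `[0]` index appears in the diff):

```python
import copy

class ToySupernet:
    """Stand-in for a supernet with choosable layers."""

    def __init__(self):
        self.choices = {'layer1': None, 'layer2': None}

    def sample_subnet(self):
        # Deterministic toy sample; the real mutator samples randomly.
        return {'layer1': 'conv3x3', 'layer2': 'conv5x5'}

    def set_subnet(self, subnet):
        self.choices.update(subnet)

def export_fix_subnet(model):
    # Stand-in: returns (subnet_dict, sliced_weights), hence [0] below.
    return dict(model.choices), None

copied_model = copy.deepcopy(ToySupernet())
copied_model.set_subnet(copied_model.sample_subnet())
subnet_dict = export_fix_subnet(copied_model)[0]
print(subnet_dict)  # {'layer1': 'conv3x3', 'layer2': 'conv5x5'}
```

Returning a plain dict from one exported entry point keeps the saved subnet config decoupled from the mutator that produced it.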
@@ -39,7 +39,7 @@ def run(self):

def _save_fix_subnet(self):
    """Save model subnet config."""
    # TO DO: Modify export_fix_subnet's output. Might contain weight return
    # TODO: Modify export_fix_subnet's output. Might contain weight return
Delete this line (already done).
copied_model = copy.deepcopy(self)
fix_mutable = copied_model.search_subnet()
copied_model.set_subnet(copied_model.sample_subnet())
Unify `fix_mutable` to `subnet_dict`.
therefore it currently supports NAS/Pruning algorithms with mutator(s).
"""

def _build_search_space(self, prefix=''):
Add a necessary hint for users that `mutator.prepare_from_supernet` has to be called before `_build_search_space`.
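To make that precondition concrete, here is a toy sketch of guarding `_build_search_space` so it fails loudly when `prepare_from_supernet` was not called first; the class and attribute names are illustrative, not mmrazor's real `NasMutator`:

```python
class ToyNasMutator:
    """Toy mutator illustrating the required call order."""

    def __init__(self):
        self._prepared = False
        self.search_space = None

    def prepare_from_supernet(self, supernet):
        # Collect mutables from the supernet (toy: just store the dict).
        self._mutables = dict(supernet)
        self._prepared = True

    def _build_search_space(self, prefix=''):
        if not self._prepared:
            raise RuntimeError(
                'Call prepare_from_supernet() before _build_search_space().')
        self.search_space = {prefix + name: m
                             for name, m in self._mutables.items()}
        return self.search_space

mutator = ToyNasMutator()
mutator.prepare_from_supernet({'layer1': ['a', 'b'], 'layer2': ['c']})
space = mutator._build_search_space(prefix='module_')
print(sorted(space))  # ['module_layer1', 'module_layer2']
```

An explicit error at the call site is easier to debug than an `AttributeError` raised deep inside the traversal.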
def sample_subnet(self, kind='random') -> Dict:
    """Random sample subnet by mutator."""
    subnet = dict()
    for name, modules in self.search_space.items():
Unify `name` and `group_id` when traversing `self.search_space` for `sample_subnet`, `set_subnet`, `max_subnet` and so on.
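A toy sketch of the suggested unification: when `sample_subnet` and `set_subnet` traverse `self.search_space` with the same keys, the dict returned by one is directly consumable by the other. The `ToySearchSpace` class here is illustrative only:

```python
import random

class ToySearchSpace:
    """Toy holder keyed by group_id, like a unified search_space."""

    def __init__(self, search_space):
        self.search_space = search_space  # {group_id: candidate list}
        self.choices = {}

    def sample_subnet(self, kind='random'):
        subnet = {}
        for group_id, candidates in self.search_space.items():
            if kind == 'max':
                subnet[group_id] = max(candidates)
            else:
                subnet[group_id] = random.choice(candidates)
        return subnet

    def set_subnet(self, subnet):
        # Same keys as sample_subnet, so no translation layer is needed.
        for group_id in self.search_space:
            self.choices[group_id] = subnet[group_id]

space = ToySearchSpace({'g0': [2, 4, 6], 'g1': [3, 5]})
space.set_subnet(space.sample_subnet('max'))
print(space.choices)  # {'g0': 6, 'g1': 5}
```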
Force-pushed from 90f2da5 to 7866460.
1. Unify mutators for NAS algorithms as the NasMutator;
2. Regard ChannelMutator as pruning-specified;
3. Remove value_mutators & module_mutators;
4. Set GroupMixin only for NAS;
5. Revert all changes in ChannelMutator.
@@ -6,7 +6,6 @@
type='sub_model',
cfg=dict(
    cfg_path='mmcls::resnet/resnet50_8xb32_in1k.py', pretrained=False),
fix_subnet='configs/pruning/mmcls/dcff/fix_subnet.json',
revert
@@ -5,7 +5,6 @@
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
fix_subnet='configs/pruning/mmdet/dcff/fix_subnet.json',
revert
@@ -5,7 +5,6 @@
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
fix_subnet='configs/pruning/mmpose/dcff/fix_subnet.json',
revert
@@ -5,7 +5,6 @@
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
fix_subnet='configs/pruning/mmseg/dcff/fix_subnet.json',
revert
from mmrazor.registry import MODELS
from mmrazor.utils import ValidFixMutable
from ..base import BaseAlgorithm, LossResults

VALID_MUTATOR_TYPE = Union[BaseMutator, Dict]
Use the unified `NasMutator` for Autoformer.
from mmrazor.models.mutators import ChannelMutator

copied_model = copy.deepcopy(model)
if isinstance(model.mutator, ChannelMutator):
It seems that all conditions could be handled by `_dynamic_to_static`.
def set_choices(self, choices: Dict) -> None:
    """Set choices for each mutable in search space."""
    for name, mutables in self.search_groups.items():
        if name not in choices:
Aliases have already been handled in `GroupMixin` when building `search_space`.
mutable_expand_ratio2 = copy.deepcopy(mutable_expand_ratio)
mutable_expand_ratio2.alias += '_se'

derived_se_channels = mutable_expand_ratio2 * mutable_in_channels
Changes for what?
To avoid repeating the alias when it comes to `fine_grained_mode`.
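The deep-copy-and-suffix pattern from the diff above can be sketched in isolation; `ToyMutable` is a stand-in for mmrazor's mutable classes, not the real API:

```python
import copy

class ToyMutable:
    """Stand-in mutable carrying an alias used for grouping."""

    def __init__(self, alias, value):
        self.alias = alias
        self.current_value = value

mutable_expand_ratio = ToyMutable('expand_ratio', 4)

# Deep-copy and suffix the alias so the derived SE-branch value does
# not collide with the original alias when both are registered.
mutable_expand_ratio2 = copy.deepcopy(mutable_expand_ratio)
mutable_expand_ratio2.alias += '_se'

print(mutable_expand_ratio.alias)   # expand_ratio
print(mutable_expand_ratio2.alias)  # expand_ratio_se
```

A shallow copy would not suffice here if the mutable holds nested state, since mutating the copy could then leak back into the original.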
Force-pushed from 4764e25 to 9ba66eb.
Force-pushed from 9ba66eb to 8fdcf09.
@@ -129,8 +130,7 @@ def __init__(self,
self.predictor_cfg = predictor_cfg
if self.predictor_cfg is not None:
    self.predictor_cfg['score_key'] = self.score_key
    self.predictor_cfg['search_groups'] = \
        self.model.mutator.search_groups
    self.predictor_cfg['search_groups'] = self.model.search_space
revert
copied_model = copy.deepcopy(model)
if hasattr(model, 'mutator') and \
        isinstance(model.mutator, ChannelMutator):
    _dynamic_to_static(copied_model)
Could the two conditions be summarized by `_dynamic_to_static`?
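A toy sketch of that point: a single `_dynamic_to_static`-style pass can cover both conditions, because static modules simply pass through unchanged. All names here are illustrative stand-ins, not mmrazor's real `_dynamic_to_static`:

```python
class DynamicLinear:
    """Stand-in dynamic module that knows its static counterpart."""

    def __init__(self, width):
        self.width = width

    def to_static(self):
        return StaticLinear(self.width)

class StaticLinear:
    """Stand-in static module; nothing to convert."""

    def __init__(self, width):
        self.width = width

def dynamic_to_static(modules):
    # Convert every dynamic module; static ones pass through unchanged,
    # so no isinstance check on the mutator type is needed.
    return [m.to_static() if hasattr(m, 'to_static') else m
            for m in modules]

net = [DynamicLinear(64), StaticLinear(32)]
net = dynamic_to_static(net)
print([type(m).__name__ for m in net])  # ['StaticLinear', 'StaticLinear']
```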
# NOTE: with `ModuleMutable` as mutable, the keys in
# self.mutator.arch_params must contain the prefix `module`.
# See `prepare_from_supernet` in `NasMutator` for details.
probs = F.softmax(self.mutator.arch_params['module_' + str(k)],
Replace the fixed string `'module_'` with `mutator.search_groups.items()[0].mutable_prefix`.
# NOTE: with `ModuleMutable` as mutable, the keys in
# self.mutator.arch_params must contain the prefix `module`.
# See `prepare_from_supernet` in `NasMutator` for details.
self.mutator.arch_params['module_' + str(k)].grad.data.mul_(
Replace the fixed string `'module_'` with `mutator.search_groups.items()[0].mutable_prefix`.
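The suggested replacement can be sketched as follows. Note a dict's `.items()` is not directly indexable, so `next(iter(...))` is used here instead, and the `mutable_prefix` attribute is assumed to exist on the grouped mutables; all names are toy stand-ins for `mutator.search_groups` and `arch_params`:

```python
class ToyMutable:
    mutable_prefix = 'module'  # assumed attribute name, not confirmed API

# Toy stand-ins keyed the way the NOTE comments describe.
search_groups = {0: [ToyMutable()], 1: [ToyMutable()]}
arch_params = {'module_0': 0.7, 'module_1': 0.3}

# Derive the prefix from the first search group instead of hard-coding
# the literal 'module_' at every call site.
prefix = next(iter(search_groups.values()))[0].mutable_prefix
keys = [f'{prefix}_{k}' for k in search_groups]
print(keys)  # ['module_0', 'module_1']
```

Deriving the prefix keeps the arch-param lookup correct even if a future mutable type registers under a different prefix.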
Codecov Report: Base: 80.24% // Head: 79.97% // Decreases project coverage by -0.27%.
Additional details and impacted files:
@@ Coverage Diff @@
## dev-1.x #426 +/- ##
===========================================
- Coverage 80.24% 79.97% -0.27%
===========================================
Files 256 251 -5
Lines 12748 12821 +73
Branches 1943 1989 +46
===========================================
+ Hits 10229 10254 +25
- Misses 2113 2156 +43
- Partials 406 411 +5
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just make the pull request and seek help from maintainers.
Motivation
Please describe the motivation of this PR and the goal you want to achieve through this PR.
Modification
- Unify mutators for NAS algorithms as the NasMutator;
- Regard ChannelMutator as pruning-specified;
- Remove value_mutators & module_mutators; set GroupMixin only for NAS;
- Revert all changes in ChannelMutator.
BC-breaking (Optional)
Does the modification introduce changes that break the backward compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
Checklist
Before PR:
After PR: