
merge pruning into dev-1.x #312

Merged — 9 commits merged into dev-1.x on Oct 10, 2022

Conversation
Conversation


@LKJacky LKJacky commented Oct 10, 2022

Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.

Modification

merge pruning into dev-1.x

LKJacky and others added 9 commits September 14, 2022 14:10
* rebase new dev-1.x

* modification for adding config_template

* add docstring to channel_group.py

* add docstring to mutable_channel_group.py

* rm channel_group_cfg from Graph2ChannelGroups

* change choice type of SequentialChannelGroup from float to int

* add a warning about group-wise conv

* restore __init__ of dynamic op

* in_channel_mutable -> mutable_in_channel

* rm abstractproperty
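(`abstractproperty` has been deprecated since Python 3.3 in favor of stacking `@property` on `@abstractmethod`. A minimal sketch of the pattern this commit likely applies — the class names below are illustrative, not mmrazor's actual code:)

```python
from abc import ABC, abstractmethod


class BaseChannelGroup(ABC):  # illustrative name, not mmrazor's class
    @property
    @abstractmethod
    def num_channels(self) -> int:
        """Number of channels managed by this group."""


class FixedChannelGroup(BaseChannelGroup):
    def __init__(self, num: int):
        self._num = num

    @property
    def num_channels(self) -> int:
        return self._num


print(FixedChannelGroup(8).num_channels)  # 8
```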

* add a comment about VT

* rm registry for ChannelGroup

* MUTABLECHANNELGROUP -> ChannelGroupType

* refine docstring of IndexDict

* update docstring

* update docstring

* is_prunable -> is_mutable

* update docstring

* fix error in pre-commit

* update unittest

* add return type

* unify init_xxx api

* add unittest about init of MutableChannelGroup

* update according to reviews

* sequential_channel_group -> sequential_mutable_channel_group

Co-authored-by: liukai <liukai@pjlab.org.cn>
* add BaseChannelMutator

* add autoslim

* tmp

* make SequentialMutableChannelGroup accept both num and ratio as choice, and support divisor
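(The commit above describes accepting either an int channel count or a float ratio as a choice, with rounding to a divisor. A minimal sketch of such a conversion, using a hypothetical helper name rather than mmrazor's actual implementation:)

```python
def choice_to_num_channels(choice, num_channels, divisor=1):
    """Convert a choice (int count or float ratio) to a channel count.

    A float choice is interpreted as a ratio of ``num_channels``; an int
    choice is used directly. The result is rounded down to a multiple of
    ``divisor``, but never below ``divisor`` itself.
    (Hypothetical sketch, not mmrazor's actual code.)
    """
    if isinstance(choice, float):
        num = int(round(choice * num_channels))
    else:
        num = int(choice)
    # Round down to the nearest multiple of divisor, keeping at least one group.
    num = max(divisor, num // divisor * divisor)
    return num


print(choice_to_num_channels(0.5, 64))             # 32
print(choice_to_num_channels(20, 64, divisor=8))   # 16
```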

* update OneShotMutableChannelGroup

* pass supernet training of autoslim

* refine autoslim

* fix bug in OneShotMutableChannelGroup

* refactor make_divisible
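(`make_divisible` is the widely used MobileNet-style rounding utility; the sketch below shows the common form of it, under the assumption — not confirmed by this PR — that mmrazor's refactored version follows the same convention:)

```python
def make_divisible(value, divisor, min_value=None, min_ratio=0.9):
    """Round ``value`` to the nearest multiple of ``divisor``.

    Common MobileNet-style helper: the rounded result is kept at or above
    ``min_value`` (default: ``divisor``) and is bumped up by one divisor if
    rounding would drop below ``min_ratio`` of the original value.
    """
    if min_value is None:
        min_value = divisor
    new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
    # Ensure rounding down does not remove more than (1 - min_ratio) of value.
    if new_value < min_ratio * value:
        new_value += divisor
    return new_value


print(make_divisible(30, 8))  # 32
print(make_divisible(7, 8))   # 8
```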

* fix spell error:  channl -> channel

* init_using_backward_tracer -> init_from_backward_tracer
init_using_fx_tracer -> init_from_fx_tracer

* refine SequentialMutableChannelGroup

* let mutator support models with dynamicop

* support define search space in model

* tracer_cfg -> parse_cfg

* refine

* using -> from

* update docstring

* update docstring

Co-authored-by: liukai <liukai@pjlab.org.cn>
* refactor slimmable and add l1-norm

* make l1-norm support convnd

* update get_channel_groups

* add l1-norm_resnet34_8xb32_in1k.py

* add pretrained to resnet34-l1

* remove old channel mutator

* BaseChannelMutator -> ChannelMutator

* update according to reviews

* add readme to l1-norm

* MBV2_slimmable -> MBV2_slimmable_config

Co-authored-by: liukai <liukai@pjlab.org.cn>
* remove old dynamic ops

* move dynamic ops

* clean old mutable_channels

* rm OneShotMutableChannel

* rm MutableChannel

* refine

* refine

* use SquentialMutableChannel to replace OneshotMutableChannel

* refactor dynamicops folder

* let SquentialMutableChannel support float

Co-authored-by: liukai <liukai@pjlab.org.cn>
* base_channel_mutator -> channel_mutator

* init

* update docstring

* allow omitting redundant configs for channel

* add register_mutable_channel_to_a_module to MutableChannelContainer

* update according to reviews 1

* update according to reviews 2

* update according to reviews 3

* remove old docstring

* fix error

* using->from

* update according to reviews

* support self-define input channel number

* update docstring

* chanenl -> channel_elem

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>
* refine repr of MutableChannelGroup

* rename folder name

* ChannelGroup -> ChannelUnit

* filename in units folder

* channel_group -> channel_unit

* groups -> units

* group -> unit

* update

* get_mutable_channel_groups -> get_mutable_channel_units

* fix bug

* refine docstring

* fix ci

* fix bug in tracer

Co-authored-by: liukai <liukai@pjlab.org.cn>
* [feature] CONTRASTIVE REPRESENTATION DISTILLATION with dataset wrapper (open-mmlab#281)

* init

* TD: CRDLoss

* complete UT

* fix docstrings

* fix ci

* update

* fix CI

* DONE

* maintain CRD dataset unique funcs as a mixin

* maintain CRD dataset unique funcs as a mixin

* maintain CRD dataset unique funcs as a mixin

* add UT: CRD_ClsDataset

* init

* TODO: UT test formatting.

* init

* crd dataset wrapper

* update docstring

Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com>

* [Improvement] Update estimator with api revision (open-mmlab#277)

* update estimator usage and fix bugs

* refactor api of estimator & add inner check methods

* fix docstrings

* update search loop and config

* fix lint

* update unittest

* decouple mmdet dependency and fix lint

Co-authored-by: humu789 <humu@pjlab.org.cn>

* [Fix] Fix tracer (open-mmlab#273)

* test image_classifier_loss_calculator

* fix backward tracer

* update SingleStageDetectorPseudoLoss

* merge

* [Feature] Add Dsnas Algorithm (open-mmlab#226)

* [tmp] Update Dsnas

* [tmp] refactor arch_loss & flops_loss

* Update Dsnas & MMRAZOR_EVALUATOR:
1. finalized compute_loss & handle_grads in algorithm;
2. add MMRAZOR_EVALUATOR;
3. fix bugs.

* Update lr scheduler & fix a bug:
1. update param_scheduler & lr_scheduler for dsnas;
2. fix a bug of switching to finetune stage.

* remove old evaluators

* remove old evaluators

* update param_scheduler config

* merge dev-1.x into gy/estimator

* add flops_loss in Dsnas using ResourcesEstimator

* get resources before mutator.prepare_from_supernet

* delete unnecessary broadcast api from gml

* broadcast spec_modules_resources when estimating

* update early fix mechanism for Dsnas

* fix merge

* update units in estimator

* minor change

* fix data_preprocessor api

* add flops_loss_coef

* remove DsnasOptimWrapper

* fix bn eps and data_preprocessor

* fix bn weight decay bug

* add betas for mutator optimizer

* set diff_rank_seed=True for dsnas

* fix start_factor of lr when warm up

* remove .module in non-ddp mode

* add GlobalAveragePoolingWithDropout

* add UT for dsnas

* remove unnecessary channel adjustment for shufflenetv2

* update supernet configs

* delete unnecessary dropout

* delete unnecessary part with minor change on dsnas

* minor change on the flag of search stage

* update README and subnet configs

* add UT for OneHotMutableOP

* [Feature] Update train (open-mmlab#279)

* support auto resume

* add enable auto_scale_lr in train.py

* support '--amp' option

* [Fix] Fix darts metafile (open-mmlab#278)

fix darts metafile

* fix ci (open-mmlab#284)

* fix ci for circle ci

* fix bug in test_metafiles

* add pr_stage_test for github ci

* add multiple version

* fix ut

* fix lint

* Temporarily skip dataset UT

* update github ci

* add github lint ci

* install wheel

* remove timm from requirements

* install wheel when test on windows

* fix error

* fix bug

* remove github windows ci

* fix device error of arch_params when DsnasDDP

* fix CRD dataset ut

* fix scope error

* rm test_cuda in workflows of github

* [Doc] fix typos in en/usr_guides

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: pppppM <gjf_mail@126.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com>
Co-authored-by: SheffieldCao <1751899@tongji.edu.cn>

Co-authored-by: P.Huang <37200926+FreakieHuang@users.noreply.github.com>
Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: humu789 <humu@pjlab.org.cn>
Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: pppppM <gjf_mail@126.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: SheffieldCao <1751899@tongji.edu.cn>

* fix bug when python=3.6

* fix lint

* fix bug when test using cpu only

* refine ci

* fix error in ci

* try ci

* update repr of Channel

* fix error

* mv init_from_predefined_model to MutableChannelUnit

* move tests

* update SquentialMutableChannel

* update l1 mutable channel unit

* add OneShotMutableChannel

* candidate_mode -> choice_mode

* update docstring

* change ci

Co-authored-by: P.Huang <37200926+FreakieHuang@users.noreply.github.com>
Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: humu789 <humu@pjlab.org.cn>
Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: pppppM <gjf_mail@126.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: SheffieldCao <1751899@tongji.edu.cn>

codecov bot commented Oct 10, 2022

Codecov Report

Base: 82.02% // Head: 82.85% // Increases project coverage by +0.83% 🎉

Coverage data is based on head (bc3e038) compared to base (f98ac34).
Patch coverage: 89.09% of modified lines in pull request are covered.

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x     #312      +/-   ##
===========================================
+ Coverage    82.02%   82.85%   +0.83%     
===========================================
  Files          182      190       +8     
  Lines         7538     8331     +793     
  Branches      1150     1298     +148     
===========================================
+ Hits          6183     6903     +720     
- Misses        1132     1168      +36     
- Partials       223      260      +37     
Flag Coverage Δ
unittests 82.85% <89.09%> (+0.83%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmrazor/engine/runner/slimmable_val_loop.py 37.03% <0.00%> (ø)
...tectures/dynamic_ops/mixins/dynamic_conv_mixins.py 91.66% <ø> (ø)
...architectures/dynamic_ops/mixins/dynamic_mixins.py 88.41% <ø> (ø)
...mrazor/models/mutators/channel_mutator/__init__.py 100.00% <ø> (ø)
mmrazor/structures/graph/channel_graph.py 80.00% <80.00%> (ø)
mmrazor/structures/graph/channel_nodes.py 80.31% <80.31%> (ø)
...s/mutables/mutable_channel/base_mutable_channel.py 80.95% <80.95%> (ø)
...models/mutators/channel_mutator/channel_mutator.py 86.71% <86.06%> (-13.29%) ⬇️
...bles/mutable_channel/units/mutable_channel_unit.py 86.39% <86.39%> (ø)
mmrazor/structures/graph/channel_modules.py 86.51% <86.51%> (ø)
... and 40 more


@sunnyxiaohu sunnyxiaohu self-requested a review October 10, 2022 09:02
@pppppM pppppM merged commit b4b7e24 into open-mmlab:dev-1.x Oct 10, 2022
@LKJacky LKJacky deleted the merge-pruning-pr branch October 11, 2022 09:56
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* fix cub path

* use cmake source dir instead
humu789 pushed a commit to humu789/mmrazor that referenced this pull request Feb 13, 2023
* fix pose demo and windows build (open-mmlab#307)

* init

* Update nms_rotated.cpp

* add postprocessing_masks gpu version (open-mmlab#276)

* add postprocessing_masks gpu version

* default device cpu

* pre-commit fix

Co-authored-by: hadoop-basecv <hadoop-basecv@set-gh-basecv-serving-classify11.mt>

* fixed a bug that causes the text recognizer to fail when a (non-NULL) empty bboxes list is passed (open-mmlab#310)

* [Fix] include missing <type_traits> for formatter.h (open-mmlab#313)

* fix formatter

* relax GCC version requirement

* fix

* fix lint

* fix lint

* [Fix] MMEditing cannot save results when testing (open-mmlab#336)

* fix show

* lint

* remove redundant codes

* resolve comment

* type hint

* docs(build): fix typo (open-mmlab#352)

* docs(build): add missing build option

* docs(build): add onnx install

* style(doc): trim whitespace

* docs(build): revert install onnx

* docs(build): add ncnn LD_LIBRARY_PATH

* docs(build): fix path error

* fix openvino export tmp model, add binary flag (open-mmlab#353)

* init circleci (open-mmlab#348)

* fix wrong input mat type (open-mmlab#362)

* fix wrong input mat type

* fix lint

* fix(docs): remove redundant doc tree (open-mmlab#360)

* fix missing ncnn_DIR & InferenceEngine_DIR (open-mmlab#364)

* update doc

Co-authored-by: Chen Xin <xinchen.tju@gmail.com>
Co-authored-by: Shengxi Li <982783556@qq.com>
Co-authored-by: hadoop-basecv <hadoop-basecv@set-gh-basecv-serving-classify11.mt>
Co-authored-by: lzhangzz <lzhang329@gmail.com>
Co-authored-by: Yifan Zhou <singlezombie@163.com>
Co-authored-by: tpoisonooo <khj.application@aliyun.com>
Co-authored-by: lvhan028 <lvhan_028@163.com>