Merge dev-1.x into quantize (open-mmlab#430)
* Fix a bug in make_divisible. (open-mmlab#333)

fix bug in make_divisible

Co-authored-by: liukai <liukai@pjlab.org.cn>
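
For context, a minimal sketch of the common MobileNet-style `make_divisible` helper (an assumed shape for illustration; the exact mmrazor implementation and the specific bug fixed here are not shown in this commit message):

    def make_divisible(value, divisor, min_value=None, min_ratio=0.9):
        """Round `value` to a hardware-friendly multiple of `divisor`."""
        if min_value is None:
            min_value = divisor
        new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
        # Never shrink the result below min_ratio of the original value.
        if new_value < min_ratio * value:
            new_value += divisor
        return new_value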

* [Fix] Fix counter mapping bug (open-mmlab#331)

* fix counter mapping bug

* move judgment into get_counter_type & update UT

* [Docs] Add MMYOLO projects link (open-mmlab#334)

* [Doc] fix typos in en/usr_guides (open-mmlab#299)

* Update README.md

* Update README_zh-CN.md

Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com>

* [Features] Support `MethodInputsRecorder` and `FunctionInputsRecorder` (open-mmlab#320)

* support MethodInputsRecorder and FunctionInputsRecorder

* fix bugs where the model could not be pickled

* WIP: add pytest for ema model

* fix bugs in recorder and delivery when ema_hook is used

* don't register the DummyDataset

* fix pytest

* [Feature] Add deit-base (open-mmlab#332)

* WIP: support deit

* WIP: add deithead

* WIP: fix checkpoint hook

* fix data preprocessor

* fix cfg

* WIP: add readme

* reset single_teacher_distill

* add metafile

* add model to model-index

* fix configs and readme

* [Feature] Feature map visualization (open-mmlab#293)

* WIP: vis

* WIP: add visualization

* WIP: add visualization hook

* WIP: support razor visualizer

* WIP

* WIP: wrap draw_featmap

* support feature map visualization

* add a demo image for visualization

* fix typos

* change eps to 1e-6

* add pytest for visualization

* fix vis hook

* fix arguments' name

* fix img path

* support draw inference results

* add visualization doc

* fix figure url

* move files

Co-authored-by: weihan cao <HIT-cwh>
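
The wrapped `draw_featmap` renders a (C, H, W) tensor as a heatmap; a hedged usage sketch against mmengine's public API (tensor shape and arguments are illustrative):

    import torch
    from mmengine.visualization import Visualizer

    # A (C, H, W) feature map, e.g. captured by a recorder during inference.
    featmap = torch.rand(256, 32, 32)
    # Collapse channels by their mean and render as an image (ndarray).
    drawn = Visualizer.draw_featmap(featmap, channel_reduction='squeeze_mean')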

* [Feature] Add kd examples (open-mmlab#305)

* support kd for mbv2 and shufflenetv2

* WIP: fix ckpt path

* WIP: fix kd r34-r18

* add metafile

* fix metafile

* delete

* [Doc] add documents about pruning. (open-mmlab#313)

* init

* update user guide

* update images

* update

* update How to prune your model

* update how_to_use_config_tool_of_pruning.md

* update doc

* move location

* update

* update

* update

* add mutablechannels.md

* add references

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>

* [Feature] PyTorch version of `PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient`. (open-mmlab#304)

* add pkd

* add pytest for pkd

* fix cfg

* WIP: support fcos3d

* WIP: support fcos3d pkd

* support mmdet3d

* fix cfgs

* change eps to 1e-6 and add some comments

* fix docstring

* fix cfg

* add assert

* add type hint

* WIP: add readme and metafile

* fix readme

* update metafiles and readme

* fix metafile

* fix pipeline figure
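
PKD's core idea is that minimizing an MSE between standardized feature maps maximizes the Pearson correlation between student and teacher features; a minimal sketch of that idea (normalization axes are an assumption, not necessarily mmrazor's exact PKDLoss; the eps matches the 1e-6 mentioned above):

    import torch
    import torch.nn.functional as F

    def pkd_loss(feat_s, feat_t, eps=1e-6):
        """MSE between standardized (N, C, H, W) features ~ 1 - Pearson r."""
        def standardize(feat):
            mean = feat.mean(dim=(2, 3), keepdim=True)
            std = feat.std(dim=(2, 3), keepdim=True)
            return (feat - mean) / (std + eps)
        return F.mse_loss(standardize(feat_s), standardize(feat_t)) / 2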

* [Refactor] Refactor Mutables and Mutators (open-mmlab#324)

* refactor mutables

* update load fix subnet

* add DumpChosen Typehint

* adapt UTs

* fix lint

* Add GroupMixin to ChannelMutator (temporarily)

* fix type hints

* add GroupMixin doc-string

* modified by comments

* fix type hints

* update subnet format

* fix channel group bugs and add UTs

* fix doc string

* fix comments

* refactor diff module forward

* fix error in channel mutator doc

* fix comments

Co-authored-by: liukai <liukai@pjlab.org.cn>

* [Fix] Update readme (open-mmlab#341)

* update kl readme

* update dsnas readme

* fix url

* Bump version to 1.0.0rc1 (open-mmlab#338)

update version

* [Feature] Add Autoformer algorithm (open-mmlab#315)

* update candidates

* update subnet_sampler_loop

* update candidate

* add readme

* rename variable

* rename variable

* clean

* update

* add doc string

* Revert "[Improvement] Support for candidate multiple dimensional search constraints."

* [Improvement] Update Candidate with multi-dim search constraints. (open-mmlab#322)

* update doc

* add support type

* clean code

* update candidates

* clean

* xx

* set_resource -> set_score

* fix ci bug

* py36 lint

* fix bug

* fix check constraint

* py36 ci

* redesign candidate

* fix pre-commit

* update cfg

* add build_resource_estimator

* fix ci bug

* remove runner.epoch in testcase

* [Feature] Autoformer architecture and dynamicOPs (open-mmlab#327)

* add DynamicSequential

* dynamiclayernorm

* add dynamic_patchembed

* add DynamicMultiheadAttention and DynamicRelativePosition2D

* add channel-level dynamicOP

* add autoformer algo

* clean notes

* adapt channel_mutator

* vit fly

* fix import

* mutable init

* remove annotation

* add DynamicInputResizer

* add unittest for mutables

* add OneShotMutableChannelUnit_VIT

* clean code

* reset unit for vit

* remove attr

* add autoformer backbone UT

* add valuemutator UT

* clean code

* add autoformer algo UT

* update classifier UT

* fix test error

* ignore

* make lint

* update

* fix lint

* mutable_attrs

* fix test

* fix error

* remove DynamicInputResizer

* fix test ci

* remove InputResizer

* rename variables

* modify type

* Continued improvements of ChannelUnit

* fix lint

* fix lint

* remove OneShotMutableChannelUnit

* adjust derived type

* combination mixins

* clean code

* fix sample subnet

* search loop fly

* more annotations

* avoid counter warning and modify batch_augment cfg by gy

* restore

* source_value_mutables restriction

* simplify arch_setting api

* update

* clean

* fix ut

* [Feature] Add performance predictor (open-mmlab#306)

* add predictor with 4 handlers

* [Improvement] Update Candidate with multi-dim search constraints. (open-mmlab#322)

* update doc

* add support type

* clean code

* update candidates

* clean

* xx

* set_resource -> set_score

* fix ci bug

* py36 lint

* fix bug

* fix check constraint

* py36 ci

* redesign candidate

* fix pre-commit

* update cfg

* add build_resource_estimator

* fix ci bug

* remove runner.epoch in testcase

* update metric_predictor:
1. update MetricPredictor;
2. add predictor config for searching;
3. add predictor in evolution_search_loop.
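
A hedged sketch of how such a predictor could be wired into the search loop config (keys are illustrative, not a verified schema):

    # Hypothetical snippet: score candidates with a regression handler
    # instead of evaluating each subnet in full.
    predictor_cfg = dict(
        type='MetricPredictor',
        handler_cfg=dict(type='MLPHandler'),  # one of the handlers added here
    )
    search_loop = dict(
        type='EvolutionSearchLoop',
        predictor_cfg=predictor_cfg,
        # ... dataloader, evaluator, mutation/crossover settings ...
    )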

* add UT for predictor

* add MLPHandler

* patch optional.txt for predictors

* patch test_evolution_search_loop

* refactor apis of predictor and handlers

* fix ut and remove predictor_cfg in predictor

* adapt new mutable & mutator design

* fix ut

* remove unnecessary assert after rebase

* move predictor-build in __init__ & simplify estimator-build

Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn>

* [Feature] Add DCFF (open-mmlab#295)

* add ChannelGroup (open-mmlab#250)

* rebase new dev-1.x

* modification for adding config_template

* add docstring to channel_group.py

* add docstring to mutable_channel_group.py

* rm channel_group_cfg from Graph2ChannelGroups

* change choice type of SequentialChannelGroup from float to int

* add a warning about group-wise conv

* restore __init__ of dynamic op

* in_channel_mutable -> mutable_in_channel

* rm abstractproperty

* add a comment about VT

* rm registry for ChannelGroup

* MUTABLECHANNELGROUP -> ChannelGroupType

* refine docstring of IndexDict

* update docstring

* update docstring

* is_prunable -> is_mutable

* update docstring

* fix error in pre-commit

* update unittest

* add return type

* unify init_xxx api

* add unittest about init of MutableChannelGroup

* update according to reviews

* sequential_channel_group -> sequential_mutable_channel_group

Co-authored-by: liukai <liukai@pjlab.org.cn>

* Add BaseChannelMutator and refactor Autoslim (open-mmlab#289)

* add BaseChannelMutator

* add autoslim

* tmp

* make SequentialMutableChannelGroup accept both num and ratio as choice, and support divisor

* update OneShotMutableChannelGroup

* pass supernet training of autoslim

* refine autoslim

* fix bug in OneShotMutableChannelGroup

* refactor make_divisible

* fix spelling error: channl -> channel

* init_using_backward_tracer -> init_from_backward_tracer
init_using_fx_tracer -> init_from_fx_tracer

* refine SequentialMutableChannelGroup

* let mutator support models with dynamicop

* support defining search space in model

* tracer_cfg -> parse_cfg

* refine

* using -> from

* update docstring

* update docstring

Co-authored-by: liukai <liukai@pjlab.org.cn>

* tmpsave

* migrate ut

* tmpsave2

* add loss collector

* refactor slimmable and add l1-norm (open-mmlab#291)

* refactor slimmable and add l1-norm

* make l1-norm support convnd

* update get_channel_groups

* add l1-norm_resnet34_8xb32_in1k.py

* add pretrained to resnet34-l1

* remove old channel mutator

* BaseChannelMutator -> ChannelMutator

* update according to reviews

* add readme to l1-norm

* MBV2_slimmable -> MBV2_slimmable_config

Co-authored-by: liukai <liukai@pjlab.org.cn>

* update config

* fix md & pytorch support <1.9.0 in batchnorm init

* Clean old codes. (open-mmlab#296)

* remove old dynamic ops

* move dynamic ops

* clean old mutable_channels

* rm OneShotMutableChannel

* rm MutableChannel

* refine

* refine

* use SquentialMutableChannel to replace OneshotMutableChannel

* refactor dynamicops folder

* let SquentialMutableChannel support float

Co-authored-by: liukai <liukai@pjlab.org.cn>

* fix ci

* ci fix py3.6.x & add mmpose

* ci fix py3.6.9 in utils/index_dict.py

* fix mmpose

* minimum_version_cpu=3.7

* fix ci 3.7.13

* fix pruning &meta ci

* support python3.6.9

* fix py3.6 import caused by circular import patch in py3.7

* fix py3.6.9

* Add channel-flow (open-mmlab#301)

* base_channel_mutator -> channel_mutator

* init

* update docstring

* allow omitting redundant configs for channel

* add register_mutable_channel_to_a_module to MutableChannelContainer

* update according to reviews 1

* update according to reviews 2

* update according to reviews 3

* remove old docstring

* fix error

* using->from

* update according to reviews

* support self-defined input channel number

* update docstring

* chanenl -> channel_elem

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>

* support >=3.7

* support py3.6.9

* Rename: ChannelGroup -> ChannelUnit (open-mmlab#302)

* refine repr of MutableChannelGroup

* rename folder name

* ChannelGroup -> ChannelUnit

* filename in units folder

* channel_group -> channel_unit

* groups -> units

* group -> unit

* update

* get_mutable_channel_groups -> get_mutable_channel_units

* fix bug

* refine docstring

* fix ci

* fix bug in tracer

Co-authored-by: liukai <liukai@pjlab.org.cn>

* update new channel config format

* update pruning refactor

* update merged pruning

* update commit

* fix dynamic_conv_mixin

* update comments: readme&dynamic_conv_mixins.py

* update readme

* move kl softmax channel pooling to op by comments

* fix comments: fix redundant & split README.md

* dcff in ItePruneAlgorithm

* partial dynamic params for fuseconv

* add step_freq & prune_time check

* update comments

* update comments

* update comments

* fix ut

* fix gpu ut & revise step_freq in ItePruneAlgorithm

* update readme

* revise ItePruneAlgorithm

* fix docs

* fix dynamic_conv attr

* fix ci

Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com>
Co-authored-by: jacky <jacky@xx.com>

* [Fix] Fix optional requirements (open-mmlab#357)

* fix optional requirements

* fix dcff ut

* fix import with get_placeholder

* supplement the previous commit

* [Fix] Fix configs of wrn models and ofd. (open-mmlab#361)

* 1. Revise the configs of wrn22, wrn24, and wrn40. 2. Revise the data_preprocessor of ofd_backbone_resnet50_resnet18_8xb16_cifar10.

* 1. Add README for vanilla-wrn.

* 1. Revise readme of wrn

Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn>

* [Fix] Fix bug on mmrazor visualization, mismatch argument in define and use. (open-mmlab#356)

fix bug on mmrazor visualization, mismatch argument in define and use.

Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com>

* fix bug in benchmark_test (open-mmlab#364)

fix bug in configs

Co-authored-by: Your Name <you@example.com>

* [FIX] Fix wrn configs (open-mmlab#368)

* fix wrn configs

* fix wrn configs

* update online wrn model weight

* [Fix] fix bug on pkd config. Wrong import filename. (open-mmlab#373)

* [CI] Update ci to torch1.13 (open-mmlab#380)

update ci to torch1.13

* [Feature] Add BigNAS algorithm (open-mmlab#219)

* add calibrate-bn-statistics

* add test calibrate-bn-statistics
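
Calibrating BN statistics after swapping in a subnet generally means resetting the running stats and re-estimating them over a few forward passes; a generic sketch (not the BigNAS-specific code):

    import torch
    from torch.nn.modules.batchnorm import _BatchNorm

    def calibrate_bn_statistics(model, loader, num_batches=100):
        """Re-estimate BN running stats for the currently sampled subnet."""
        for m in model.modules():
            if isinstance(m, _BatchNorm):
                m.reset_running_stats()
                m.train()  # let BN update running stats during forward
        with torch.no_grad():
            for i, batch in enumerate(loader):
                if i >= num_batches:
                    break
                model(batch)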

* fix mixins

* fix mixins

* fix mixin tests

* remove slimmable channel mutable and refactor dynamic op

* refact dynamic batch norm

* add progressive dynamic conv2d

* add center crop dynamic conv2d

* refactor dynamic directory

* refactor dynamic sequential

* rename length to depth in dynamic sequential

* add test for derived mutable

* refactor dynamic op

* refactor api of dynamic op

* add derive mutable mixin

* addbignas algorithm

* refactor bignas structure

* add input resizer

* add input resizer to bignas

* move input resizer from algorithm into classifier

* remove components

* add attentive mobilenet

* delete json file

* nearly (within 0.2) align inference accuracy with gml

* move mutate separately in bignas mobilenet backbone

* add zero_init_residual

* add set_dropout

* set dropout in bignas algorithm

* fix registry

* add subnet yaml and nearly align inference accuracy with gml

* add rsb config for bignas

* remove base in config

* add gml bignas config

* convert to iter based

* bignas forward and backward fly

* fix merge conflict

* fix dynamicseq bug

* fix bug and refactor bignas

* arrange configs of bignas

* fix typo

* refactor attentive_mobilenet

* fix channel mismatch due to registration of DerivedMutable

* update bignas & fix se channel mismatch

* add AutoAugmentV2 & remove unnecessary configs

* fix lint

* recover channel assertion in channel unit

* fix a group bug

* fix comments

* add docstring

* add norm in dynamic_embed

* fix search loop & other minor changes

* fix se expansion

* minor change

* add ut for bignas & attentive_mobilenet

* fix ut

* update bignas readme

* rm unnecessary ut & supplement get_placeholder

* fix lint

* fix ut

* add subnet deployment in downstream tasks.

* minor change

* update ofa backbone

* minor fix

* Continued improvements of searchable backbone

* minor change

* drop ratio in backbone

* fix comments

* fix ci test

* fix test

* add dynamic shortcut UT

* modify strategy to fit bignas

* fix test

* fix bug in neck

* fix error

* fix error

* fix yaml

* save subnet ckpt

* merge autoslim_val/test_loop into subnet_val_loop

* move calibrate_bn_mixin to utils

* fix bugs and add docstring

* clean code

* fix register bug

* clean code

* update

Co-authored-by: wangshiguang <wangshiguang@sensetime.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: aptsunny <aptsunny@tongji.edu.cn>
Co-authored-by: sunyue1 <sunyue1@sensetime.com>

* [Bug] Fix ckpt (open-mmlab#372)

fix ckpt

* [Feature] Add tools to convert distill ckpt to student-only ckpt. (open-mmlab#381)

* [Feature] Add tools to convert distill ckpt to student-only ckpt.

* fix bug.

* add --model-only to only save model.

* Make changes according to PR review.
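
In mmrazor distillation checkpoints the student weights typically live under an `architecture.` prefix; a hedged sketch of what such a conversion tool might do (the prefix and checkpoint keys are assumptions):

    import torch

    def convert_distill_ckpt(src_path, dst_path, model_only=False):
        ckpt = torch.load(src_path, map_location='cpu')
        prefix = 'architecture.'  # assumed student prefix in the algorithm
        state = {k[len(prefix):]: v
                 for k, v in ckpt['state_dict'].items()
                 if k.startswith(prefix)}
        new_ckpt = {'state_dict': state}
        if not model_only:
            new_ckpt['meta'] = ckpt.get('meta', {})
        torch.save(new_ckpt, dst_path)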

* Enhance the Abilities of the Tracer for Pruning. (open-mmlab#371)

* tmp

* add new mmdet models

* add docstring

* pass test and pre-commit

* rm razor tracer

* update fx tracer, now it can automatically wrap methods and functions.

* update tracer passed models

* add warning for torch <1.12.0

fix bug for python3.6

update placeholder to support placeholder.XXX

* fix bug

* update docs

* fix lint

* fix parse_cfg in configs

* restore mutablechannel

* test ite prune algorithm when using dist

* add get_model_from_path to MMModelLibrary

* add mm models to DefaultModelLibrary

* add uts

* fix bug

* fix bug

* add uts

* add uts

* add uts

* add uts

* fix bug

* restore ite_prune_algorithm

* update doc

* PruneTracer -> ChannelAnalyzer

* prune_tracer -> channel_analyzer

* add test for fxtracer

* fix bug

* fix bug

* PruneTracer -> ChannelAnalyzer

refine

* CustomFxTracer -> MMFxTracer

* fix bug when test with torch<1.12

* update print log

* fix lint

* rm useless code

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: liukai <your_email@abc.example>

* fix bug in placeholder (open-mmlab#395)

* fix bug in placeholder

* remove redundant comment

Co-authored-by: liukai <your_email@abc.example>

* Add get_prune_config and a demo config_pruning (open-mmlab#389)

* update tools and test

* add demo

* disable test doc

* add switch for test tools and test_doc

* fix bug

* update doc

* update tools name

* mv get_channel_units

Co-authored-by: liukai <your_email@abc.example>

* [Improvement] Adapt OFA series with SearchableMobileNetV3 (open-mmlab#385)

* fix mutable bug in AttentiveMobileNetV3

* remove unnecessary code

* update ATTENTIVE_SUBNET_A0-A6.yaml with optimized names

* unify the sampling usage in sandwich_rule-based NAS

* use alias to export subnet

* update OFA configs

* fix attr bug

* fix comments

* update convert_supernet2subnet.py

* correct the way to dump DerivedMutable

* fix convert index bug

* update OFA configs & models

* fix dynamic2static

* generalize convert_ofa_ckpt.py

* update input_resizer

* update README.md

* fix ut

* update export_fix_subnet

* update _dynamic_to_static

* update fix_subnet UT & minor fix bugs

* fix ut

* add new autoaug compared to attentivenas

* clean

* fix act

* fix act_cfg

* update fix_subnet

* fix lint

* add docstring

Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: aptsunny <aptsunny@tongji.edu.cn>

* [Fix] DCFF Deploy Revision (open-mmlab#383)

* dcff deploy revision

* tempsave

* update fix_subnet

* update mutator load

* export/load_fix_subnet revision for mutator

* update fix_subnet with dev-1.x

* update comments

* update docs

* update registry

* [Fix] Fix commands in README to adapt branch 1.x (open-mmlab#400)

* update commands in README for 1.x

* fix commands

Co-authored-by: gaoyang07 <1546308416@qq.com>

* Set requires_grad to False if the teacher is not trainable (open-mmlab#398)
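
A sketch of the change described by this title (attribute names assumed, not the verbatim diff):

    # Inside the distill algorithm, assuming a `teacher_trainable` flag:
    if not self.teacher_trainable:
        for param in self.teacher.parameters():
            param.requires_grad = False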

* add choice and mask of units to checkpoint (open-mmlab#397)

* add choice and mask of units to checkpoint

* update

* fix bug

* remove device operation

* fix bug

* fix circle ci error

* fix error in numpy for circle ci

* fix bug in requirements

* restore

* add a note

* a new solution

* save mutable_channel.mask as float for dist training

* refine

* mv meta file test

Co-authored-by: liukai <your_email@abc.example>
Co-authored-by: jacky <jacky@xx.com>

* [Bug] Fix fpn teacher distill (open-mmlab#388)

fix fpn distill

* [CodeCamp open-mmlab#122] Support KD algorithm MGD for detection. (open-mmlab#377)

* [Feature] Support KD algorithm MGD for detection.

* use connector to beautify MGD.

* fix typo, add unittest.

* fix mgd loss unittest.

* fix mgd connector unittest.

* add model pth and log file.

* add mAP.

* update l1 config (open-mmlab#405)

* add l1 config

* update l1 config

Co-authored-by: jacky <jacky@xx.com>

* [Feature] Add greedy search for AutoSlim (open-mmlab#336)

* WIP: add greedysearch

* fix greedy search and add bn_training_mode to autoslim

* fix cfg files

* fix autoslim configs

* fix bugs when converting dynamic bn to static bn

* change to test loop

* refactor greedy search

* rebase and fix greedysearch

* fix lint

* fix and delete useless codes

* fix pytest

* fix pytest and add bn_training_mode

* fix lint

* add reference to AutoSlimGreedySearchLoop's docstring

* sort candidate_choices

* fix save subnet

* delete useless codes in channel container

* change file names: convert greedy_search_loop to autoslim_greedy_search_loop

* [Fix] Fix metafile (open-mmlab#422)

* fix ckpt path in metafile and readme

* fix darts file path

* fix docstring in ConfigurableDistiller

* fix darts

* fix error

* add darts of mmrazor version

* delete py36

Co-authored-by: liukai <your_email@abc.example>

* update bignas cfg (open-mmlab#412)

* check attentivenas training

* update ckpt link

* update supernet log

Co-authored-by: aptsunny <aptsunny@tongji.edu.cn>

* Bump version to 1.0.0rc2 (open-mmlab#423)

bump version to 1.0.0rc2

Co-authored-by: liukai <your_email@abc.example>

* fix lint

* fix ci

* add tmp docstring for passing ci

* add tmp docstring for passing ci

* fix ci

* add get_placeholder for quant

* add skip for unittest

* fix package placeholder bug

* add version judgement in __init__

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: kitecats <90194592+kitecats@users.noreply.github.com>
Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com>
Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com>
Co-authored-by: jacky <jacky@xx.com>
Co-authored-by: pppppM <67539920+pppppM@users.noreply.github.com>
Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn>
Co-authored-by: zengyi <31244134+spynccat@users.noreply.github.com>
Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com>
Co-authored-by: zhongyu zhang <43191879+wilxy@users.noreply.github.com>
Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn>
Co-authored-by: Xianpan Zhou <32625100+TinyTigerPan@users.noreply.github.com>
Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: P.Huang <37200926+FreakieHuang@users.noreply.github.com>
Co-authored-by: qiufeng <44188071+wutongshenqiu@users.noreply.github.com>
Co-authored-by: wangshiguang <wangshiguang@sensetime.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: sunyue1 <sunyue1@sensetime.com>
Co-authored-by: liukai <your_email@abc.example>
Co-authored-by: Ming-Hsuan-Tu <qrnnis2623891@gmail.com>
Co-authored-by: Yivona <120088893+yivona08@users.noreply.github.com>
Co-authored-by: Yue Sun <aptsunny@alumni.tongji.edu.cn>
26 people authored and humu789 committed Apr 11, 2023
1 parent f47d49a commit a725b51
Showing 41 changed files with 838 additions and 305 deletions.
38 changes: 38 additions & 0 deletions .github/workflows/build.yml
@@ -31,6 +31,44 @@ jobs:
python-version: [3.7]
torch: [1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0]
include:
- torch: 1.6.0
torch_version: 1.6
torchvision: 0.7.0
- torch: 1.7.0
torch_version: 1.7
torchvision: 0.8.1
- torch: 1.7.0
torch_version: 1.7
torchvision: 0.8.1
python-version: 3.8
- torch: 1.8.0
torch_version: 1.8
torchvision: 0.9.0
- torch: 1.8.0
torch_version: 1.8
torchvision: 0.9.0
python-version: 3.8
- torch: 1.9.0
torch_version: 1.9
torchvision: 0.10.0
- torch: 1.9.0
torch_version: 1.9
torchvision: 0.10.0
python-version: 3.8
- torch: 1.10.0
torch_version: 1.10
torchvision: 0.11.0
- torch: 1.10.0
torch_version: 1.10
torchvision: 0.11.0
python-version: 3.8
- torch: 1.11.0
torch_version: 1.11
torchvision: 0.12.0
- torch: 1.11.0
torch_version: 1.11
torchvision: 0.12.0
python-version: 3.8
- torch: 1.12.0
torch_version: 1.12
torchvision: 0.13.0
4 changes: 4 additions & 0 deletions configs/pruning/mmpose/dcff/fix_subnet.json
@@ -54,7 +54,11 @@
"min_value":1,
"min_ratio":0.9
},
<<<<<<< HEAD
"choice":0.59375
=======
"choice":0.59374
>>>>>>> 985a611e (Merge dev-1.x into quantize (#430))
},
"backbone.layer2.1.conv1_(0, 128)_128":{
"init_args":{
@@ -1,7 +1,11 @@
_base_ = ['dcff_pointrend_resnet50_8xb2_cityscapes.py']

# model settings
<<<<<<< HEAD
_base_.model = dict(
=======
model_cfg = dict(
>>>>>>> 985a611e (Merge dev-1.x into quantize (#430))
_scope_='mmrazor',
type='sub_model',
cfg=_base_.architecture,
9 changes: 4 additions & 5 deletions mmrazor/engine/__init__.py
@@ -4,15 +4,14 @@
from .optimizers import SeparateOptimWrapperConstructor
from .runner import (AutoSlimGreedySearchLoop, DartsEpochBasedTrainLoop,
DartsIterBasedTrainLoop, EvolutionSearchLoop,
GreedySamplerTrainLoop, SelfDistillValLoop,
SingleTeacherDistillValLoop, SlimmableValLoop,
SubnetValLoop)
GreedySamplerTrainLoop, PTQLoop, QATEpochBasedLoop,
SelfDistillValLoop, SingleTeacherDistillValLoop,
SlimmableValLoop, SubnetValLoop)

__all__ = [
'SeparateOptimWrapperConstructor', 'DumpSubnetHook',
'SingleTeacherDistillValLoop', 'DartsEpochBasedTrainLoop',
'DartsIterBasedTrainLoop', 'SlimmableValLoop', 'EvolutionSearchLoop',
'GreedySamplerTrainLoop', 'EstimateResourcesHook', 'SelfDistillValLoop',
'AutoSlimGreedySearchLoop', 'SubnetValLoop', 'StopDistillHook',
'DMCPSubnetHook'
'AutoSlimGreedySearchLoop', 'SubnetValLoop', 'PTQLoop', 'QATEpochBasedLoop'
]
4 changes: 2 additions & 2 deletions mmrazor/engine/runner/__init__.py
@@ -13,6 +13,6 @@
'SingleTeacherDistillValLoop', 'DartsEpochBasedTrainLoop',
'DartsIterBasedTrainLoop', 'SlimmableValLoop', 'EvolutionSearchLoop',
'GreedySamplerTrainLoop', 'SubnetValLoop', 'SelfDistillValLoop',
'ItePruneValLoop', 'AutoSlimGreedySearchLoop', 'PTQLoop',
'QATEpochBasedLoop'
'ItePruneValLoop', 'AutoSlimGreedySearchLoop', 'QATEpochBasedLoop',
'PTQLoop'
]
1 change: 0 additions & 1 deletion mmrazor/engine/runner/iteprune_val_loop.py
@@ -52,7 +52,6 @@ def _save_fix_subnet(self):
file.write(fix_subnet)
torch.save({'state_dict': static_model.state_dict()},
osp.join(self.runner.work_dir, weight_name))

self.runner.logger.info(
'export finished and '
f'{subnet_name}, '
15 changes: 12 additions & 3 deletions mmrazor/engine/runner/quantization_loops.py
@@ -4,9 +4,18 @@
import torch
from mmengine.evaluator import Evaluator
from mmengine.runner import EpochBasedTrainLoop, TestLoop, ValLoop
from torch.ao.quantization import (disable_observer, enable_fake_quant,
enable_observer)
from torch.nn.intrinsic.qat import freeze_bn_stats

try:
from torch.ao.quantization import (disable_observer, enable_fake_quant,
enable_observer)
from torch.nn.intrinsic.qat import freeze_bn_stats
except ImportError:
from mmrazor.utils import get_placeholder
disable_observer = get_placeholder('torch>=1.13')
enable_fake_quant = get_placeholder('torch>=1.13')
enable_observer = get_placeholder('torch>=1.13')
freeze_bn_stats = get_placeholder('torch>=1.13')

from torch.utils.data import DataLoader

from mmrazor.registry import LOOPS
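
The try/except above lets mmrazor import on older torch and fail only when the quantization feature is actually used; `get_placeholder` plausibly returns a stand-in that raises on instantiation. A minimal sketch of that idea (not the verbatim mmrazor implementation):

    def get_placeholder(string):
        """Return a class that defers the ImportError until it is used."""
        class PlaceHolder:
            def __init__(self, *args, **kwargs):
                raise ImportError(
                    f'`{string}` is required but not available.')
        return PlaceHolder
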
2 changes: 2 additions & 0 deletions mmrazor/models/algorithms/nas/autoslim.py
@@ -75,6 +75,8 @@ def __init__(self,
self._optim_wrapper_count_status_reinitialized = False
self.norm_training = norm_training

self.bn_training_mode = bn_training_mode

def _build_mutator(self,
mutator: VALID_MUTATOR_TYPE = None) -> ChannelMutator:
"""Build mutator."""
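
`bn_training_mode` presumably keeps BN layers computing batch statistics during evaluation so the sampled subnet is scored with fresh stats; a hedged sketch of how such a flag is typically consumed (not the verbatim AutoSlim code):

    from torch.nn.modules.batchnorm import _BatchNorm

    def set_bn_training_mode(model, bn_training_mode):
        """Keep BN in training mode even when the model is in eval()."""
        if bn_training_mode:
            for m in model.modules():
                if isinstance(m, _BatchNorm):
                    m.training = True
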
4 changes: 4 additions & 0 deletions mmrazor/models/algorithms/pruning/ite_prune_algorithm.py
@@ -10,6 +10,7 @@
from mmrazor.models.mutables import MutableChannelUnit
from mmrazor.models.mutators import ChannelMutator
from mmrazor.registry import MODELS
from mmrazor.utils import ValidFixMutable
from ..base import BaseAlgorithm

LossResults = Dict[str, torch.Tensor]
@@ -97,6 +98,8 @@ class ItePruneAlgorithm(BaseAlgorithm):
mutator_cfg (Union[Dict, ChannelMutator], optional): The config
of a mutator. Defaults to dict( type='ChannelMutator',
channel_unit_cfg=dict( type='SequentialMutableChannelUnit')).
fix_subnet (str | dict | :obj:`FixSubnet`): The path of yaml file or
loaded dict or built :obj:`FixSubnet`. Defaults to None.
data_preprocessor (Optional[Union[Dict, nn.Module]], optional):
Defaults to None.
target_pruning_ratio (dict, optional): The prune-target. The template
@@ -118,6 +121,7 @@ def __init__(self,
type='ChannelMutator',
channel_unit_cfg=dict(
type='SequentialMutableChannelUnit')),
fix_subnet: Optional[ValidFixMutable] = None,
data_preprocessor: Optional[Union[Dict, nn.Module]] = None,
target_pruning_ratio: Optional[Dict[str, float]] = None,
step_freq=1,
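
With the new `fix_subnet` argument, a pruning config can presumably point directly at an exported subnet; an illustrative (unverified) snippet with a hypothetical path:

    model = dict(
        type='ItePruneAlgorithm',
        architecture=_base_.architecture,  # assumed to come from _base_
        mutator_cfg=dict(
            type='ChannelMutator',
            channel_unit_cfg=dict(type='SequentialMutableChannelUnit')),
        fix_subnet='path/to/fix_subnet.json',
    )
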
9 changes: 7 additions & 2 deletions mmrazor/models/algorithms/quantization/mm_architecture.py
@@ -7,12 +7,17 @@
from mmengine.runner import load_checkpoint
from mmengine.structures import BaseDataElement
from torch import nn
from torch.ao.quantization import FakeQuantizeBase

from mmrazor.models.task_modules import build_graphmodule
from mmrazor.models.task_modules.tracer import build_graphmodule
from mmrazor.registry import MODEL_WRAPPERS, MODELS
from ..base import BaseAlgorithm

try:
from torch.ao.quantization import FakeQuantizeBase
except ImportError:
from mmrazor.utils import get_placeholder
FakeQuantizeBase = get_placeholder('torch>=1.13')

LossResults = Dict[str, torch.Tensor]
TensorResults = Union[Tuple[torch.Tensor], torch.Tensor]
PredictResults = List[BaseDataElement]
6 changes: 5 additions & 1 deletion mmrazor/models/fake_quants/base.py
@@ -1,4 +1,8 @@
# Copyright (c) OpenMMLab. All rights reserved.
from torch.ao.quantization import FakeQuantize
try:
from torch.ao.quantization import FakeQuantize
except ImportError:
from mmrazor.utils import get_placeholder
FakeQuantize = get_placeholder('torch>=1.13')

BaseFakeQuantize = FakeQuantize
8 changes: 6 additions & 2 deletions mmrazor/models/fake_quants/torch_fake_quants.py
@@ -2,10 +2,14 @@
import inspect
from typing import List

import torch.ao.quantization.fake_quantize as torch_fake_quant_src

from mmrazor.registry import MODELS

try:
import torch.ao.quantization.fake_quantize as torch_fake_quant_src
except ImportError:
from mmrazor.utils import get_package_placeholder
torch_fake_quant_src = get_package_placeholder('torch>=1.13')


def register_torch_fake_quants() -> List[str]:
"""Register fake_quants in ``torch.ao.quantization.fake_quantize`` to the
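
Note the module-level variant here: `torch_fake_quant_src` is used as a package whose members are enumerated, so a plain class placeholder would not do. A sketch of the idea (assumed, not verbatim):

    def get_package_placeholder(string):
        """Module-style placeholder: attribute access raises ImportError."""
        class PackagePlaceHolder:
            def __getattr__(self, name):
                raise ImportError(
                    f'`{string}` is required to access `{name}`.')
        return PackagePlaceHolder()
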
1 change: 0 additions & 1 deletion mmrazor/models/losses/__init__.py
@@ -1,6 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .ab_loss import ABLoss
from .adaround_loss import AdaRoundLoss
from .at_loss import ATLoss
from .crd_loss import CRDLoss
from .cross_entropy_loss import CrossEntropyLoss
@@ -4,11 +4,13 @@

from mmrazor.models.mutables import OneShotMutableChannelUnit
from mmrazor.registry import MODELS
from ..group_mixin import DynamicSampleMixin
from .channel_mutator import ChannelMutator, ChannelUnitType


@MODELS.register_module()
class OneShotChannelMutator(ChannelMutator[OneShotMutableChannelUnit]):
class OneShotChannelMutator(ChannelMutator[OneShotMutableChannelUnit],
DynamicSampleMixin):
"""OneShotChannelMutator based on ChannelMutator. It use
OneShotMutableChannelUnit by default.
68 changes: 68 additions & 0 deletions mmrazor/models/mutators/group_mixin.py
@@ -8,6 +8,11 @@
from mmrazor.models.mutables.mutable_module import MutableModule
from .base_mutator import MUTABLE_TYPE

if sys.version_info < (3, 8):
from typing_extensions import Protocol
else:
from typing import Protocol


class GroupMixin():
"""A mixin for :class:`BaseMutator`, which can group mutables by
@@ -259,3 +264,66 @@ def _check_valid_groups(self, alias2mutable_names: Dict[str, List[str]],
f'When a mutable is set alias attribute :{alias_key},'
f'the corresponding module name {mutable_name} should '
f'not be used in `custom_group` {custom_group}.')


class MutatorProtocol(Protocol): # pragma: no cover

@property
def mutable_class_type(self) -> Type[BaseMutable]:
...

@property
def search_groups(self) -> Dict:
...


class OneShotSampleMixin:
"""Sample mixin for one-shot mutators."""

def sample_choices(self: MutatorProtocol) -> Dict:
"""Sample choices for each group in search_groups."""
random_choices = dict()
for group_id, modules in self.search_groups.items():
random_choices[group_id] = modules[0].sample_choice()

return random_choices

def set_choices(self: MutatorProtocol, choices: Dict) -> None:
"""Set choices for each group in search_groups."""
for group_id, modules in self.search_groups.items():
choice = choices[group_id]
for module in modules:
module.current_choice = choice


class DynamicSampleMixin(OneShotSampleMixin):

def sample_choices(self: MutatorProtocol, kind: str = 'random') -> Dict:
"""Sample choices for each group in search_groups."""
random_choices = dict()
for group_id, modules in self.search_groups.items():
if kind == 'max':
random_choices[group_id] = modules[0].max_choice
elif kind == 'min':
random_choices[group_id] = modules[0].min_choice
else:
random_choices[group_id] = modules[0].sample_choice()
return random_choices

@property
def max_choice(self: MutatorProtocol) -> Dict:
"""Get max choices for each group in search_groups."""
max_choice = dict()
for group_id, modules in self.search_groups.items():
max_choice[group_id] = modules[0].max_choice

return max_choice

@property
def min_choice(self: MutatorProtocol) -> Dict:
"""Get min choices for each group in search_groups."""
min_choice = dict()
for group_id, modules in self.search_groups.items():
min_choice[group_id] = modules[0].min_choice

return min_choice
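
A usage sketch for the sampling mixins above; `mutator` stands for any object satisfying `MutatorProtocol`, e.g. the `DynamicValueMutator` defined below:

    max_choices = mutator.sample_choices(kind='max')   # == mutator.max_choice
    min_choices = mutator.sample_choices(kind='min')   # == mutator.min_choice
    random_choices = mutator.sample_choices()          # kind defaults to 'random'
    mutator.set_choices(random_choices)                # applied group by group
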
5 changes: 5 additions & 0 deletions mmrazor/models/mutators/value_mutator/__init__.py
@@ -0,0 +1,5 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .dynamic_value_mutator import DynamicValueMutator
from .value_mutator import ValueMutator

__all__ = ['ValueMutator', 'DynamicValueMutator']
14 changes: 14 additions & 0 deletions mmrazor/models/mutators/value_mutator/dynamic_value_mutator.py
@@ -0,0 +1,14 @@
# Copyright (c) OpenMMLab. All rights reserved.
from mmrazor.models.mutables import OneShotMutableValue
from mmrazor.registry import MODELS
from ..group_mixin import DynamicSampleMixin
from .value_mutator import ValueMutator


@MODELS.register_module()
class DynamicValueMutator(ValueMutator, DynamicSampleMixin):
"""Dynamic value mutator with type as `OneShotMutableValue`."""

@property
def mutable_class_type(self):
return OneShotMutableValue