[FEATURE] add quant algo `Learned Step Size Quantization` (open-mmlab#346)

* update

* Fix a bug in make_divisible. (open-mmlab#333)

fix bug in make_divisible

Co-authored-by: liukai <liukai@pjlab.org.cn>

* [Fix] Fix counter mapping bug (open-mmlab#331)

* fix counter mapping bug

* move judgment into get_counter_type & update UT

* [Docs]Add MMYOLO projects link (open-mmlab#334)

* [Doc] fix typos in en/usr_guides (open-mmlab#299)

* Update README.md

* Update README_zh-CN.md

Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com>

* [Features]Support `MethodInputsRecorder` and `FunctionInputsRecorder` (open-mmlab#320)

* support MethodInputsRecorder and FunctionInputsRecorder

* fix bugs that the model can not be pickled

* WIP: add pytest for ema model

* fix bugs in recorder and delivery when ema_hook is used

* don't register the DummyDataset

* fix pytest

* updated

* retina loss & predict & tensor DONE

* [Feature] Add deit-base (open-mmlab#332)

* WIP: support deit

* WIP: add deithead

* WIP: fix checkpoint hook

* fix data preprocessor

* fix cfg

* WIP: add readme

* reset single_teacher_distill

* add metafile

* add model to model-index

* fix configs and readme

* [Feature]Feature map visualization (open-mmlab#293)

* WIP: vis

* WIP: add visualization

* WIP: add visualization hook

* WIP: support razor visualizer

* WIP

* WIP: wrap draw_featmap

* support feature map visualization

* add a demo image for visualization

* fix typos

* change eps to 1e-6

* add pytest for visualization

* fix vis hook

* fix arguments' name

* fix img path

* support draw inference results

* add visualization doc

* fix figure url

* move files

Co-authored-by: weihan cao <HIT-cwh>

* [Feature] Add kd examples (open-mmlab#305)

* support kd for mbv2 and shufflenetv2

* WIP: fix ckpt path

* WIP: fix kd r34-r18

* add metafile

* fix metafile

* delete

* [Doc] add documents about pruning. (open-mmlab#313)

* init

* update user guide

* update images

* update

* update How to prune your model

* update how_to_use_config_tool_of_pruning.md

* update doc

* move location

* update

* update

* update

* add mutablechannels.md

* add references

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>

* [Feature] PyTorch version of `PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient`. (open-mmlab#304)

* add pkd

* add pytest for pkd

* fix cfg

* WIP: support fcos3d

* WIP: support fcos3d pkd

* support mmdet3d

* fix cfgs

* change eps to 1e-6 and add some comments

* fix docstring

* fix cfg

* add assert

* add type hint

* WIP: add readme and metafile

* fix readme

* update metafiles and readme

* fix metafile

* fix pipeline figure

* for RFC

* Custom FX initialize

* add UT init

* [Refactor] Refactor Mutables and Mutators (open-mmlab#324)

* refactor mutables

* update load fix subnet

* add DumpChosen Typehint

* adapt UTs

* fix lint

* Add GroupMixin to ChannelMutator (temporarily)

* fix type hints

* add GroupMixin doc-string

* modified by comments

* fix type hits

* update subnet format

* fix channel group bugs and add UTs

* fix doc string

* fix comments

* refactor diff module forward

* fix error in channel mutator doc

* fix comments

Co-authored-by: liukai <liukai@pjlab.org.cn>

* [Fix] Update readme (open-mmlab#341)

* update kl readme

* update dsnas readme

* fix url

* Bump version to 1.0.0rc1 (open-mmlab#338)

update version

* init demo

* add customer_tracer

* add quantizer

* add fake_quant, loop, config

* remove CPatcher in custom_tracer

* demo_try

* init version

* modified base.py

* pre-rebase

* wip of adaround series

* adaround experiment

* transfer to s2

* update api

* point at sub_reconstruction

* pre-checkout

* export onnx

* add customtracer

* fix lint

* move custom tracer

* fix import

* TODO: UTs

* Successfully RUN

* update loop

* update loop docstrings

* update quantizer docstrings

* update qscheme docstrings

* update qobserver docstrings

* update tracer docstrings

* update UTs init

* update UTs init

* fix review comments

* fix CI

* fix UTs

* update torch requirements

Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com>
Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: kitecats <90194592+kitecats@users.noreply.github.com>
Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com>
Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com>
Co-authored-by: jacky <jacky@xx.com>
Co-authored-by: pppppM <67539920+pppppM@users.noreply.github.com>
Co-authored-by: humu789 <humu@pjlab.org.cn>
11 people committed Jan 9, 2023
1 parent 67da3ad commit ffb8247
Showing 54 changed files with 2,910 additions and 61 deletions.
38 changes: 0 additions & 38 deletions .github/workflows/build.yml
@@ -31,44 +31,6 @@ jobs:
python-version: [3.7]
torch: [1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0]
include:
- torch: 1.6.0
torch_version: 1.6
torchvision: 0.7.0
- torch: 1.7.0
torch_version: 1.7
torchvision: 0.8.1
- torch: 1.7.0
torch_version: 1.7
torchvision: 0.8.1
python-version: 3.8
- torch: 1.8.0
torch_version: 1.8
torchvision: 0.9.0
- torch: 1.8.0
torch_version: 1.8
torchvision: 0.9.0
python-version: 3.8
- torch: 1.9.0
torch_version: 1.9
torchvision: 0.10.0
- torch: 1.9.0
torch_version: 1.9
torchvision: 0.10.0
python-version: 3.8
- torch: 1.10.0
torch_version: 1.10
torchvision: 0.11.0
- torch: 1.10.0
torch_version: 1.10
torchvision: 0.11.0
python-version: 3.8
- torch: 1.11.0
torch_version: 1.11
torchvision: 0.12.0
- torch: 1.11.0
torch_version: 1.11
torchvision: 0.12.0
python-version: 3.8
- torch: 1.12.0
torch_version: 1.12
torchvision: 0.13.0
47 changes: 47 additions & 0 deletions configs/quantization/ptq/adaround.py
@@ -0,0 +1,47 @@
_base_ = ['mmcls::resnet/resnet18_8xb32_in1k.py']

test_cfg = dict(
_delete_=True,
type='mmrazor.PTQLoop',
dataloader=_base_.test_dataloader,
evaluator=_base_.test_evaluator,
calibrate_dataloader=_base_.train_dataloader,
batch_num=32,
# reconstruction_cfg=dict(
# pattern='layer',
# loss=dict(
# type='mmrazor.AdaRoundLoss',
# iters=20000
# )
# )
)

model = dict(
_delete_=True,
type='mmrazor.GeneralQuant',
architecture=_base_.model,
quantizer=dict(
type='mmrazor.CustomQuantizer',
is_qat=False,
skipped_methods=[
'mmcls.models.heads.ClsHead._get_loss',
'mmcls.models.heads.ClsHead._get_predictions'
],
qconfig=dict(
qtype='affine',
w_observer=dict(type='mmrazor.MSEObserver'),
a_observer=dict(type='mmrazor.EMAMSEObserver'),
w_fake_quant=dict(type='mmrazor.AdaRoundFakeQuantize'),
a_fake_quant=dict(type='mmrazor.FakeQuantize'),
w_qscheme=dict(
bit=2,
is_symmetry=False,
is_per_channel=True,
is_pot_scale=False,
),
a_qscheme=dict(
bit=4,
is_symmetry=False,
is_per_channel=False,
is_pot_scale=False),
)))
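For context: `AdaRoundFakeQuantize` and the commented-out `reconstruction_cfg` above correspond to AdaRound-style post-training quantization. A sketch of the scheme as stated in the original paper (Nagel et al., 2020), not a description of this particular implementation:

```latex
% AdaRound (Nagel et al., 2020): round-to-nearest is replaced by a learned
% rounding offset h(V) per weight; V is optimized layer by layer.
\[
  \tilde{W} = s \cdot \operatorname{clip}\!\Big(\Big\lfloor \tfrac{W}{s} \Big\rfloor + h(V),\, n,\, p\Big),
  \qquad
  h(V) = \operatorname{clip}\big(\sigma(V)(\zeta - \gamma) + \gamma,\, 0,\, 1\big)
\]
% The layer-wise objective is \min_V \| Wx - \tilde{W}x \|_F^2 plus a
% regularizer that drives each h(V) to exactly 0 or 1.
```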
1 change: 1 addition & 0 deletions configs/quantization/ptq/demo.py
@@ -0,0 +1 @@
_base_ = ['mmcls::resnet/resnet18_8xb32_in1k.py']
1 change: 1 addition & 0 deletions configs/quantization/qat/demo.py
@@ -0,0 +1 @@
_base_ = ['./lsq_resnet50_8xb16_cifar10.py']
37 changes: 37 additions & 0 deletions configs/quantization/qat/lsq_resnet50_8xb16_cifar10.py
@@ -0,0 +1,37 @@
_base_ = ['mmcls::resnet/resnet18_8xb16_cifar10.py']

train_cfg = dict(
_delete_=True,
type='mmrazor.QATEpochBasedLoop',
max_epochs=_base_.train_cfg.max_epochs,
)

model = dict(
_delete_=True,
_scope_='mmrazor',
type='GeneralQuant',
architecture={{_base_.model}},
quantizer=dict(
type='TensorRTQuantizer',
skipped_methods=[
'mmcls.models.heads.ClsHead._get_loss',
'mmcls.models.heads.ClsHead._get_predictions'
],
qconfig=dict(
qtype='affine',
w_observer=dict(type='mmrazor.MinMaxObserver'),
a_observer=dict(type='mmrazor.EMAMinMaxObserver'),
w_fake_quant=dict(type='mmrazor.LearnableFakeQuantize'),
a_fake_quant=dict(type='mmrazor.LearnableFakeQuantize'),
w_qscheme=dict(
bit=2,
is_symmetry=False,
is_per_channel=True,
is_pot_scale=False,
),
a_qscheme=dict(
bit=4,
is_symmetry=False,
is_per_channel=False,
is_pot_scale=False),
)))
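For reference, the `LearnableFakeQuantize` entries are what make this config LSQ: the quantizer's step size is trained jointly with the weights. A sketch of the scheme as defined in the LSQ paper (Esser et al., ICLR 2020); the in-repo implementation may differ in detail:

```latex
% LSQ (Esser et al., 2020): the step size s is a learnable parameter.
\[
  \bar{v} = \Big\lfloor \operatorname{clip}\!\big(\tfrac{v}{s},\, -Q_N,\, Q_P\big) \Big\rceil,
  \qquad
  \hat{v} = \bar{v} \cdot s
\]
% The round is bypassed with the straight-through estimator, so s receives
% gradient -v/s + \lfloor v/s \rceil in range, and -Q_N or Q_P when clipped;
% the paper scales this gradient by g = 1/\sqrt{N Q_P} for stability.
```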
4 changes: 4 additions & 0 deletions docs/en/advanced_guides/tutorials/how_to_prune_your_model.md
@@ -81,7 +81,11 @@ model = dict(
**Specific arguments**:
An algorithm may have its own specific arguments. You need to read its documentation to know how to configure them. Here, we only introduce the specific arguments of ItePruneAlgorithm; a minimal sketch follows the list below.

- target_pruning_ratio: a dict that uses unit names as keys and choice values as values. It indicates how many channels remain after pruning. You can use `python ./tools/pruning/get_channel_units.py --choice {config_file}` to get the choice template. Please refer to [How to Use our Config Tool for Pruning](./how_to_use_config_tool_of_pruning.md).
- step_epoch: the number of epochs between two pruning operations.
- prune_times: the number of times to prune before reaching the pruning target. Here, we prune resnet34 once, so we set it to 1.

docs/en/advanced_guides/tutorials/how_to_use_config_tool_of_pruning.md
@@ -75,9 +75,14 @@ mutator = ChannelMutator(
units={},
),
parse_cfg=dict(
type='ChannelAnalyzer',
demo_input=(1, 3, 224, 224),
tracer_type='BackwardTracer'))
# init the ChannelMutator object with a model
mutator.prepare_from_supernet(model)
config=mutator.config_template(with_unit_init_args=True)
@@ -102,9 +107,16 @@ print(config)
# }
# },
# 'parse_cfg': {
# type='ChannelAnalyzer',
# demo_input=(1, 3, 224, 224),
# tracer_type='BackwardTracer'
# }
# }
```
@@ -121,9 +133,15 @@ mutator2.prepare_from_supernet(resnet34())
To make development smoother, we provide a command-line tool to parse a model and return the config template.

```shell
$ python ./tools/pruning/get_channel_units.py -h

usage: pruning/get_channel_units.py [-h] [-c] [-i] [--choice] [-o OUTPUT_PATH] config

Get channel unit of a model.

@@ -142,7 +160,11 @@ optional arguments:
Take the algorithm Slimmable Network as an example.

```shell
python ./tools/pruning/get_channel_units.py ./configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_slimmable_subnet_8xb256_in1k.py

# {
# "type":"SlimmableChannelMutator",
@@ -160,9 +182,15 @@ python ./tools/pruning/get_channel_units.py ./configs/pruning/mmcls/autoslim/aut
# }
# },
# "parse_cfg":{
# type='ChannelAnalyzer',
# demo_input=(1, 3, 224, 224),
# tracer_type='BackwardTracer'
# }
# }
# }
@@ -171,7 +199,11 @@ python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/aut
The '-i' flag will return the config with the initialization arguments.

```shell
python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_slimmable_subnet_8xb256_in1k.py

# {
# "type":"SlimmableChannelMutator",
@@ -196,9 +228,15 @@ python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/
# }
# },
# "parse_cfg":{
# type='ChannelAnalyzer',
# demo_input=(1, 3, 224, 224),
# tracer_type='BackwardTracer'
# }
# }
# }
@@ -207,7 +245,11 @@ python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/
With "--choice" flag, it will return the choice template, a dict which uses unit_name as key, and use the choice value as value.

```shell
python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_slimmable_subnet_8xb256_in1k.py --choice

# {
# "backbone.conv1.conv_(0, 48)_48":32,
3 changes: 3 additions & 0 deletions docs/en/user_guides/pruning_user_guide.md
@@ -146,5 +146,8 @@ Please refer to the following documents for more details.
- [MutableChannel](../../../mmrazor/models/mutables/mutable_channel/MutableChannel.md)
- [ChannelMutator](../../../mmrazor/models/mutables/mutable_channel/units/mutable_channel_unit.ipynb)
- [MutableChannelUnit](../../../mmrazor/models/mutators/channel_mutator/channel_mutator.ipynb)
- Demos
  - [Config pruning](../../../demo/config_pruning.ipynb)
4 changes: 2 additions & 2 deletions mmrazor/engine/__init__.py
@@ -5,12 +5,12 @@
DartsIterBasedTrainLoop, EvolutionSearchLoop,
GreedySamplerTrainLoop, SelfDistillValLoop,
SingleTeacherDistillValLoop, SlimmableValLoop,
-                    SubnetValLoop)
+                    SubnetValLoop, PTQLoop, QATEpochBasedLoop)

__all__ = [
'SeparateOptimWrapperConstructor', 'DumpSubnetHook',
'SingleTeacherDistillValLoop', 'DartsEpochBasedTrainLoop',
'DartsIterBasedTrainLoop', 'SlimmableValLoop', 'EvolutionSearchLoop',
'GreedySamplerTrainLoop', 'EstimateResourcesHook', 'SelfDistillValLoop',
-    'AutoSlimGreedySearchLoop', 'SubnetValLoop'
+    'AutoSlimGreedySearchLoop', 'SubnetValLoop', 'PTQLoop', 'QATEpochBasedLoop'
]
4 changes: 3 additions & 1 deletion mmrazor/engine/runner/__init__.py
@@ -7,10 +7,12 @@
from .slimmable_val_loop import SlimmableValLoop
from .subnet_sampler_loop import GreedySamplerTrainLoop
from .subnet_val_loop import SubnetValLoop
from .quantization_loops import PTQLoop, QATEpochBasedLoop

__all__ = [
'SingleTeacherDistillValLoop', 'DartsEpochBasedTrainLoop',
'DartsIterBasedTrainLoop', 'SlimmableValLoop', 'EvolutionSearchLoop',
'GreedySamplerTrainLoop', 'SubnetValLoop', 'SelfDistillValLoop',
-    'ItePruneValLoop', 'AutoSlimGreedySearchLoop'
+    'ItePruneValLoop', 'AutoSlimGreedySearchLoop', 'PTQLoop',
+    'QATEpochBasedLoop'
]
