Merge dev-1.x into quantize (#430)
* Fix a bug in make_divisible. (#333)

fix bug in make_divisible

Co-authored-by: liukai <liukai@pjlab.org.cn>
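
The log above does not show the fix itself. For context, here is a minimal sketch of the usual MobileNet-style `make_divisible` helper that this commit repairs, written under standard assumptions rather than copied from the patch:

```python
from typing import Optional


def make_divisible(value: int,
                   divisor: int,
                   min_value: Optional[int] = None,
                   min_ratio: float = 0.9) -> int:
    """Round ``value`` to the nearest number divisible by ``divisor``."""
    if min_value is None:
        min_value = divisor
    new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
    # Guard against rounding down by more than (1 - min_ratio) * value.
    if new_value < min_ratio * value:
        new_value += divisor
    return new_value
```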

* [Fix] Fix counter mapping bug (#331)

* fix counter mapping bug

* move judgment into get_counter_type & update UT

* [Docs] Add MMYOLO projects link (#334)

* [Doc] fix typos in en/usr_guides (#299)

* Update README.md

* Update README_zh-CN.md

Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com>

* [Features] Support `MethodInputsRecorder` and `FunctionInputsRecorder` (#320)

* support MethodInputsRecorder and FunctionInputsRecorder

* fix bugs where the model could not be pickled

* WIP: add pytest for ema model

* fix bugs in recorder and delivery when ema_hook is used

* don't register the DummyDataset

* fix pytest
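
As a rough illustration of how the new recorders plug into a distiller config, a sketch follows; the `source` paths are hypothetical, and the registry keys `MethodInputs`/`FunctionInputs` are assumed to follow the naming of the existing output recorders (verify against #320):

```python
# Hypothetical recorder config for a distiller. The `source` strings
# below are illustrative import paths, not taken from this PR.
student_recorders = dict(
    head_inputs=dict(
        type='MethodInputs',  # records positional args of a bound method
        source='mmcls.models.ClsHead.forward'),
    relu_inputs=dict(
        type='FunctionInputs',  # records positional args of a free function
        source='torch.nn.functional.relu'))
```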

* [Feature] Add deit-base (#332)

* WIP: support deit

* WIP: add deithead

* WIP: fix checkpoint hook

* fix data preprocessor

* fix cfg

* WIP: add readme

* reset single_teacher_distill

* add metafile

* add model to model-index

* fix configs and readme

* [Feature] Feature map visualization (#293)

* WIP: vis

* WIP: add visualization

* WIP: add visualization hook

* WIP: support razor visualizer

* WIP

* WIP: wrap draw_featmap

* support feature map visualization

* add a demo image for visualization

* fix typos

* change eps to 1e-6

* add pytest for visualization

* fix vis hook

* fix arguments' name

* fix img path

* support draw inference results

* add visualization doc

* fix figure url

* move files

Co-authored-by: weihan cao <HIT-cwh>

* [Feature] Add kd examples (#305)

* support kd for mbv2 and shufflenetv2

* WIP: fix ckpt path

* WIP: fix kd r34-r18

* add metafile

* fix metafile

* delete

* [Doc] add documents about pruning. (#313)

* init

* update user guide

* update images

* update

* update How to prune your model

* update how_to_use_config_tool_of_pruning.md

* update doc

* move location

* update

* update

* update

* add mutablechannels.md

* add references

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>

* [Feature] PyTorch version of `PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient`. (#304)

* add pkd

* add pytest for pkd

* fix cfg

* WIP: support fcos3d

* WIP: support fcos3d pkd

* support mmdet3d

* fix cfgs

* change eps to 1e-6 and add some comments

* fix docstring

* fix cfg

* add assert

* add type hint

* WIP: add readme and metafile

* fix readme

* update metafiles and readme

* fix metafile

* fix pipeline figure
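
A minimal sketch of the PKD idea, assuming the usual formulation: standardize each feature channel to zero mean and unit variance, then take the MSE, which is equivalent up to a constant to maximizing the Pearson correlation coefficient. The `eps=1e-6` mirrors the commit above; the exact normalization axes in mmrazor's implementation may differ.

```python
import torch
import torch.nn.functional as F


def pkd_loss(feat_s: torch.Tensor, feat_t: torch.Tensor,
             eps: float = 1e-6) -> torch.Tensor:
    """Sketch of PKD: MSE between standardized (N, C, H, W) feature maps."""

    def standardize(feat: torch.Tensor) -> torch.Tensor:
        n, c, h, w = feat.shape
        flat = feat.permute(1, 0, 2, 3).reshape(c, -1)
        mean = flat.mean(dim=-1, keepdim=True)
        std = flat.std(dim=-1, keepdim=True)
        return ((flat - mean) / (std + eps)).reshape(c, n, h, w)

    return F.mse_loss(standardize(feat_s), standardize(feat_t)) / 2
```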

* [Refactor] Refactor Mutables and Mutators (#324)

* refactor mutables

* update load fix subnet

* add DumpChosen Typehint

* adapt UTs

* fix lint

* Add GroupMixin to ChannelMutator (temporarily)

* fix type hints

* add GroupMixin doc-string

* modify according to review comments

* fix type hints

* update subnet format

* fix channel group bugs and add UTs

* fix doc string

* fix comments

* refactor diff module forward

* fix error in channel mutator doc

* fix comments

Co-authored-by: liukai <liukai@pjlab.org.cn>

* [Fix] Update readme (#341)

* update kl readme

* update dsnas readme

* fix url

* Bump version to 1.0.0rc1 (#338)

update version

* [Feature] Add Autoformer algorithm (#315)

* update candidates

* update subnet_sampler_loop

* update candidate

* add readme

* rename variable

* rename variable

* clean

* update

* add docstring

* Revert "[Improvement] Support for candidate multiple dimensional search constraints."

* [Improvement] Update Candidate with multi-dim search constraints. (#322)

* update doc

* add support type

* clean code

* update candidates

* clean

* xx

* set_resource -> set_score

* fix ci bug

* py36 lint

* fix bug

* fix check constraint

* py36 ci

* redesign candidate

* fix pre-commit

* update cfg

* add build_resource_estimator

* fix ci bug

* remove runner.epoch in testcase
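
A toy sketch of what a multi-dimensional constraint check can look like; the keys and units are illustrative, not the PR's actual API:

```python
# A candidate passes only if every measured resource falls inside its
# (min, max) range. Keys and units here are illustrative.
constraints = dict(flops=(0., 700.), params=(0., 15.))


def is_feasible(resources: dict, constraints: dict) -> bool:
    return all(lo <= resources[key] <= hi
               for key, (lo, hi) in constraints.items())


assert is_feasible(dict(flops=580., params=7.6), constraints)
assert not is_feasible(dict(flops=820., params=7.6), constraints)
```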

* [Feature] Autoformer architecture and dynamicOPs (#327)

* add DynamicSequential

* dynamiclayernorm

* add dynamic_pathchembed

* add DynamicMultiheadAttention and DynamicRelativePosition2D

* add channel-level dynamicOP

* add autoformer algo

* clean notes

* adapt channel_mutator

* vit fly

* fix import

* mutable init

* remove annotation

* add DynamicInputResizer

* add unittest for mutables

* add OneShotMutableChannelUnit_VIT

* clean code

* reset unit for vit

* remove attr

* add autoformer backbone UT

* add valuemutator UT

* clean code

* add autoformer algo UT

* update classifier UT

* fix test error

* ignore

* make lint

* update

* fix lint

* mutable_attrs

* fix test

* fix error

* remove DynamicInputResizer

* fix test ci

* remove InputResizer

* rename variables

* modify type

* Continued improvements of ChannelUnit

* fix lint

* fix lint

* remove OneShotMutableChannelUnit

* adjust derived type

* combination mixins

* clean code

* fix sample subnet

* search loop fly

* more annotations

* avoid counter warning and modify batch_augment cfg by gy

* restore

* source_value_mutables restriction

* simplify arch_setting api

* update

* clean

* fix ut

* [Feature] Add performance predictor (#306)

* add predictor with 4 handlers

* [Improvement] Update Candidate with multi-dim search constraints. (#322)

* update doc

* add support type

* clean code

* update candidates

* clean

* xx

* set_resource -> set_score

* fix ci bug

* py36 lint

* fix bug

* fix check constraint

* py36 ci

* redesign candidate

* fix pre-commit

* update cfg

* add build_resource_estimator

* fix ci bug

* remove runner.epoch in testcase

* update metric_predictor:
1. update MetricPredictor;
2. add predictor config for searching;
3. add predictor in evolution_search_loop.

* add UT for predictor

* add MLPHandler

* patch optional.txt for predictors

* patch test_evolution_search_loop

* refactor apis of predictor and handlers

* fix ut and remove predictor_cfg in predictor

* adapt new mutable & mutator design

* fix ut

* remove unnecessary assert after rebase

* move predictor-build in __init__ & simplify estimator-build

Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn>
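
The predictor replaces full subnet evaluation with a learned surrogate: encode each sampled subnet as a vector, fit a regressor on (encoding, measured accuracy) pairs, then rank new candidates by predicted score. A rough stand-in using scikit-learn's MLP follows; mmrazor ships its own handlers, whose APIs may differ:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# 64 evaluated subnets, each encoded as a 10-dim vector of search choices.
train_x = np.random.rand(64, 10)
train_y = np.random.rand(64)  # their measured accuracies

predictor = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=2000)
predictor.fit(train_x, train_y)

# During evolution search, rank candidates by predicted accuracy instead
# of evaluating every one of them on the validation set.
candidates = np.random.rand(256, 10)
top10 = candidates[np.argsort(predictor.predict(candidates))[::-1][:10]]
```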

* [Feature] Add DCFF (#295)

* add ChannelGroup (#250)

* rebase new dev-1.x

* modification for adding config_template

* add docstring to channel_group.py

* add docstring to mutable_channel_group.py

* rm channel_group_cfg from Graph2ChannelGroups

* change choice type of SequentialChannelGroup from float to int

* add a warning about group-wise conv

* restore __init__ of dynamic op

* in_channel_mutable -> mutable_in_channel

* rm abstractproperty

* add a comment about VT

* rm registry for ChannelGroup

* MUTABLECHANNELGROUP -> ChannelGroupType

* refine docstring of IndexDict

* update docstring

* update docstring

* is_prunable -> is_mutable

* update docstring

* fix error in pre-commit

* update unittest

* add return type

* unify init_xxx api

* add unittest for the init of MutableChannelGroup

* update according to reviews

* sequential_channel_group -> sequential_mutable_channel_group

Co-authored-by: liukai <liukai@pjlab.org.cn>

* Add BaseChannelMutator and refactor Autoslim (#289)

* add BaseChannelMutator

* add autoslim

* tmp

* make SequentialMutableChannelGroup accept both num and ratio as choice, and support divisor

* update OneShotMutableChannelGroup

* pass supernet training of autoslim

* refine autoslim

* fix bug in OneShotMutableChannelGroup

* refactor make_divisible

* fix spell error:  channl -> channel

* init_using_backward_tracer -> init_from_backward_tracer
init_using_fx_tracer -> init_from_fx_tracer

* refine SequentialMutableChannelGroup

* let mutator support models with dynamicop

* support define search space in model

* tracer_cfg -> parse_cfg

* refine

* using -> from

* update docstring

* update docstring

Co-authored-by: liukai <liukai@pjlab.org.cn>

* tmpsave

* migrate ut

* tmpsave2

* add loss collector

* refactor slimmable and add l1-norm (#291)

* refactor slimmable and add l1-norm

* make l1-norm support convnd

* update get_channel_groups

* add l1-norm_resnet34_8xb32_in1k.py

* add pretrained to resnet34-l1

* remove old channel mutator

* BaseChannelMutator -> ChannelMutator

* update according to reviews

* add readme to l1-norm

* MBV2_slimmable -> MBV2_slimmable_config

Co-authored-by: liukai <liukai@pjlab.org.cn>

* update config

* fix md & pytorch support <1.9.0 in batchnorm init

* Clean old codes. (#296)

* remove old dynamic ops

* move dynamic ops

* clean old mutable_channels

* rm OneShotMutableChannel

* rm MutableChannel

* refine

* refine

* use SquentialMutableChannel to replace OneshotMutableChannel

* refactor dynamicops folder

* let SquentialMutableChannel support float

Co-authored-by: liukai <liukai@pjlab.org.cn>

* fix ci

* ci fix py3.6.x & add mmpose

* ci fix py3.6.9 in utils/index_dict.py

* fix mmpose

* minimum_version_cpu=3.7

* fix ci 3.7.13

* fix pruning &meta ci

* support python3.6.9

* fix py3.6 import caused by circular import patch in py3.7

* fix py3.6.9

* Add channel-flow (#301)

* base_channel_mutator -> channel_mutator

* init

* update docstring

* allow omitting redundant configs for channel

* add register_mutable_channel_to_a_module to MutableChannelContainer

* update according to reviews 1

* update according to reviews 2

* update according to reviews 3

* remove old docstring

* fix error

* using->from

* update according to reviews

* support self-defined input channel number

* update docstring

* chanenl -> channel_elem

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>

* support >=3.7

* support py3.6.9

* Rename: ChannelGroup -> ChannelUnit (#302)

* refine repr of MutableChannelGroup

* rename folder name

* ChannelGroup -> ChannelUnit

* filename in units folder

* channel_group -> channel_unit

* groups -> units

* group -> unit

* update

* get_mutable_channel_groups -> get_mutable_channel_units

* fix bug

* refine docstring

* fix ci

* fix bug in tracer

Co-authored-by: liukai <liukai@pjlab.org.cn>

* update new channel config format

* update pruning refactor

* update merged pruning

* update commit

* fix dynamic_conv_mixin

* update comments: readme&dynamic_conv_mixins.py

* update readme

* move kl softmax channel pooling to op by comments

* fix comments: fix redundant & split README.md

* dcff in ItePruneAlgorithm

* partial dynamic params for fuseconv

* add step_freq & prune_time check

* update comments

* update comments

* update comments

* fix ut

* fix gpu ut & revise step_freq in ItePruneAlgorithm

* update readme

* revise ItePruneAlgorithm

* fix docs

* fix dynamic_conv attr

* fix ci

Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com>
Co-authored-by: jacky <jacky@xx.com>

* [Fix] Fix optional requirements (#357)

* fix optional requirements

* fix dcff ut

* fix import with get_placeholder

* supplement the previous commit

* [Fix] Fix configs of wrn models and ofd. (#361)

* 1. Revise the configs of wrn22, wrn24, and wrn40. 2. Revise the data_preprocessor of ofd_backbone_resnet50_resnet18_8xb16_cifar10

* 1. Add README for vanilla-wrn.

* 1. Revise README of wrn

Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn>

* [Fix] Fix bug in mmrazor visualization: mismatched argument between definition and use. (#356)

fix bug in mmrazor visualization: mismatched argument between definition and use.

Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com>

* fix bug in benchmark_test (#364)

fix bug in configs

Co-authored-by: Your Name <you@example.com>

* [FIX] Fix wrn configs (#368)

* fix wrn configs

* fix wrn configs

* update online wrn model weight

* [Fix] fix bug on pkd config. Wrong import filename. (#373)

* [CI] Update ci to torch1.13 (#380)

update ci to torch1.13

* [Feature] Add BigNAS algorithm (#219)

* add calibrate-bn-statistics

* add test calibrate-bn-statistics

* fix mixins

* fix mixins

* fix mixin tests

* remove slimmable channel mutable and refactor dynamic op

* refact dynamic batch norm

* add progressive dynamic conv2d

* add center crop dynamic conv2d

* refactor dynamic directory

* refactor dynamic sequential

* rename length to depth in dynamic sequential

* add test for derived mutable

* refactor dynamic op

* refactor api of dynamic op

* add derive mutable mixin

* add bignas algorithm

* refactor bignas structure

* add input resizer

* add input resizer to bignas

* move input resizer from algorithm into classifier

* remove compnents

* add attentive mobilenet

* delete json file

* nearly align inference accuracy with gml (within 0.2)

* move mutate separated in bignas mobilenet backbone

* add zero_init_residual

* add set_dropout

* set dropout in bignas algorithm

* fix registry

* add subnet yaml and nearly align inference accuracy with gml

* add rsb config for bignas

* remove base in config

* add gml bignas config

* convert to iter based

* bignas forward and backward fly

* fix merge conflict

* fix dynamicseq bug

* fix bug and refactor bignas

* arrange configs of bignas

* fix typo

* refactor attentive_mobilenet

* fix channel mismatch due to registration of DerivedMutable

* update bignas & fix se channel mismatch

* add AutoAugmentV2 & remove unnecessary configs

* fix lint

* recover channel assertion in channel unit

* fix a group bug

* fix comments

* add docstring

* add norm in dynamic_embed

* fix search loop & other minor changes

* fix se expansion

* minor change

* add ut for bignas & attentive_mobilenet

* fix ut

* update bignas readme

* rm unnecessary ut & supplement get_placeholder

* fix lint

* fix ut

* add subnet deployment in downstream tasks.

* minor change

* update ofa backbone

* minor fix

* Continued improvements of searchable backbone

* minor change

* drop ratio in backbone

* fix comments

* fix ci test

* fix test

* add dynamic shortcut UT

* modify strategy to fit bignas

* fix test

* fix bug in neck

* fix error

* fix error

* fix yaml

* save subnet ckpt

* merge autoslim_val/test_loop into subnet_val_loop

* move calibrate_bn_mixin to utils

* fix bugs and add docstring

* clean code

* fix register bug

* clean code

* update

Co-authored-by: wangshiguang <wangshiguang@sensetime.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: aptsunny <aptsunny@tongji.edu.cn>
Co-authored-by: sunyue1 <sunyue1@sensetime.com>
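
The calibrate-bn-statistics step at the top of this entry is needed because BN running stats inherited from the supernet are stale for any fixed subnet. A sketch under standard PyTorch assumptions; `loader` is a hypothetical iterable of input batches:

```python
import torch


@torch.no_grad()
def calibrate_bn_statistics(model, loader, num_batches: int = 16):
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # accumulate a plain running average
    model.train()  # BN updates running stats only in train mode
    for i, inputs in enumerate(loader):
        if i >= num_batches:
            break
        model(inputs)
```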

* [Bug] Fix ckpt (#372)

fix ckpt

* [Feature] Add tools to convert distill ckpt to student-only ckpt. (#381)

* [Feature] Add tools to convert distill ckpt to student-only ckpt.

* fix bug.

* add --model-only to only save model.

* Make changes according to PR review.
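
A hedged sketch of such a conversion; the `architecture.` prefix is an assumption about how the distillation wrapper nests the student and may differ from the actual tool in #381:

```python
import torch


def convert_to_student(ckpt_path: str, out_path: str,
                       prefix: str = 'architecture.') -> None:
    ckpt = torch.load(ckpt_path, map_location='cpu')
    state = ckpt.get('state_dict', ckpt)
    # Keep only keys under the assumed student prefix; teacher weights
    # and distiller buffers are dropped.
    student = {k[len(prefix):]: v for k, v in state.items()
               if k.startswith(prefix)}
    torch.save({'state_dict': student}, out_path)  # model-only output
```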

* Enhance the Abilities of the Tracer for Pruning. (#371)

* tmp

* add new mmdet models

* add docstring

* pass test and pre-commit

* rm razor tracer

* update fx tracer, now it can automatically wrap methods and functions.

* update tracer passed models

* add warning for torch <1.12.0

fix bug for python3.6

update placeholder to support placeholder.XXX

* fix bug

* update docs

* fix lint

* fix parse_cfg in configs

* restore mutablechannel

* test ite prune algorithm when using dist

* add get_model_from_path to MMModelLibrary

* add mm models to DefaultModelLibrary

* add uts

* fix bug

* fix bug

* add uts

* add uts

* add uts

* add uts

* fix bug

* restore ite_prune_algorithm

* update doc

* PruneTracer -> ChannelAnalyzer

* prune_tracer -> channel_analyzer

* add test for fxtracer

* fix bug

* fix bug

* PruneTracer -> ChannelAnalyzer

refine

* CustomFxTracer -> MMFxTracer

* fix bug when test with torch<1.12

* update print log

* fix lint

* rm unuseful code

Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: jacky <jacky@xx.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: liukai <your_email@abc.example>

* fix bug in placeholder (#395)

* fix bug in placeholder

* remove redundant comment

Co-authored-by: liukai <your_email@abc.example>

* Add get_prune_config and a demo config_pruning (#389)

* update tools and test

* add demo

* disable test doc

* add switch for test tools and test_doc

* fix bug

* update doc

* update tools name

* mv get_channel_units

Co-authored-by: liukai <your_email@abc.example>

* [Improvement] Adapt OFA series with SearchableMobileNetV3 (#385)

* fix mutable bug in AttentiveMobileNetV3

* remove unnecessary code

* update ATTENTIVE_SUBNET_A0-A6.yaml with optimized names

* unify the sampling usage in sandwich_rule-based NAS

* use alias to export subnet

* update OFA configs

* fix attr bug

* fix comments

* update convert_supernet2subnet.py

* correct the way to dump DerivedMutable

* fix convert index bug

* update OFA configs & models

* fix dynamic2static

* generalize convert_ofa_ckpt.py

* update input_resizer

* update README.md

* fix ut

* update export_fix_subnet

* update _dynamic_to_static

* update fix_subnet UT & minor fix bugs

* fix ut

* add new autoaug compared to attentivenas

* clean

* fix act

* fix act_cfg

* update fix_subnet

* fix lint

* add docstring

Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: aptsunny <aptsunny@tongji.edu.cn>

* [Fix] DCFF Deploy Revision (#383)

* dcff deploy revision

* tempsave

* update fix_subnet

* update mutator load

* export/load_fix_subnet revision for mutator

* update fix_subnet with dev-1.x

* update comments

* update docs

* update registry

* [Fix] Fix commands in README to adapt branch 1.x (#400)

* update commands in README for 1.x

* fix commands

Co-authored-by: gaoyang07 <1546308416@qq.com>

* Set requires_grad to False if the teacher is not trainable (#398)
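
The change amounts to something like the following sketch:

```python
import torch.nn as nn


def freeze_teacher(teacher: nn.Module) -> None:
    """Disable gradient bookkeeping for a non-trainable teacher."""
    teacher.eval()
    for param in teacher.parameters():
        param.requires_grad = False
```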

* add choice and mask of units to checkpoint (#397)

* add choice and mask of units to checkpoint

* update

* fix bug

* remove device operation

* fix bug

* fix circle ci error

* fix error in numpy for circle ci

* fix bug in requirements

* restore

* add a note

* a new solution

* save mutable_channel.mask as float for dist training

* refine

* mv meta file test

Co-authored-by: liukai <your_email@abc.example>
Co-authored-by: jacky <jacky@xx.com>

* [Bug] Fix FPN teacher distill (#388)

fix fpn distill

* [CodeCamp #122] Support KD algorithm MGD for detection. (#377)

* [Feature] Support KD algorithm MGD for detection.

* use connector to beautify MGD.

* fix typo, add unittest.

* fix mgd loss unittest.

* fix mgd connector unittest.

* add model pth and log file.

* add mAP.

* update l1 config (#405)

* add l1 config

* update l1 config

Co-authored-by: jacky <jacky@xx.com>

* [Feature] Add greedy search for AutoSlim (#336)

* WIP: add greedysearch

* fix greedy search and add bn_training_mode to autoslim

* fix cfg files

* fix autoslim configs

* fix bugs when converting dynamic bn to static bn

* change to test loop

* refactor greedy search

* rebase and fix greedysearch

* fix lint

* fix and delete useless codes

* fix pytest

* fix pytest and add bn_training_mode

* fix lint

* add reference to AutoSlimGreedySearchLoop's docstring

* sort candidate_choices

* fix save subnet

* delete useless codes in channel container

* rename file: greedy_search_loop -> autoslim_greedy_search_loop
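
In outline, the greedy loop works roughly as below; `evaluate` and `flops` stand in for the real loop's accuracy and resource estimators:

```python
def greedy_search(units, choices, evaluate, flops, budget):
    """Shrink the least-damaging unit one step at a time (a sketch)."""
    current = {u: max(choices[u]) for u in units}  # start from widest
    while flops(current) > budget:
        best_unit, best_score = None, float('-inf')
        for u in units:
            smaller = [c for c in choices[u] if c < current[u]]
            if not smaller:
                continue
            trial = dict(current, **{u: max(smaller)})  # one step down
            score = evaluate(trial)
            if score > best_score:
                best_unit, best_score = u, score
        if best_unit is None:
            break  # nothing left to shrink
        current[best_unit] = max(
            c for c in choices[best_unit] if c < current[best_unit])
    return current
```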

* [Fix] Fix metafile (#422)

* fix ckpt path in metafile and readme

* fix darts file path

* fix docstring in ConfigurableDistiller

* fix darts

* fix error

* add darts of mmrazor version

* delete py36

Co-authored-by: liukai <your_email@abc.example>

* update bignas cfg (#412)

* check attentivenas training

* update ckpt link

* update supernet log

Co-authored-by: aptsunny <aptsunny@tongji.edu.cn>

* Bump version to 1.0.0rc2 (#423)

bump version to 1.0.0rc2

Co-authored-by: liukai <your_email@abc.example>

* fix lint

* fix ci

* add tmp docstring to pass ci

* add tmp docstring to pass ci

* fix ci

* add get_placeholder for quant

* add skip for unittest

* fix package placeholder bug

* add version judgement in __init__

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

* update prev commit

Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com>
Co-authored-by: liukai <liukai@pjlab.org.cn>
Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com>
Co-authored-by: kitecats <90194592+kitecats@users.noreply.github.com>
Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com>
Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com>
Co-authored-by: jacky <jacky@xx.com>
Co-authored-by: pppppM <67539920+pppppM@users.noreply.github.com>
Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn>
Co-authored-by: zengyi <31244134+spynccat@users.noreply.github.com>
Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com>
Co-authored-by: zhongyu zhang <43191879+wilxy@users.noreply.github.com>
Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn>
Co-authored-by: Xianpan Zhou <32625100+TinyTigerPan@users.noreply.github.com>
Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: P.Huang <37200926+FreakieHuang@users.noreply.github.com>
Co-authored-by: qiufeng <44188071+wutongshenqiu@users.noreply.github.com>
Co-authored-by: wangshiguang <wangshiguang@sensetime.com>
Co-authored-by: gaoyang07 <1546308416@qq.com>
Co-authored-by: sunyue1 <sunyue1@sensetime.com>
Co-authored-by: liukai <your_email@abc.example>
Co-authored-by: Ming-Hsuan-Tu <qrnnis2623891@gmail.com>
Co-authored-by: Yivona <120088893+yivona08@users.noreply.github.com>
Co-authored-by: Yue Sun <aptsunny@alumni.tongji.edu.cn>
26 people committed Jan 12, 2023
1 parent 0480642 commit 985a611
Showing 347 changed files with 21,492 additions and 2,704 deletions.
5 changes: 3 additions & 2 deletions .circleci/test.yml
@@ -70,6 +70,7 @@ jobs:
pip install git+https://github.com/open-mmlab/mmclassification.git@dev-1.x
pip install git+https://github.com/open-mmlab/mmdetection.git@dev-3.x
pip install git+https://github.com/open-mmlab/mmsegmentation.git@dev-1.x
python -m pip install git+ssh://git@github.com/open-mmlab/mmpose.git@dev-1.x
pip install -r requirements.txt
- run:
name: Build and install
@@ -154,7 +155,7 @@ workflows:
name: minimum_version_cpu
torch: 1.6.0
torchvision: 0.7.0
python: 3.6.9 # The lowest python 3.6.x version available on CircleCI images
python: 3.7.9
requires:
- lint
- build_cpu:
@@ -163,7 +164,7 @@
torchvision: 0.13.1
python: 3.9.0
requires:
- minimum_version_cpu
- lint
- hold:
type: approval
requires:
53 changes: 24 additions & 29 deletions .dev_scripts/benchmark_test.py
@@ -87,39 +87,34 @@ def replace_to_ceph(cfg):
's3://openmmlab/datasets/classification/imagenet',
}))

def _process_pipeline(dataset, name):

def replace_img(pipeline):
if pipeline['type'] == 'LoadImageFromFile':
pipeline['file_client_args'] = file_client_args

def replace_ann(pipeline):
if pipeline['type'] == 'LoadAnnotations' or pipeline[
'type'] == 'LoadPanopticAnnotations':
pipeline['file_client_args'] = file_client_args

def _process_dataset(dataset):

def replace_pipline(pipelines):
for pipeline in pipelines:
if pipeline['type'] in [
'LoadImageFromFile',
'LoadAnnotations',
'LoadPanopticAnnotations',
]:
pipeline['file_client_args'] = file_client_args

if dataset['type'] in ['CityscapesDataset']:
dataset['file_client_args'] = file_client_args
if 'pipeline' in dataset:
replace_img(dataset.pipeline[0])
replace_ann(dataset.pipeline[1])
if 'dataset' in dataset:
# dataset wrapper
replace_img(dataset.dataset.pipeline[0])
replace_ann(dataset.dataset.pipeline[1])
else:
# dataset wrapper
replace_img(dataset.dataset.pipeline[0])
replace_ann(dataset.dataset.pipeline[1])

def _process_evaluator(evaluator, name):
replace_pipline(dataset['pipeline'])
if 'dataset' in dataset:
_process_dataset(dataset['dataset'])

def _process_evaluator(evaluator):
if evaluator['type'] == 'CocoPanopticMetric':
evaluator['file_client_args'] = file_client_args

# half ceph
_process_pipeline(cfg.train_dataloader.dataset, cfg.filename)
_process_pipeline(cfg.val_dataloader.dataset, cfg.filename)
_process_pipeline(cfg.test_dataloader.dataset, cfg.filename)
_process_evaluator(cfg.val_evaluator, cfg.filename)
_process_evaluator(cfg.test_evaluator, cfg.filename)
_process_dataset(cfg.train_dataloader.dataset, )
_process_dataset(cfg.val_dataloader.dataset)
_process_dataset(cfg.test_dataloader.dataset)
_process_evaluator(cfg.val_evaluator)
_process_evaluator(cfg.test_evaluator)


def create_test_job_batch(commands, model_info, args, port):
@@ -169,7 +164,7 @@ def create_test_job_batch(commands, model_info, args, port):

launcher = 'none' if args.local else 'slurm'
runner = 'python' if args.local else 'srun python'
master_port = f'NASTER_PORT={port}'
master_port = f'MASTER_PORT={port}'

script_name = osp.join('tools', 'test.py')
job_script = (f'#!/bin/bash\n'
7 changes: 6 additions & 1 deletion tests/test_metafiles.py → .dev_scripts/meta_files_test.py
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import os
import unittest
from pathlib import Path

import requests
@@ -8,7 +9,7 @@
MMRAZOR_ROOT = Path(__file__).absolute().parents[1]


class TestMetafiles:
class TestMetafiles(unittest.TestCase):

def get_metafiles(self, code_path):
"""
@@ -51,3 +52,7 @@ def test_metafiles(self):
assert model['Name'] == correct_name, \
f'name error in {metafile}, correct name should ' \
f'be {correct_name}'


if __name__ == '__main__':
unittest.main()
47 changes: 46 additions & 1 deletion .github/workflows/build.yml
@@ -29,15 +29,60 @@ jobs:
strategy:
matrix:
python-version: [3.7]
torch: [1.12.0]
torch: [1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0]
include:
- torch: 1.6.0
torch_version: 1.6
torchvision: 0.7.0
- torch: 1.7.0
torch_version: 1.7
torchvision: 0.8.1
- torch: 1.7.0
torch_version: 1.7
torchvision: 0.8.1
python-version: 3.8
- torch: 1.8.0
torch_version: 1.8
torchvision: 0.9.0
- torch: 1.8.0
torch_version: 1.8
torchvision: 0.9.0
python-version: 3.8
- torch: 1.9.0
torch_version: 1.9
torchvision: 0.10.0
- torch: 1.9.0
torch_version: 1.9
torchvision: 0.10.0
python-version: 3.8
- torch: 1.10.0
torch_version: 1.10
torchvision: 0.11.0
- torch: 1.10.0
torch_version: 1.10
torchvision: 0.11.0
python-version: 3.8
- torch: 1.11.0
torch_version: 1.11
torchvision: 0.12.0
- torch: 1.11.0
torch_version: 1.11
torchvision: 0.12.0
python-version: 3.8
- torch: 1.12.0
torch_version: 1.12
torchvision: 0.13.0
- torch: 1.12.0
torch_version: 1.12
torchvision: 0.13.0
python-version: 3.8
- torch: 1.13.0
torch_version: 1.13
torchvision: 0.14.0
- torch: 1.13.0
torch_version: 1.13
torchvision: 0.14.0
python-version: 3.8

steps:
- uses: actions/checkout@v2
49 changes: 49 additions & 0 deletions configs/_base_/nas_backbones/attentive_mobilenetv3_supernet.py
@@ -0,0 +1,49 @@
# search space
arch_setting = dict(
kernel_size=[ # [min_kernel_size, max_kernel_size, step]
[3, 5, 2],
[3, 5, 2],
[3, 5, 2],
[3, 5, 2],
[3, 5, 2],
[3, 5, 2],
[3, 5, 2],
],
num_blocks=[ # [min_num_blocks, max_num_blocks, step]
[1, 2, 1],
[3, 5, 1],
[3, 6, 1],
[3, 6, 1],
[3, 8, 1],
[3, 8, 1],
[1, 2, 1],
],
expand_ratio=[ # [min_expand_ratio, max_expand_ratio, step]
[1, 1, 1],
[4, 6, 1],
[4, 6, 1],
[4, 6, 1],
[4, 6, 1],
[6, 6, 1],
[6, 6, 1],
[6, 6, 1], # last layer
],
num_out_channels=[ # [min_channel, max_channel, step]
[16, 24, 8], # first layer
[16, 24, 8],
[24, 32, 8],
[32, 40, 8],
[64, 72, 8],
[112, 128, 8],
[192, 216, 8],
[216, 224, 8],
[1792, 1984, 1984 - 1792], # last layer
])

input_resizer_cfg = dict(
input_sizes=[[192, 192], [224, 224], [256, 256], [288, 288]])

nas_backbone = dict(
type='AttentiveMobileNetV3',
arch_setting=arch_setting,
norm_cfg=dict(type='DynamicBatchNorm2d', momentum=0.0))
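
Presumably each `[min, max, step]` triple in these search spaces expands into a discrete choice list, e.g. (the backbone's actual parsing may differ):

```python
def expand(min_v: int, max_v: int, step: int) -> list:
    return list(range(min_v, max_v + 1, step))


assert expand(3, 5, 2) == [3, 5]      # a kernel_size triple
assert expand(16, 24, 8) == [16, 24]  # a num_out_channels triple
```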
57 changes: 57 additions & 0 deletions configs/_base_/nas_backbones/ofa_mobilenetv3_supernet.py
@@ -0,0 +1,57 @@
# search space
arch_setting = dict(
kernel_size=[ # [min_kernel_size, max_kernel_size, step]
[3, 3, 1],
[3, 7, 2],
[3, 7, 2],
[3, 7, 2],
[3, 7, 2],
[3, 7, 2],
],
num_blocks=[ # [min_num_blocks, max_num_blocks, step]
[1, 1, 1],
[2, 4, 1],
[2, 4, 1],
[2, 4, 1],
[2, 4, 1],
[2, 4, 1],
],
expand_ratio=[ # [min_expand_ratio, max_expand_ratio, step]
[1, 1, 1],
[3, 6, 1],
[3, 6, 1],
[3, 6, 1],
[3, 6, 1],
[3, 6, 1],
[6, 6, 1], # last layer
],
# [16, 16, 24, 40, 80, 112, 160, 960, 1280]
num_out_channels=[ # [min_channel, max_channel, step]
[16, 16, 8], # first layer
[16, 16, 8],
[16, 24, 8],
[24, 40, 8],
[40, 80, 8],
[80, 112, 8],
[112, 160, 8],
[1024, 1280, 1280 - 1024], # last layer
])

input_resizer_cfg = dict(
input_sizes=[[128, 128], [140, 140], [144, 144], [152, 152], [192, 192],
[204, 204], [224, 224], [256, 256]])

nas_backbone = dict(
type='mmrazor.AttentiveMobileNetV3',
arch_setting=arch_setting,
out_indices=(6, ),
stride_list=[1, 2, 2, 2, 1, 2],
with_se_list=[False, False, True, False, True, True],
act_cfg_list=[
'HSwish', 'ReLU', 'ReLU', 'ReLU', 'HSwish', 'HSwish', 'HSwish',
'HSwish', 'HSwish'
],
conv_cfg=dict(type='OFAConv2d'),
norm_cfg=dict(type='mmrazor.DynamicBatchNorm2d', momentum=0.1),
fine_grained_mode=True,
with_attentive_shortcut=False)
4 changes: 2 additions & 2 deletions configs/_base_/nas_backbones/spos_mobilenet_supernet.py
@@ -46,7 +46,7 @@

arch_setting = [
# Parameters to build layers. 3 parameters are needed to construct a
# layer, from left to right: channel, num_blocks, mutable_cfg.
# layer, from left to right: channel, num_blocks, stride, mutable_cfg.
[24, 1, 1, _FIRST_MUTABLE],
[32, 4, 2, _STAGE_MUTABLE],
[56, 4, 2, _STAGE_MUTABLE],
@@ -58,7 +58,7 @@

nas_backbone = dict(
_scope_='mmrazor',
type='SearchableMobileNet',
type='SearchableMobileNetV2',
first_channels=40,
last_channels=1728,
widen_factor=1.0,
31 changes: 17 additions & 14 deletions configs/_base_/settings/imagenet_bs1024_spos.py
@@ -9,20 +9,23 @@
)

train_pipeline = [
dict(type='mmcls.LoadImageFromFile'),
dict(type='mmcls.RandomResizedCrop', scale=224),
dict(_scope_='mmcls', type='LoadImageFromFile'),
dict(_scope_='mmcls', type='RandomResizedCrop', scale=224),
dict(
type='mmcls.ColorJitter', brightness=0.4, contrast=0.4,
_scope_='mmcls',
type='ColorJitter',
brightness=0.4,
contrast=0.4,
saturation=0.4),
dict(type='mmcls.RandomFlip', prob=0.5, direction='horizontal'),
dict(type='mmcls.PackClsInputs'),
dict(_scope_='mmcls', type='RandomFlip', prob=0.5, direction='horizontal'),
dict(_scope_='mmcls', type='PackClsInputs'),
]

test_pipeline = [
dict(type='mmcls.LoadImageFromFile'),
dict(type='mmcls.ResizeEdge', scale=256, edge='short'),
dict(type='mmcls.CenterCrop', crop_size=224),
dict(type='mmcls.PackClsInputs'),
dict(_scope_='mmcls', type='LoadImageFromFile'),
dict(_scope_='mmcls', type='ResizeEdge', scale=256, edge='short'),
dict(_scope_='mmcls', type='CenterCrop', crop_size=224),
dict(_scope_='mmcls', type='PackClsInputs'),
]

train_dataloader = dict(
@@ -34,7 +37,7 @@
ann_file='meta/train.txt',
data_prefix='train',
pipeline=train_pipeline),
sampler=dict(type='mmcls.DefaultSampler', shuffle=True),
sampler=dict(type='DefaultSampler', shuffle=True),
persistent_workers=True,
)

@@ -47,10 +50,10 @@
ann_file='meta/val.txt',
data_prefix='val',
pipeline=test_pipeline),
sampler=dict(type='mmcls.DefaultSampler', shuffle=False),
sampler=dict(type='DefaultSampler', shuffle=False),
persistent_workers=True,
)
val_evaluator = dict(type='mmcls.Accuracy', topk=(1, 5))
val_evaluator = dict(type='Accuracy', topk=(1, 5), _scope_='mmcls')

# If you want standard test, please manually configure the test dataset
test_dataloader = val_dataloader
@@ -61,13 +64,13 @@
bias_decay_mult=0.0, norm_decay_mult=0.0, dwconv_decay_mult=0.0)

optim_wrapper = dict(
optimizer=dict(type='mmcls.SGD', lr=0.5, momentum=0.9, weight_decay=4e-5),
optimizer=dict(type='SGD', lr=0.5, momentum=0.9, weight_decay=4e-5),
paramwise_cfg=paramwise_cfg,
clip_grad=None)

# leanring policy
param_scheduler = dict(
type='mmcls.PolyLR',
type='PolyLR',
power=1.0,
eta_min=0.0,
by_epoch=True,