# AtomNAS: Fine-Grained End-to-End Neural Architecture Search [[PDF]](https://openreview.net/forum?id=BylQSxHFwr)

## Updates

- **[Feb 2020]** Simplified the validation process and released pretrained models. This version is not compatible with the previous one.

## Overview

This is the codebase (including the search stage) for the ICLR 2020 paper [AtomNAS: Fine-Grained End-to-End Neural Architecture Search](https://openreview.net/forum?id=BylQSxHFwr).

## Setup

### Distributed Training

Set the following environment variables:

- `$DATA_ROOT`: Path to the data root
- `$METIS_WORKER_0_HOST`: IP address of worker 0
- `$METIS_WORKER_0_PORT`: Port used to initialize the distributed environment
- `$METIS_TASK_INDEX`: Index of the current task
- `$ARNOLD_WORKER_NUM`: Number of workers
- `$ARNOLD_WORKER_GPU`: Number of GPUs (NOTE: must exactly match the number of locally visible GPUs, e.g. as set by `CUDA_VISIBLE_DEVICES`)
- `$ARNOLD_OUTPUT`: Output directory
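
A minimal sketch of what this might look like for a hypothetical two-worker job with 8 GPUs per worker (all values below are placeholders, not recommendations):

```bash
# Hypothetical values for a two-worker job; adjust for your cluster.
export DATA_ROOT=/path/to/data          # contains imagenet/ and imagenet_lmdb/
export METIS_WORKER_0_HOST=10.0.0.1     # IP address of worker 0
export METIS_WORKER_0_PORT=29500        # a free port on worker 0
export METIS_TASK_INDEX=0               # 0 on worker 0, 1 on worker 1, ...
export ARNOLD_WORKER_NUM=2              # total number of workers
export ARNOLD_WORKER_GPU=8              # must match the locally visible GPUs
export ARNOLD_OUTPUT=/path/to/output    # checkpoints and logs are written here
```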

### Non-Distributed Training (Not Recommended)

Set the following environment variables:

- `$DATA_ROOT`: Path to the data root
- `$ARNOLD_WORKER_GPU`: Number of GPUs (NOTE: must exactly match the number of locally visible GPUs, e.g. as set by `CUDA_VISIBLE_DEVICES`)
- `$ARNOLD_OUTPUT`: Output directory
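
A minimal sketch for a hypothetical single machine with 4 GPUs (paths are placeholders):

```bash
# Hypothetical single-machine setup; adjust paths and GPU count to your machine.
export CUDA_VISIBLE_DEVICES=0,1,2,3   # make exactly 4 GPUs visible
export DATA_ROOT=/path/to/data
export ARNOLD_WORKER_GPU=4            # must match the GPUs made visible above
export ARNOLD_OUTPUT=/path/to/output
```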

## Reproducing AtomNAS Results

For Table 1:

- AtomNAS-A: `bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml`
- AtomNAS-B: `bash scripts/run.sh apps/slimming/shrink/atomnas_b.yml`
- AtomNAS-C: `bash scripts/run.sh apps/slimming/shrink/atomnas_c.yml`

If everything runs correctly, you should obtain results similar to those reported in the paper.

Pretrained models can be downloaded from OneDrive.

## Testing

For AtomNAS:

```bash
FILE=$(realpath {{log_dir_path}}) checkpoint=ckpt bash scripts/run.sh apps/eval/eval_shrink.yml
```

For AtomNAS+:

```bash
TRAIN_CONFIG=$(realpath {{train_config_path}}) ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_se.yml --pretrained {{ckpt_path}}
```
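
For example, evaluating a finished AtomNAS run might look like this (the log directory below is a hypothetical placeholder for your own run output):

```bash
# Hypothetical log directory from a finished search/training run.
FILE=$(realpath ./output/atomnas_c) checkpoint=ckpt \
    bash scripts/run.sh apps/eval/eval_shrink.yml
```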

## Related Info

1. Requirements
   - See `requirements.txt`.
2. Environment
   - The code is developed with Python 3 and requires NVIDIA GPUs. It was developed and tested on 4 servers with 32 NVIDIA V100 GPUs; other platforms and GPU models have not been fully tested.
3. Dataset
   - Prepare the ImageNet data following the PyTorch example.
   - Optional: generate an LMDB dataset with `utils/lmdb_dataset.py`. If you skip this step, change `dataset: imagenet1k_lmdb` to `dataset: imagenet1k` in the YAML config (the key can also be overridden from the command line; see the sketch after this list).
   - The directory structure of `$DATA_ROOT` should look like this:

     ```
     ${DATA_ROOT}
     ├── imagenet
     └── imagenet_lmdb
     ```
4. Miscellaneous
   - The codebase is a general ImageNet training framework built on PyTorch and driven by YAML configs, with several extensions under the `apps` directory:
     - YAML configs with additional features:
       - `${ENV}` substitution inside YAML configs.
       - `_include` for hierarchical configs.
       - `_default` key for overwriting.
       - `xxx.yyy.zzz` for partial overwriting.
     - `--{{opt}} {{new_val}}` for command-line overwriting.
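
As a rough sketch of how these features compose (the file name and all keys except `dataset` are hypothetical; see the configs under `apps/` for the exact semantics):

```yaml
# hypothetical_experiment.yml -- illustrative only; key names other than
# `dataset` are invented for this sketch.
_include: base.yml              # pull in a parent config (hierarchical configs)
_default:
  log_interval: 100             # overwrite a value from the included config
dataset: imagenet1k             # e.g. switch away from imagenet1k_lmdb
data_root: ${DATA_ROOT}         # environment variable substitution
```

Any such key can also be overridden on the command line, e.g. `bash scripts/run.sh apps/slimming/shrink/atomnas_c.yml --dataset imagenet1k`.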

## Acknowledgment

This repo is based on slimmable_networks and benefits from several other open-source projects. Thanks to the contributors of these repos!

## Citation

If you find this work or code helpful in your research, please cite:

```bibtex
@inproceedings{
    mei2020atomnas,
    title={Atom{NAS}: Fine-Grained End-to-End Neural Architecture Search},
    author={Jieru Mei and Yingwei Li and Xiaochen Lian and Xiaojie Jin and Linjie Yang and Alan Yuille and Jianchao Yang},
    booktitle={International Conference on Learning Representations},
    year={2020},
    url={https://openreview.net/forum?id=BylQSxHFwr}
}
```