

[1] Mingxing Tan and Quoc V. Le. MixNet: Mixed Depthwise Convolutional Kernels. BMVC 2019.

1. About MixNet

MixNets are a family of mobile-size image classification models equipped with MDConv, a new type of mixed depthwise convolution. They were developed with the AutoML MNAS framework, using a search space extended to include MDConv. MixNets achieve better accuracy and efficiency than previous mobile models; in particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under a typical mobile FLOPS (<600M) constraint.
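To make the MDConv idea concrete, here is a minimal NumPy sketch (not the repository's implementation): channels are split into groups, each group is convolved with a depthwise kernel of a different size, and the results are concatenated. All function names and the random weights are illustrative.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Naive depthwise conv with 'same' padding.
    x: (H, W, C); kernels: (k, k, C), one k x k filter per channel."""
    k = kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, C = x.shape
    out = np.zeros((H, W, C))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]          # (k, k, C) window
            out[i, j, :] = np.sum(patch * kernels, axis=(0, 1))
    return out

def mdconv(x, kernel_sizes=(3, 5, 7)):
    """Mixed depthwise conv: split channels into len(kernel_sizes) groups,
    run each group through a different kernel size, then concatenate."""
    C = x.shape[-1]
    n = len(kernel_sizes)
    sizes = [C // n] * n
    sizes[0] += C - sum(sizes)                       # absorb any remainder
    outs, start = [], 0
    for k, g in zip(kernel_sizes, sizes):
        part = x[..., start:start + g]
        kernels = np.random.randn(k, k, g)           # random weights, for illustration
        outs.append(depthwise_conv2d(part, kernels))
        start += g
    return np.concatenate(outs, axis=-1)

x = np.random.randn(8, 8, 24)
y = mdconv(x)
print(y.shape)  # (8, 8, 24): spatial size and channel count are preserved
```

Because each group is depthwise, the parameter cost of mixing kernel sizes stays close to a single depthwise convolution, which is what makes MDConv attractive at mobile FLOPS budgets.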

2. Using Pretrained Checkpoints

We provide pretrained checkpoints for MixNet-S, MixNet-M, and MixNet-L. A quick way to use these checkpoints is to run:

```shell
$ export MODEL=mixnet-s
$ wget ${MODEL}.tar.gz
$ tar zxf ${MODEL}.tar.gz
$ wget -O panda.jpg
$ wget
$ python --model_name=$MODEL --ckpt_dir=$MODEL --example_img=panda.jpg --labels_map_file=labels_map.txt
```

Please refer to the following Colab for more instructions on how to obtain and use those checkpoints.

  • mixnet_eval_example.ipynb: A Colab example that loads pretrained checkpoint files and uses the restored model to classify images.
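The eval script's --labels_map_file flag points at a file that maps class indices to human-readable ImageNet labels. A minimal sketch of that last step, assuming the labels map is a JSON dict keyed by class index (the toy scores and inline JSON below are illustrative, not real model output):

```python
import json

# Illustrative stand-in for labels_map.txt: JSON mapping class index -> label.
labels_json = '{"0": "tench", "1": "goldfish", "388": "giant panda"}'
label_map = {int(k): v for k, v in json.loads(labels_json).items()}

# Toy per-class scores standing in for the restored model's logits.
logits = {388: 9.1, 0: 1.2, 1: 0.3}
top1 = max(logits, key=logits.get)   # index of the highest-scoring class
print(label_map[top1])               # giant panda
```

The Colab performs the same lookup after running the restored checkpoint on the example image.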

3. Training and Evaluating MixNets

MixNets are trained with the same hyperparameters as MnasNet, except that model_name is set to mixnet-s, mixnet-m, or mixnet-l.

For more instructions, please refer to the MnasNet tutorial.
