Westlake-AI/AutoMix

We propose a novel automatic mixup (AutoMix) framework, where the mixup policy is parameterized and serves the ultimate classification goal directly. Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks, and solves them in a bi-level optimization framework. For generation, a learnable lightweight mixup generator, Mix Block, is designed to generate mixed samples by modeling patch-wise relationships under the direct supervision of the corresponding mixed labels. To prevent the degradation and instability of bi-level optimization, we further introduce a momentum pipeline to train AutoMix in an end-to-end manner. Extensive experiments on nine image benchmarks demonstrate the superiority of AutoMix over state-of-the-art methods in various classification scenarios and downstream tasks.
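For context, the baseline that AutoMix generalizes is hand-crafted mixup: two samples and their labels are convexly combined with a single mixing ratio λ. The sketch below (a minimal NumPy illustration, not the AutoMix implementation — AutoMix replaces this fixed, sample-agnostic interpolation with the learnable Mix Block described above, while keeping the mixed-label supervision) shows that baseline operation:

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, lam):
    """Hand-crafted mixup: convexly combine two samples and their labels.

    AutoMix learns the sample-mixing policy instead (via Mix Block),
    but the mixed label y_mix supervising the generator is the same.
    """
    x_mix = lam * x_i + (1.0 - lam) * x_j
    y_mix = lam * y_i + (1.0 - lam) * y_j
    return x_mix, y_mix

# Two toy "images" and one-hot labels (hypothetical shapes for illustration).
rng = np.random.default_rng(0)
x_i, x_j = rng.random((4, 4)), rng.random((4, 4))
y_i, y_j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_i, x_j, y_i, y_j, lam=0.7)
```

With λ = 0.7, the mixed label is [0.7, 0.3], i.e., soft supervision proportional to each sample's contribution.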

Catalog

We plan to update this timm implementation of AutoMix in the coming months. Please watch this repository for the latest release, or use our OpenMixup implementations.

  • Image Classification Code with OpenMixup [code]
  • CIFAR-10/100 and Tiny-ImageNet Training and Validation Code with timm [code]
  • ImageNet-1K Training and Validation Code [code]
  • Image Classification on Google Colab and Notebook Demo

Installation

Please check INSTALL.md for installation instructions.

Small-scale Image Classification

Please refer to OpenMixup implementations of CIFAR-100 and Tiny-ImageNet.

ImageNet Classification

1. Training and Validation

See TRAINING.md for ImageNet-1K training and validation instructions, or refer to our OpenMixup implementations. We have released pre-trained models on OpenMixup.

2. ImageNet-1K Trained Models

Please refer to mixup_benchmarks in OpenMixup implementations for results and models.


License

This project is released under the Apache 2.0 license.

Acknowledgement

Our implementation is mainly based on the following codebases. We sincerely thank the authors for their wonderful work.

  • pytorch-image-models: PyTorch image models, scripts, pretrained weights.
  • OpenMixup: CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.

Citation

If you find this repository helpful, please consider citing:

@InProceedings{liu2022automix,
      title={AutoMix: Unveiling the Power of Mixup for Stronger Classifiers},
      author={Zicheng Liu and Siyuan Li and Di Wu and Zhiyuan Chen and Lirong Wu and Jianzhu Guo and Stan Z. Li},
      booktitle={European Conference on Computer Vision},
      pages={441--458},
      year={2022},
}
