EfficientTrain++ (TPAMI 2024 & ICCV 2023)

This repo releases the code and pre-trained models of EfficientTrain++, an off-the-shelf, easy-to-implement algorithm for the efficient training of foundation visual backbones.

[TPAMI 2024] EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training
Yulin Wang, Yang Yue, Rui Lu, Yizeng Han, Shiji Song, and Gao Huang
Tsinghua University, BAAI
[arXiv]

[ICCV 2023] EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones
Yulin Wang, Yang Yue, Rui Lu, Tianjiao Liu, Zhao Zhong, Shiji Song, and Gao Huang
Tsinghua University, Huawei, BAAI
[arXiv]

  • Update on 2024.05.14: I'm highly interested in extending EfficientTrain++ to CLIP-style models, multi-modal large language models, generative models (e.g., diffusion-based or token-based), and advanced visual self-supervised learning methods. I'm always open to discussions and potential collaborations. If you are interested, please kindly send an e-mail to me (wang-yl19@mails.tsinghua.edu.cn).

Overview

We present a novel curriculum learning approach for the efficient training of foundation visual backbones. Our algorithm, EfficientTrain++, is simple, general, yet surprisingly effective. As an off-the-shelf approach, it reduces the training time of various popular models (e.g., ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer) by 1.5−3.0× on ImageNet-1K/22K without sacrificing accuracy. It also demonstrates efficacy in self-supervised learning (e.g., MAE).
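As described in the papers, the curriculum "eases" training by initially exposing the model only to the lower-frequency components of each input image and gradually introducing higher frequencies. A minimal NumPy sketch of such a low-frequency cropping operation (illustrative only; the function name and the energy-rescaling choice are ours, not this repo's implementation):

```python
import numpy as np

def low_frequency_crop(img, out_size):
    """Keep only the central low-frequency band of an image's 2-D
    Fourier spectrum, yielding a smaller image that preserves the
    coarse structure. `img` is an (H, W) array with H, W >= out_size."""
    h, w = img.shape
    # Move the DC component to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    top = (h - out_size) // 2
    left = (w - out_size) // 2
    # Crop the central (low-frequency) band of the spectrum.
    cropped = spectrum[top:top + out_size, left:left + out_size]
    # Back to image space; rescale so pixel magnitudes are preserved.
    small = np.fft.ifft2(np.fft.ifftshift(cropped)).real
    return small * (out_size * out_size) / (h * w)
```

Training early epochs on, e.g., `low_frequency_crop(img, 160)` instead of the full-resolution input reduces per-step cost while, per the papers' analysis, the model is mainly learning these lower frequencies at that stage anyway.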

Highlights of our work

  • 1.5−3.0× lossless training or pre-training speedup on ImageNet-1K and ImageNet-22K. The practical wall-time speedup matches the theoretical one, and neither upstream nor downstream performance is affected.
  • Effective for diverse visual backbones, including ConvNets, isotropic/multi-stage ViTs, and ConvNet-ViT hybrid models (e.g., ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer).
  • Dramatically improves the accuracy of relatively small models (e.g., on ImageNet-1K, DeiT-S: 80.3% -> 81.3%, DeiT-T: 72.5% -> 74.4%).
  • Superior performance across varying training budgets (e.g., 0-300 training epochs or more).
  • Applicable to both supervised learning and self-supervised learning (e.g., MAE).
  • Includes optional techniques tailored to machines with limited CPU/memory (e.g., those that cannot sustain high data pre-processing throughput).
  • Includes optional techniques tailored to large-scale parallel training (e.g., on 16-64 GPUs or more).
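In practice, such a curriculum can be driven by a simple mapping from training progress to input resolution. The thresholds and sizes below are placeholders for illustration, not the computation-constrained schedule the paper actually derives:

```python
def curriculum_resolution(epoch, total_epochs,
                          schedule=((0.0, 160), (0.6, 192), (0.8, 224))):
    """Return the input resolution for the current training progress.
    `schedule` maps a progress fraction to the resolution used from
    that point on (placeholder values, not the paper's schedule)."""
    progress = epoch / total_epochs
    resolution = schedule[0][1]
    for start, size in schedule:
        if progress >= start:
            resolution = size
    return resolution
```

For example, `curriculum_resolution(0, 300)` returns 160 and `curriculum_resolution(290, 300)` returns 224; the data loader would then resize (or low-frequency-crop) each batch to the returned size.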

Catalog

  • ImageNet-1K Training Code
  • ImageNet-1K Pre-trained Models
  • ImageNet-22K -> ImageNet-1K Fine-tuning Code
  • ImageNet-22K Pre-trained Models
  • ImageNet-22K -> ImageNet-1K Fine-tuned Models

Installation

We support PyTorch>=2.0.0 and torchvision>=0.15.1. Please install them following the official instructions.

Clone this repo and install the required packages:

git clone https://github.com/LeapLabTHU/EfficientTrain
cd EfficientTrain
pip install timm==0.4.12 tensorboardX six

The instructions for preparing ImageNet-1K/22K datasets can be found here.

Training

See TRAINING.md for the training instructions.

Pre-trained models & evaluation & fine-tuning

See EVAL.md for the pre-trained models and the instructions for evaluating or fine-tuning them.

Results

Supervised learning on ImageNet-1K

ImageNet-22K pre-training

Supervised learning on ImageNet-1K (varying training budgets)

Object detection and instance segmentation on COCO

Semantic segmentation on ADE20K

Self-supervised learning results on top of MAE

TODO

This repo is still being updated. If you need anything, whether or not it is listed below, please send an e-mail to me (wang-yl19@mails.tsinghua.edu.cn).

  • A detailed tutorial on how to apply this repo to train (customized) models on custom datasets.
  • ImageNet-22K Training Code
  • ImageNet-1K Self-supervised Learning Code (EfficientTrain + MAE)
  • EfficientTrain + MAE Pre-trained Models

Acknowledgments

This repo is mainly developed on top of ConvNeXt; we sincerely thank the authors for their efficient and neat codebase. This repo is also built using DeiT and timm.

Citation

If you find this work valuable or use our code in your own research, please consider citing us:

@article{wang2024EfficientTrain_pp,
        title = {EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training},
       author = {Wang, Yulin and Yue, Yang and Lu, Rui and Han, Yizeng and Song, Shiji and Huang, Gao},
      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
         year = {2024},
          doi = {10.1109/TPAMI.2024.3401036}
}
@inproceedings{wang2023EfficientTrain,
        title = {EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones},
       author = {Wang, Yulin and Yue, Yang and Lu, Rui and Liu, Tianjiao and Zhong, Zhao and Song, Shiji and Huang, Gao},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
         year = {2023}
}
