
DR-Tune

Paper: https://arxiv.org/abs/2308.12058

This repository is the official PyTorch implementation of DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration (ICCV 2023).


Usage

Environments

  • Python 3.9.7
  • PyTorch 1.13.1
  • torchvision 0.14.1
  • GPU: NVIDIA GeForce RTX 2080 Ti

Dataset preparation

The datasets used in Table 1 can be downloaded via their official links.

The datasets used in Table 2 can be downloaded from here; see the "vtab-1k" section.

Pretrained model preparation

The pretrained model checkpoints used in the paper can be found in the table below.

Please place the checkpoints in ./pretrained_models.
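For example, the directory can be prepared like this (a minimal sketch; the checkpoint file name in the comment is purely illustrative, so keep whatever name your downloaded file has):

```shell
# Create the directory the training scripts expect (path taken from this README)
mkdir -p ./pretrained_models

# After downloading a checkpoint from the table below, move it here, e.g.:
#   mv ~/Downloads/<checkpoint-file> ./pretrained_models/

# Verify the checkpoints are in place
ls ./pretrained_models
```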

| Backbone architecture | Pretraining strategy | URL |
| --- | --- | --- |
| ViT-B | Classification | Checkpoint |
| ViT-B | MAE | Checkpoint |
| ViT-L | MAE | Checkpoint |
| ResNet-50 | MoCo-v1 | Checkpoint |
| ResNet-50 | MoCo-v2 | Checkpoint |
| ResNet-50 | PCL | Checkpoint |
| ResNet-50 | HCSC | Checkpoint |
| ResNet-50 | SwAV | Checkpoint |
| ResNet-50/101/152 | InfoMin | Checkpoint |
| ResNeXt-101/152 | InfoMin | Checkpoint |

Training

Example: fine-tune a ResNet-50 pretrained with MoCo-v2 on CIFAR-10.

CIFAR-10 will be automatically downloaded to ./data.

```shell
bash train.sh 1 --cfg ./configs/cifar10_k2048_lr001.yaml
```

Citation

If you find our work helpful in your research, please cite it as:

```bibtex
@inproceedings{zhou2023dr,
  title={DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration},
  author={Zhou, Nan and Chen, Jiaxin and Huang, Di},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={1547--1556},
  year={2023}
}
```
