
PyTorch implementation of "RankMixup: Ranking-Based Mixup Training for Network Calibration"

This is the official PyTorch implementation of the paper "RankMixup: Ranking-Based Mixup Training for Network Calibration" (ICCV 2023).

For more information, check out our project site [website] or our paper [PDF].

Dependencies

  • Python >= 3.8
  • PyTorch == 1.8.1

Datasets

For CIFAR-10, the dataset is downloaded automatically when you run the code. For the other datasets (Tiny-ImageNet, CUB-200, and VOC 2012), please download them from their official sites, then add the absolute path of each data directory to the corresponding data config located in configs/data.
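
The exact keys in these configs depend on the repo; as a rough illustration of what setting data_root amounts to, here is a small OmegaConf sketch (OmegaConf is the config library underlying Hydra) with hypothetical keys:

from omegaconf import OmegaConf

# Hypothetical sketch only; the real keys live in configs/data/*.yaml.
cfg = OmegaConf.create({"data": {"name": "tiny_imagenet", "data_root": "???"}})
# Filling in the absolute path, as the override data.data_root=... does:
cfg.merge_with_dotlist(["data.data_root=/abs/path/to/tiny-imagenet-200"])
print(OmegaConf.to_yaml(cfg))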

Installation

pip install -e .

Training

The config files for our loss functions are configs/loss/mrl.yaml and configs/loss/mndcg.yaml; select one at the command line with loss=mrl or loss=mndcg.
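
As background, here is a conceptual sketch of the two objectives, written from the paper's description rather than copied from this repository (function names, tensor shapes, and the margin value are illustrative):

import torch
import torch.nn.functional as F

def mrl_loss(logits_raw, logits_mixed, margin=0.1):
    # Mixup-based Ranking Loss (sketch): the confidence of a raw sample
    # should exceed that of its mixed counterpart by at least `margin`.
    conf_raw = F.softmax(logits_raw, dim=1).amax(dim=1)
    conf_mixed = F.softmax(logits_mixed, dim=1).amax(dim=1)
    return F.relu(margin + conf_mixed - conf_raw).mean()

def mndcg_loss(conf, lam):
    # M-NDCG (sketch): given K mixed samples per anchor, confidences should
    # decrease as the mixup coefficient lambda decreases.
    # conf, lam: (B, K) tensors of max confidences and mixup coefficients.
    order = lam.argsort(dim=1, descending=True)
    conf_by_lam = conf.gather(1, order)  # confidences ranked by lambda
    ranks = torch.arange(2, conf.size(1) + 2, device=conf.device).float()
    discount = 1.0 / torch.log2(ranks)
    dcg = (conf_by_lam * discount).sum(dim=1)
    idcg = (conf.sort(dim=1, descending=True).values * discount).sum(dim=1)
    return (1.0 - dcg / idcg.clamp_min(1e-8)).mean()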

Tiny-ImageNet

You can train your own model with the default settings:

sh train_tiny.sh 

or you can freely define the parameters with your own settings:

CUDA_VISIBLE_DEVICES=0 python tools/train_net.py \
  log_period=100 \
  data=tiny_imagenet \
  data.data_root='your_dataset_directory' \
  model=resnet50_mixup_tiny model.num_classes=200 \
  loss=mrl \
  optim=sgd optim.lr=0.1 optim.momentum=0.9 \
  scheduler=multi_step scheduler.milestones="[40, 60]" \
  train.max_epoch=100 

CIFAR10

You can train your own model with the default settings:

sh train_cifar.sh 

or you can freely define the parameters with your own settings:

CUDA_VISIBLE_DEVICES=0 python tools/train_net.py \
  log_period=100 \
  data=cifar10 \
  data.data_root='your_dataset_directory' \
  model=resnet50_mixup model.num_classes=10 \
  loss=mrl \
  optim=sgd optim.lr=0.1 optim.momentum=0.9 \
  scheduler=multi_step scheduler.milestones="[80, 120]" \
  train.max_epoch=200 

Testing

Tiny-ImageNet

CUDA_VISIBLE_DEVICES=0 python tools/test_net.py \
    log_period=100 \
    data=tiny_imagenet \
    data.data_root='your_dataset_directory' \
    model=resnet50_mixup_tiny model.num_classes=200 \
    loss=mrl \
    hydra.run.dir='your_bestmodel_directory' \
    test.checkpoint=best.pth

CIFAR10

CUDA_VISIBLE_DEVICES=0 python tools/test_net.py \
    log_period=100 \
    data=cifar10 \
    data.data_root='your_dataset_directory' \
    model=resnet50_mixup model.num_classes=10 \
    loss=mrl \
    hydra.run.dir='your_bestmodel_directory' \
    test.checkpoint=best.pth
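
The paper evaluates calibration with metrics such as the expected calibration error (ECE). For reference, here is a minimal equal-width-binning ECE sketch, not necessarily the metric code used in this repo:

import torch

def expected_calibration_error(confidence, correct, n_bins=15):
    # confidence: (N,) max softmax probabilities; correct: (N,) booleans.
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = (correct[in_bin].float().mean() - confidence[in_bin].mean()).abs()
            ece = ece + in_bin.float().mean() * gap
    return ece.item()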

Bibtex

@inproceedings{noh2023rankmixup,
  title={RankMixup: Ranking-Based Mixup Training for Network Calibration},
  author={Noh, Jongyoun and Park, Hyekang and Lee, Junghyup and Ham, Bumsub},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={1358--1368},
  year={2023}
}

References

Our code is mainly based on FLSD and MbLS. For long-tailed (LT) datasets, we borrow code from MisLAS. Thanks to the authors!
