Self-KD-Lib

[ECCV-2022] Official implementation of "MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition", plus PyTorch implementations of several self-knowledge distillation and data augmentation methods.

This project provides implementations of several data augmentation, regularization, online knowledge distillation, and self-knowledge distillation methods.

Installation

Requirements

Ubuntu 18.04 LTS

Python 3.8 (Anaconda is recommended)

CUDA 11.1

PyTorch 1.12 + torchvision 0.13
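
Assuming the stack above is installed, the setup can be sanity-checked from Python (a minimal sketch; the expected version strings are the ones listed above):

```python
# Verify that PyTorch, torchvision, and CUDA are visible before running experiments.
import torch
import torchvision

print(f"PyTorch:     {torch.__version__}")        # expect 1.12.x
print(f"torchvision: {torchvision.__version__}")  # expect 0.13.x
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
```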

Perform experiments on the CIFAR-100 dataset

Dataset

CIFAR-100: download and unzip it to the ./data folder.

The commands for running the various methods can be found in main_cifar.sh.
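
For reference, a minimal loading sketch using the standard torchvision CIFAR-100 pipeline (an assumption for illustration; the repo's own dataloader and transforms may differ, see the training scripts for the exact setup):

```python
# Minimal CIFAR-100 loading sketch; the crop/flip transforms are common
# CIFAR defaults and may differ from the repo's exact augmentation setup.
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

train_set = torchvision.datasets.CIFAR100(
    root='./data',     # the ./data folder mentioned above
    train=True,
    download=True,     # or False if the archive is already unzipped to ./data
    transform=transform,
)
print(len(train_set))  # 50000 training images
```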

Top-1 accuracy (%) of Self-KD and data augmentation (DA) methods on ResNet-18:

| Type | Method | Venue | Accuracy (%) |
|------|--------|-------|--------------|
| Baseline | Cross-entropy | - | 76.24 |
| Self-KD | DDGSD [1] | AAAI-2019 | 76.61 |
| Self-KD | DKS [2] | CVPR-2019 | 78.64 |
| Self-KD | SAD [3] | ICCV-2019 | 76.40 |
| Self-KD | BYOT [4] | ICCV-2019 | 77.88 |
| Self-KD | Tf-KD-reg [5] | CVPR-2020 | 76.61 |
| Self-KD | CS-KD [6] | CVPR-2020 | 78.66 |
| Self-KD | FRSKD [7] | CVPR-2021 | 76.60 |
| Self-KD | PS-KD [8] | ICCV-2021 | 79.31 |
| Self-KD | BAKE [9] | arXiv:2104.13298 | 76.93 |
| Self-KD | MixSKD [10] | ECCV-2022 | 80.32 |
| DA | Label Smoothing [1] | CVPR-2016 | 78.72 |
| DA | Virtual Softmax [2] | NeurIPS-2018 | 78.54 |
| DA | Focal Loss [3] | ICCV-2017 | 76.19 |
| DA | Maximum Entropy [4] | ICLR Workshops-2017 | 76.50 |
| DA | Cutout [5] | arXiv:1708.04552 | 76.66 |
| DA | Random Erasing [6] | AAAI-2020 | 76.75 |
| DA | Mixup [7] | ICLR-2018 | 78.68 |
| DA | CutMix [8] | ICCV-2019 | 80.17 |
| DA | AutoAugment [9] | CVPR-2019 | 77.97 |
| DA | RandAugment [10] | CVPR Workshops-2020 | 76.86 |
| DA | AugMix [11] | arXiv:1912.02781 | 76.22 |
| DA | TrivialAugment [12] | ICCV-2021 | 76.03 |

Some implementations follow the authors' official code, and we thank the papers' authors for releasing it. The results were reproduced with our released code, so they may not be strictly consistent with those reported in the original papers.
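
To make one of the table's entries concrete, below is a minimal sketch of a standard Mixup training step (Mixup [7] above); the helper name and alpha value are illustrative, not taken from this repo:

```python
# Standard Mixup training step (ICLR-2018 formulation): train on a convex
# combination of two inputs and the same convex combination of their losses.
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, alpha=0.2):
    """One Mixup training step on a batch (x, y); returns the loss."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0), device=x.device)  # random pairing

    x_mix = lam * x + (1.0 - lam) * x[index]            # mixed inputs
    logits = model(x_mix)

    loss = lam * F.cross_entropy(logits, y) \
         + (1.0 - lam) * F.cross_entropy(logits, y[index])
    return loss
```

Roughly speaking, MixSKD builds on this setup by additionally distilling feature maps and probability distributions between the pair of original images and their mixup image; see the paper for the actual formulation.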

Perform experiments on the ImageNet dataset

| MixSKD | Top-1 Accuracy (%) | Script | Log | Pretrained Model |
|--------|--------------------|--------|-----|------------------|
| ResNet-50 | 78.76 | sh | Baidu Cloud | Baidu Cloud |

Perform downstream object detection experiments on COCO

Our object detection implementation is based on MMDetection. Please refer to the detailed guideline at https://github.com/winycg/detection.

| Framework | mAP | Log | Pretrained Model |
|-----------|-----|-----|------------------|
| Cascade-Res50 | 41.6 | Baidu Cloud | Baidu Cloud |
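
A typical way to fine-tune from the MixSKD ImageNet checkpoint in an MMDetection-style config is to override the backbone initialization. Below is a sketch under assumptions: the base config path follows MMDetection's layout, and mixskd_resnet50.pth is a hypothetical local checkpoint path; see the detection repo above for the real configs.

```python
# Hypothetical MMDetection config sketch: initialize the ResNet-50 backbone
# from a MixSKD-pretrained checkpoint before fine-tuning on COCO.
_base_ = ['../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py']  # assumed base config

model = dict(
    backbone=dict(
        # 'Pretrained' init_cfg loads backbone weights from a local checkpoint;
        # the path below is a placeholder, not a file shipped with this repo.
        init_cfg=dict(type='Pretrained', checkpoint='mixskd_resnet50.pth'),
    ),
)
```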

Perform downstream semantic segmentation experiments

The training scripts are based on our previously released segmentation codebase: https://github.com/winycg/CIRKD

| Dataset | mIoU | Script | Log | Pretrained Model |
|---------|------|--------|-----|------------------|
| ADE20K | 42.37 | sh | Baidu Cloud | Baidu Cloud |
| COCO-Stuff-164K | 37.12 | sh | Baidu Cloud | Baidu Cloud |
| Pascal VOC | 78.78 | sh | Baidu Cloud | Baidu Cloud |

If you find this repository useful, please consider citing the following paper:

```
@inproceedings{yang2022mixskd,
  title={MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition},
  author={Yang, Chuanguang and An, Zhulin and Zhou, Helong and Cai, Linhang and Zhi, Xiang and Wu, Jiwen and Xu, Yongjun and Zhang, Qian},
  booktitle={European Conference on Computer Vision},
  year={2022}
}
```
