
# MoCo v2 + SupContrast

The officially released code of SupContrast is built on SimCLR; this repository instead implements supervised contrastive learning on top of MoCo v2.

- [2020-CVPR] MoCo: Momentum Contrast for Unsupervised Visual Representation Learning [code]
- [2020-NeurIPS] Supervised Contrastive Learning [code]
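For orientation, here is a minimal single-GPU sketch of the MoCo mechanism this repository builds on: a query encoder trained by gradients, a key encoder updated only by momentum, and a FIFO queue of negative keys. This is an illustration, not code from this repository (it omits shuffled BN and the distributed gathering used in the real training scripts, and `MoCoSketch` is a hypothetical name):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class MoCoSketch(nn.Module):
    """Illustrative MoCo wrapper: query encoder, momentum key encoder, FIFO queue."""

    def __init__(self, base_encoder=models.resnet50, dim=128, K=4096, m=0.999, T=0.2):
        super().__init__()
        self.K, self.m, self.T = K, m, T
        self.encoder_q = base_encoder(num_classes=dim)
        self.encoder_k = base_encoder(num_classes=dim)
        # The key encoder starts as a copy of the query encoder and is
        # updated only by momentum, never by gradients.
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data.copy_(p_q.data)
            p_k.requires_grad = False
        self.register_buffer("queue", F.normalize(torch.randn(dim, K), dim=0))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data.mul_(self.m).add_(p_q.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _dequeue_and_enqueue(self, keys):
        # Replace the oldest keys in the queue (assumes K % batch_size == 0).
        ptr = int(self.queue_ptr)
        self.queue[:, ptr:ptr + keys.shape[0]] = keys.T
        self.queue_ptr[0] = (ptr + keys.shape[0]) % self.K

    def forward(self, im_q, im_k):
        q = F.normalize(self.encoder_q(im_q), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)
        # One positive logit per query (its own key), K negatives from the queue.
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
        l_neg = torch.einsum("nc,ck->nk", q, self.queue.clone().detach())
        logits = torch.cat([l_pos, l_neg], dim=1) / self.T
        labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
        self._dequeue_and_enqueue(k)
        return logits, labels  # feed to nn.CrossEntropyLoss
```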

## Unsupervised Training on Cifar-10

This implementation only supports multi-GPU DistributedDataParallel training, which is faster and simpler; single-GPU and DataParallel training are not supported.

There are three training entry points: main_moco_in.py, main_moco_out.py, and main_moco_suploss.py. The suffixes name the variant of supervised contrastive loss each one uses (see the sketch below).
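If the suffixes follow the L_out / L_in distinction from the SupContrast paper (per-positive normalization outside vs. inside the log), with labeled positives drawn from both the current batch keys and the MoCo queue, the two losses can be sketched as follows. This mapping is my reading of the names, not something the repository states, and the function and argument names are illustrative:

```python
import torch
import torch.nn.functional as F

def supcon_losses(q, k, queue, labels_q, labels_k, labels_queue, T=0.2):
    """q, k: (N, C) L2-normalized features; queue: (C, K); labels_*: int tensors."""
    cand = torch.cat([k, queue.T.detach()], dim=0)             # (N + K, C) candidates
    logits = q @ cand.T / T                                    # (N, N + K) similarities
    cand_labels = torch.cat([labels_k, labels_queue], dim=0)
    pos = (labels_q[:, None] == cand_labels[None, :]).float()  # same-class mask
    n_pos = pos.sum(1).clamp(min=1)

    # "out": average the log-probabilities of the positives (sum outside the log).
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss_out = -(pos * log_prob).sum(1) / n_pos

    # "in": average the probabilities of the positives first, then take the log.
    prob = F.softmax(logits, dim=1)
    loss_in = -torch.log((pos * prob).sum(1) / n_pos)

    return loss_out.mean(), loss_in.mean()
```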

To do unsupervised pre-training of a ResNet-50 model on Cifar-10 on a 2-GPU machine, run:

```
CUDA_VISIBLE_DEVICES=0,1 python main_moco_out.py \
  -a resnet50 \
  --lr 0.12 \
  --batch-size 256 --moco-k 4096 \
  --mlp --moco-t 0.2 --aug-plus --cos \
  --dist-url 'tcp://localhost:10013' --multiprocessing-distributed --world-size 1 --rank 0 \
  data/ [your cifar-10 folder with train and val folders]
```

## Linear Classification

With a pre-trained model, to train a supervised linear classifier on frozen features/weights on a 2-GPU machine, run:

```
CUDA_VISIBLE_DEVICES=0,1 python main_lincls.py \
  -a resnet50 \
  --lr 1.0 \
  --batch-size 256 \
  --pretrained checkpoint_0999.pth.tar \
  --dist-url 'tcp://localhost:10014' --multiprocessing-distributed --world-size 1 --rank 0 \
  data/ [your cifar-10 folder with train and val folders]
```
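main_lincls.py presumably follows the standard MoCo linear-evaluation recipe: load the query-encoder weights from the checkpoint, freeze the backbone, and train only a freshly initialized final linear layer. A sketch of that loading/freezing step (the state-dict key names follow the official MoCo code and are an assumption here; adjust if this repository renames them):

```python
import torch
import torchvision.models as models

# Cifar-10 has 10 classes; fc becomes the linear classifier to be trained.
model = models.resnet50(num_classes=10)

# Keep only the query-encoder weights from the MoCo checkpoint
# (the 1000-epoch checkpoint from the command above), dropping the
# MLP projection head (module.encoder_q.fc.*).
checkpoint = torch.load("checkpoint_0999.pth.tar", map_location="cpu")
state_dict = checkpoint["state_dict"]
for key in list(state_dict.keys()):
    if key.startswith("module.encoder_q.") and not key.startswith("module.encoder_q.fc"):
        state_dict[key[len("module.encoder_q."):]] = state_dict[key]
    del state_dict[key]
msg = model.load_state_dict(state_dict, strict=False)
assert set(msg.missing_keys) == {"fc.weight", "fc.bias"}

# Freeze the backbone; only the linear classifier stays trainable.
for name, param in model.named_parameters():
    if name not in ("fc.weight", "fc.bias"):
        param.requires_grad = False
model.fc.weight.data.normal_(mean=0.0, std=0.01)
model.fc.bias.data.zero_()
```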

## Comparison

Linear classification results on Cifar-10 with 2 or 8 2080Ti GPUs:

| method | pre-train epochs | pre-train batch-size | ResNet-50 top-1 acc. | ResNet-50 top-5 acc. |
| --- | --- | --- | --- | --- |
| SupCrossEntropy | 500 | 1024 | 95.0 | - |
| SupContrast | 1000 | 1024 | 96.0 | - |
| SupContrast (Our Rerun) | 1000 | 1024 | 95.6 | - |
| MoCo v2 | 200 | 256 | 87.4 | 99.6 |
| MoCo v2 + SupContrast_In | 200 | 256 | 95.4 | 99.9 |
| MoCo v2 | 1000 | 256 | 93.6 | 99.8 |
| MoCo v2 + SupContrast_In | 1000 | 256 | 96.1 | 99.8 |
| MoCo v2 + SupContrast_Out | 1000 | 256 | 96.1 | 99.9 |
| MoCo v2 + SupContrast_Suploss | 1000 | 256 | 96.0 | 99.9 |

Linear classification results on Cifar-100 with 2 2080Ti GPUs:

| method | pre-train epochs | pre-train batch-size | ResNet-50 top-1 acc. | ResNet-50 top-5 acc. |
| --- | --- | --- | --- | --- |
| SupCrossEntropy | 500 | 1024 | 75.3 | - |
| SupContrast | 1000 | 1024 | 76.5 | - |
| MoCo v2 + SupContrast_Out | 1000 | 256 | 77.3 | 93.0 |
| MoCo v2 + SupContrast_Suploss | 1000 | 256 | 78.0 | 93.2 |

## Reference

This repository builds on the papers MoCo, MoCo v2, and SupContrast:

```
@Article{he2019moco,
  author  = {Kaiming He and Haoqi Fan and Yuxin Wu and Saining Xie and Ross Girshick},
  title   = {Momentum Contrast for Unsupervised Visual Representation Learning},
  journal = {arXiv preprint arXiv:1911.05722},
  year    = {2019},
}

@Article{chen2020mocov2,
  author  = {Xinlei Chen and Haoqi Fan and Ross Girshick and Kaiming He},
  title   = {Improved Baselines with Momentum Contrastive Learning},
  journal = {arXiv preprint arXiv:2003.04297},
  year    = {2020},
}

@Article{khosla2020supervised,
  author  = {Prannay Khosla and Piotr Teterwak and Chen Wang and Aaron Sarna and Yonglong Tian and Phillip Isola and Aaron Maschinot and Ce Liu and Dilip Krishnan},
  title   = {Supervised Contrastive Learning},
  journal = {arXiv preprint arXiv:2004.11362},
  year    = {2020},
}
```
