PyTorch implementation of Contrastive Attraction and Contrastive Repulsion (CACR) for Representation Learning.
CACR is a distributional self-supervised learning method: both the positive and the negative samples are treated as distributions in the representation space. CACR leverages a Bayesian strategy to align the anchor with the positive distribution (attraction) while distinguishing it from the negative distribution (repulsion).
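As a rough illustration of the attraction/repulsion idea (not the exact loss from the paper), the sketch below reweights positives and negatives by their similarity to the anchor: distant positives receive larger attraction weights and nearby negatives receive larger repulsion weights. The function name `cacr_loss_sketch` and the temperature `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def cacr_loss_sketch(anchor, positives, negatives, tau=0.5):
    """Illustrative sketch of a CACR-style weighted contrastive loss.

    anchor:    (D,) representation of the anchor sample
    positives: (P, D) representations of positive samples
    negatives: (N, D) representations of negative samples
    """
    # Cosine similarity between the anchor and each sample.
    pos_sim = F.cosine_similarity(anchor.unsqueeze(0), positives, dim=1)  # (P,)
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1)  # (N,)

    # Attraction weights: emphasize hard (far-away) positives.
    w_pos = torch.softmax(-pos_sim / tau, dim=0)
    # Repulsion weights: emphasize hard (nearby) negatives.
    w_neg = torch.softmax(neg_sim / tau, dim=0)

    attraction = -(w_pos * pos_sim).sum()  # pull weighted positives closer
    repulsion = (w_neg * neg_sim).sum()    # push weighted negatives away
    return attraction + repulsion
```

This is only a minimal sketch of the attraction/repulsion weighting; please refer to the paper and the training code in the subfolders for the actual objective.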
This repository contains three folders, corresponding to our experiments on small-scale (both balanced and imbalanced) datasets, the large-scale standard dataset (ImageNet), and the large-scale label-shifted datasets.
- To reproduce our results on small-scale experiments (CIFAR10/CIFAR100/STL10), please refer to the `small_scale_experiments` folder.
- To reproduce our main results on standard large-scale experiments (ImageNet1K), please refer to the `imagenet_pretraining` folder.
- To reproduce our results on large-scale label-shifted experiments (ImageNet22K/WebVision -> ImageNet1K), please refer to the `large_scale_shift_experiments` folder.
More details regarding the training configuration and running command are explained in the README under each subfolder.
| model | pretrain epochs | linear acc | checkpoint |
|---|---|---|---|
| ResNet50 | 1000 | 74.7 | download |
| ViT-Base | 300 | 77.1 | download |
Please feel free to check the performance of our learned representations in the Image in the Wild Challenge.
Please cite our work if you find it helpful. Thank you!
@article{zheng2023contrastive,
title={Contrastive Attraction and Contrastive Repulsion for Representation Learning},
author={Huangjie Zheng and Xu Chen and Jiangchao Yao and Hongxia Yang and Chunyuan Li and Ya Zhang and Hao Zhang and Ivor Tsang and Jingren Zhou and Mingyuan Zhou},
journal={Transactions on Machine Learning Research},
issn={2835-8856},
year={2023},
url={https://openreview.net/forum?id=f39UIDkwwc},
}