
Contrastive Complementary Labeling

This repository is the official implementation of Boosting Semi-Supervised Learning with Contrastive Complementary Labeling.

The key contributions of this paper are as follows:

  • We propose a novel Contrastive Complementary Labeling (CCL) method that constructs reliable negative pairs based on the complementary labels, i.e., the classes that a sample does not belong to. Indeed, our CCL effectively exploits low-confidence samples to provide additional information for the training process.
  • We develop a complementary labeling strategy to construct reliable negative pairs. Specifically, for low-confidence data, we first select multiple classes with the lowest predicted probability as complementary labels. These reliable complementary labels are then used to construct a large number of negative pairs, which greatly benefits contrastive learning (a minimal sketch of this step appears after this list).
  • Extensive experiments on multiple benchmark datasets show that CCL effectively improves the performance of existing pseudo-label-based SSL methods. Moreover, under label-scarce settings, CCL effectively unleashes the power of low-confidence samples. For example, on CIFAR-10 with 10, 20, and 40 labeled samples, CCL improves the accuracy of FixMatch by 5.63%, 2.93%, and 2.43%, respectively.
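
Below is a minimal PyTorch-style sketch of the complementary-labeling step described above. It is an illustration, not the repository's implementation: the function name, the confidence threshold, and the number of complementary labels k are assumptions made for the example.

```python
import torch

def complementary_label_mask(probs, conf_threshold=0.95, k=3):
    """Illustrative sketch of complementary labeling (not the official CCL code).

    probs: (N, C) softmax outputs for a batch of unlabeled samples.
    Returns a boolean (N, C) mask that marks, for each low-confidence sample,
    the k classes with the LOWEST predicted probability, i.e., the classes the
    sample most likely does NOT belong to. High-confidence samples get an empty
    mask, since they are handled by ordinary pseudo-labeling in the base method.
    """
    confident = probs.max(dim=1).values >= conf_threshold    # (N,) high-confidence samples
    lowest = probs.topk(k, dim=1, largest=False).indices     # (N, k) least-likely classes
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask.scatter_(1, lowest, True)                           # mark complementary labels
    mask[confident] = False                                  # keep only low-confidence samples
    return mask
```

In the paper's formulation, these complementary labels are then used to build negative pairs for contrastive learning, roughly by treating a low-confidence sample and any sample assigned to one of its complementary classes as a reliable negative pair.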

1. Requirements

  • To install requirements:
pip install -r requirements.txt

2. Training

  • The configurations of the CCL methods can be found in the config directory. By default, CIFAR-10, SVHN, and STL-10 use a single GPU for training, while CIFAR-100 uses two GPUs. For example, to train CCL-FixMatch on CIFAR-10 with 40 labeled samples, run:
python ccl_fixmatch.py --c config/ccl_fixmatch/cifar10/ccl_fixmatch_cifar10_40_seed0.yaml

To train CCL-FlexMatch on STL-10 with 20 labeled samples, run:

python ccl_flexmatch.py --c config/ccl_flexmatch/stl10/ccl_flexmatch_stl10_20_seed0.yaml

3. Pretrained models

  • We release our CCL pretrained models here. Each checkpoint contains the keys {'model', 'optimizer', 'scheduler', 'it', 'ema_model', 'best_eval_acc', 'best_eval_iter'}, where 'it' is the current iteration, 'best_eval_acc' is the best top-1 accuracy, and 'best_eval_iter' is the iteration at which the best top-1 accuracy was obtained (see the loading snippet at the end of this section).

  • CCL is helpful for FixMatch/FlexMatch on all benchmarks, and it achieves larger gains when the task contains more noise (i.e., fewer labels).

  • Under the label-scarce setting, compared to FixMatch/FlexMatch, CCL-FixMatch/CCL-FlexMatch significantly improve accuracy.

Top-1 accuracy (%); the second header row gives the number of labeled samples.

Method          CIFAR-10                  CIFAR-100           STL-10
# labels        10       20       40      200      400        10       20
CCL-FixMatch    74.92    89.98    95.07   43.83    54.41      47.76    53.23
CCL-FlexMatch   94.83    94.99    95.12   52.51    62.20      50.98    58.36

CCL also brings accuracy gains under settings with more labeled data.

Method          CIFAR-10           CIFAR-100
# labels        250      4000      2500     10000
CCL-FixMatch    95.18    95.87     72.19    78.11
CCL-FlexMatch   95.33    95.92     73.77    78.17

Method          STL-10                       SVHN
# labels        40       250      1000       40       250      1000
CCL-FixMatch    71.38    92.25    94.13      98.04    98.04    98.11
CCL-FlexMatch   76.34    91.90    94.39      96.69    96.91    95.64
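
The checkpoints can be inspected directly with PyTorch. The snippet below is a minimal sketch: the file name is a placeholder for whichever checkpoint you downloaded, and it assumes the checkpoint is a standard PyTorch pickle containing the keys listed above.

```python
import torch

# Load a downloaded CCL checkpoint (the file name is a placeholder).
ckpt = torch.load("ccl_fixmatch_cifar10_40.pth", map_location="cpu")

print(sorted(ckpt.keys()))     # ['best_eval_acc', 'best_eval_iter', 'ema_model', 'it', 'model', ...]
print(ckpt["it"])              # iteration at which the checkpoint was saved
print(ckpt["best_eval_acc"])   # best top-1 accuracy reached during training
print(ckpt["best_eval_iter"])  # iteration at which that best accuracy was obtained
# 'model' and 'ema_model' hold the network weights and can be restored with
# load_state_dict on a model of the matching architecture.
```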

4. Citation

  • If you find our work inspiring or use our codebase in your research, please cite our paper:
@article{deng2022boosting, 
  title={Boosting Semi-Supervised Learning with Contrastive Complementary Labeling}, 
  author={Deng, Qinyi and Guo, Yong and Yang, Zhibang and Pan, Haolin and Chen, Jian}, 
  journal={arXiv preprint arXiv:2212.06643}, 
  year={2022} 
}

5. Acknowledgements

  • This project is developed based on TorchSSL.
