Official repository of the paper Adv-SSL: Adversarial Self-Supervised Representation Learning with Theoretical Guarantees.
Checkpoints are stored in the models directory every 100 epochs during training and can be used to reproduce the reported results. The dataset directory contains the code for reading the various datasets, the eval directory holds the evaluation code, the method directory contains the implementation of ASSRL, and model.py defines the encoder architecture.
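As an illustration, a saved checkpoint can be loaded back into an encoder before evaluation. The snippet below is a sketch only: it assumes the checkpoints are plain PyTorch `state_dict` files, uses a torchvision ResNet-18 as a stand-in for the encoder defined in model.py, and the filename is made up.

```python
import torch
from torchvision.models import resnet18

# Stand-in for the encoder defined in model.py; the real constructor may differ.
encoder = resnet18()
encoder.fc = torch.nn.Identity()  # expose features instead of class logits

# Assumption: checkpoints under models/ are plain state_dict files saved every
# 100 epochs; the filename below is illustrative, not a file shipped with the repo.
state = torch.load("models/cifar10_epoch800.pt", map_location="cpu")
encoder.load_state_dict(state)
encoder.eval()  # freeze the encoder for linear / k-NN evaluation
```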
- CIFAR-10
- CIFAR-100
- Tiny ImageNet
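Self-supervised training on these datasets typically consumes two independently augmented views of every image. The sketch below shows one way such a loader might look using torchvision only; the transform choices and the class itself are assumptions for illustration, not the repository's dataset code.

```python
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

# Assumed augmentation pipeline; the repository's exact transforms may differ.
augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.2, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

class TwoViewCIFAR10(CIFAR10):
    """Returns two independently augmented views of each image (illustrative)."""
    def __getitem__(self, index):
        img, _ = super().__getitem__(index)  # PIL image, label discarded
        return augment(img), augment(img)

# Usage: ds = TwoViewCIFAR10("data", train=True, download=True)
```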
| Method | CIFAR-10 (Linear) | CIFAR-10 (k-NN) | CIFAR-100 (Linear) | CIFAR-100 (k-NN) | Tiny ImageNet (Linear) | Tiny ImageNet (k-NN) |
|---|---|---|---|---|---|---|
| Barlow Twins | 87.32 | 84.74 | 55.88 | 46.41 | 41.52 | 27.00 |
| Beyond Separability | 86.95 | 82.04 | 56.48 | 48.62 | 41.04 | 31.58 |
| ASSRL | 93.01 | 90.97 | 68.94 | 58.50 | 50.21 | 37.40 |
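In the table, Linear denotes a linear classifier trained on frozen features and k-NN denotes a k-nearest-neighbour classifier run directly in the embedding space; all numbers are top-1 accuracy in percent. The function below is a minimal sketch of a similarity-weighted k-NN protocol; the value of k and the weighting scheme are assumptions, not necessarily what the eval code uses.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def knn_accuracy(train_feats, train_labels, test_feats, test_labels, k=200):
    """Top-1 accuracy of a cosine-similarity k-NN classifier on frozen features.

    Illustrative protocol only; k and the similarity-weighted vote are assumptions.
    """
    train_feats = F.normalize(train_feats, dim=1)
    test_feats = F.normalize(test_feats, dim=1)
    sims = test_feats @ train_feats.t()                  # (n_test, n_train) cosine similarities
    topk_sims, topk_idx = sims.topk(k, dim=1)
    topk_labels = train_labels[topk_idx]                 # labels of the k nearest neighbours
    n_classes = int(train_labels.max()) + 1
    votes = torch.zeros(test_feats.size(0), n_classes, device=test_feats.device)
    votes.scatter_add_(1, topk_labels, topk_sims)        # similarity-weighted class vote
    preds = votes.argmax(dim=1)
    return (preds == test_labels).float().mean().item()
```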
All experiments were conducted on a single Tesla V100 GPU with PyTorch 2.2.1+cu118 and CUDA 11.8.
To reproduce the results presented in the repository, Tiny ImageNet must be obtained from this repo. Otherwise, the model is unlikely to reach even 1% top-1 accuracy by the end of training.
The default settings work well; to see all available options:
python -m train --help
python -m test --help
Use the following commands to train ASSRL and the two baselines (Barlow Twins and Beyond Separability) on each dataset:
ASSRL:

python -m train --dataset cifar10
python -m train --dataset cifar100
python -m train --dataset tiny_in --lr 2e-3

Barlow Twins:

python -m train --dataset cifar10 --method barlow_twins
python -m train --dataset cifar100 --method barlow_twins
python -m train --dataset tiny_in --lr 2e-3 --method barlow_twins

Beyond Separability (haochen22):

python -m train --dataset cifar10 --method haochen22
python -m train --dataset cifar100 --method haochen22
python -m train --dataset tiny_in --lr 2e-3 --method haochen22
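For reference, the `barlow_twins` baseline corresponds to the published Barlow Twins objective: the cross-correlation matrix between the batch-normalised embeddings of the two augmented views is driven toward the identity, which enforces invariance on the diagonal and redundancy reduction off the diagonal. The function below is a self-contained sketch of that loss, not the repository's implementation; the weighting λ = 5e-3 is the value suggested in the original paper.

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Barlow Twins loss for two batches of embeddings z1, z2 of shape (N, D).

    Sketch of the published objective; hyperparameters here are illustrative.
    """
    n, _ = z1.shape
    # Standardise each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / n                                        # cross-correlation matrix (D, D)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction term
    return on_diag + lambd * off_diag
```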
The overall implementation framework is based on this repo, while the Barlow Twins implementation follows this repo.