Code used for the results in the paper "ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning"
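The core idea of ClassMix can be summarized as: select half of the classes present in one image's segmentation mask and paste the corresponding pixels onto a second image. A minimal NumPy sketch of that idea (not the repository's actual implementation, which operates on predicted masks and mixes labels as well):

```python
import numpy as np

# Hedged sketch of the ClassMix mixing step, assuming dense integer masks.
# The function name and signature are illustrative, not the repo's API.
def classmix(img_a, img_b, mask_a):
    """Paste pixels of half the classes in mask_a from img_a onto img_b."""
    classes = np.unique(mask_a)
    # Randomly pick half of the classes present in image A.
    chosen = np.random.choice(classes, size=len(classes) // 2, replace=False)
    binary = np.isin(mask_a, chosen)                # True where A's pixels are kept
    mixed = np.where(binary[..., None], img_a, img_b)
    return mixed, binary
```

The same binary mask would also be applied to the (pseudo-)labels so that image and annotation stay consistent after mixing.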
- CUDA/CUDNN
- Python3
- Packages found in requirements.txt
Download the dataset from the Cityscapes dataset server (Link). Download the files 'gtFine_trainvaltest.zip' and 'leftImg8bit_trainvaltest.zip' and extract them into ../data/CityScapes/
Download the PASCAL VOC 2012 dataset from here. Download the file 'training/validation data' under 'Development kit' and extract it into ../data/VOC2012/. For training, you will also need to download additional labels from this link; extract that directory into ../data/VOC2012 as well.
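As a quick sanity check after extraction, a small script can report which expected dataset directories are missing. This is a sketch, assuming the directory names implied by the instructions above; adjust the paths if your layout differs:

```python
from pathlib import Path

# Hedged helper: list expected dataset directories that do not exist yet.
# The directory names are assumptions based on the setup instructions above.
def missing_dataset_dirs(data_root="../data"):
    expected = [
        Path(data_root) / "CityScapes" / "gtFine",
        Path(data_root) / "CityScapes" / "leftImg8bit",
        Path(data_root) / "VOC2012",
    ]
    return [str(p) for p in expected if not p.exists()]

if __name__ == "__main__":
    for path in missing_dataset_dirs():
        print("missing:", path)
```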
Arguments related to running the scripts are specified from the terminal and include: the number of GPUs to use (if more than one, torch.nn.DataParallel is used), the path to a configuration file (see below), the path to a .pth file if resuming training, the name of the experiment, and whether to save images during training. More details can be found in the relevant scripts.
Arguments related to the algorithms are specified in the configuration files. These include the model, the data, hyperparameters for training, and which methods to apply to unlabeled data. A full description is provided further below.
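For orientation, a configuration file along these lines could be expected. This fragment is illustrative only; every key name and value here is an assumption, so consult the actual files under ./configs/ for the real schema:

```json
{
  "model": "DeepLab",
  "dataset": "cityscapes",
  "training": {
    "batch_size": 2,
    "learning_rate": 2.5e-4,
    "num_iterations": 40000
  },
  "unlabeled": {
    "augmentation": "classmix"
  }
}
```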
python3 trainSSL.py --config ./configs/configCityscapes.json --name name_of_training
python3 trainSSL.py --resume path/to/checkpoint.pth --name name_of_training
python3 evaluateSSL.py --model-path path/to/checkpoint.pth
Here is a model trained with SSL using 1/8 (372) of the labeled samples on Cityscapes.