pangzss/PixCon

To Do

  • Upload Code.
  • Upload Checkpoint.

Installation

git clone https://github.com/pangzss/PixCon.git
cd PixCon
conda env create -f environment.yml
conda activate pixcon

Pre-training on COCO

Step 0. Download the COCO dataset. You can also download the unlabeled set, which, combined with the previously downloaded set, forms COCO+.
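The 2017 train/val/unlabeled image archives are hosted under images.cocodataset.org. As a convenience, a minimal download sketch; the destination directory is a placeholder, and the helper name is ours, not the repo's:

```python
import os
import urllib.request

# Standard COCO 2017 image archives; unlabeled2017 is only needed for COCO+.
COCO_URLS = [
    "http://images.cocodataset.org/zips/train2017.zip",
    "http://images.cocodataset.org/zips/val2017.zip",
    "http://images.cocodataset.org/zips/unlabeled2017.zip",  # optional, COCO+ only
]

def download_archives(dest_dir, urls=COCO_URLS):
    """Fetch each archive into dest_dir, skipping files that already exist."""
    os.makedirs(dest_dir, exist_ok=True)
    paths = []
    for url in urls:
        target = os.path.join(dest_dir, os.path.basename(url))
        if not os.path.exists(target):
            urllib.request.urlretrieve(url, target)
        paths.append(target)
    return paths
```

Unzip each archive into your COCO directory afterwards (e.g. with `unzip`).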

Step 1. Configure the data paths. In configs/selfsup/base/datasets/coco_coord.py, point the paths at your COCO copy (and, if using COCO+, the unlabeled set):

data = dict(
    train=dict(
        main_dir='/data/path/to/coco',
        coco_plus_dir="/data/path/to/unlabeled_set"
    ))
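A wrong path typically only surfaces once the dataloader is built, so a quick sanity check before launching can save a run. A minimal sketch; the `missing_dirs` helper is ours and not part of the repo:

```python
import os

def missing_dirs(data_cfg):
    """Return configured dataset directories that do not exist on disk."""
    train = data_cfg["train"]
    paths = [train.get("main_dir"), train.get("coco_plus_dir")]
    return [p for p in paths if p and not os.path.isdir(p)]

# Mirror of the config snippet above
data = dict(
    train=dict(
        main_dir="/data/path/to/coco",
        coco_plus_dir="/data/path/to/unlabeled_set",
    )
)

for p in missing_dirs(data):
    print(f"warning: dataset directory not found: {p}")
```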

Step 2. Run pre-training script.

./scripts/pixcon_sr_resnet50_coco_800ep.sh

The pre-training is done on 4 GPUs by default.

Preparing for Evaluation

Step 0. Prepare datasets for detection.

ln -s /data/path/to/coco tools/benchmarks/detectron2/datasets/coco
ln -s /data/path/to/VOC2007 tools/benchmarks/detectron2/datasets/VOC2007
ln -s /data/path/to/VOC2012 tools/benchmarks/detectron2/datasets/VOC2012

Step 1. Prepare datasets for segmentation.

We follow mmsegmentation's instructions for dataset preparation. For it to work under the mmselfsup framework, however, we need symbolic links to the datasets:

mkdir data && cd data
ln -s /data/path/to/seg_dataset ${DATASET_NAME}

Step 2. Extract backbone weights.

python tools/extract_backbone_weights.py \
work_dirs/selfsup/${EXP_NAME}/epoch_800.pth \
work_dirs/selfsup/${EXP_NAME}/backbone.pth
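In spirit, the extraction step keeps only the backbone parameters from the full self-supervised checkpoint. A sketch of that key filtering; the `backbone.` prefix follows common mmselfsup conventions and is an assumption here, and the real script loads/saves with `torch.load`/`torch.save`:

```python
def extract_backbone(state_dict, prefix="backbone."):
    """Keep only keys under `prefix`, stripping the prefix itself."""
    return {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

# Toy example: projection-head (neck) weights are dropped.
ckpt = {
    "backbone.conv1.weight": "w0",
    "backbone.layer1.0.conv1.weight": "w1",
    "neck.fc.weight": "w2",
}
print(sorted(extract_backbone(ckpt)))  # → ['conv1.weight', 'layer1.0.conv1.weight']
```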

Step 3. Convert backbone to detectron2 format.

python tools/benchmarks/detectron2/convert-pretrain-to-detectron2.py \
work_dirs/selfsup/${EXP_NAME}/backbone.pth \
work_dirs/selfsup/${EXP_NAME}/detectron2.pkl
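The conversion step renames ResNet parameters from the torchvision convention to detectron2's and writes a pickle that detectron2's checkpointer can load. A simplified, illustrative sketch of that renaming, approximated from the widely used MoCo-style converter rather than copied from this repo's script:

```python
def to_detectron2(backbone_weights):
    """Rename torchvision-style ResNet keys to detectron2's naming scheme."""
    converted = {}
    for k, v in backbone_weights.items():
        # residual stages: layerN -> res(N+1)
        k = k.replace("layer1", "res2").replace("layer2", "res3")
        k = k.replace("layer3", "res4").replace("layer4", "res5")
        # the stem conv/bn live under "stem." in detectron2
        if k.startswith(("conv1", "bn1")):
            k = "stem." + k
        # batchnorms become <conv>.norm; downsample branches become shortcut
        k = k.replace("bn1", "conv1.norm").replace("bn2", "conv2.norm")
        k = k.replace("bn3", "conv3.norm")
        k = k.replace("downsample.0", "shortcut").replace("downsample.1", "shortcut.norm")
        converted[k] = v
    # detectron2 checkpointers accept pkl dicts of this shape
    return {"model": converted, "matching_heuristics": True}

# Dump with: pickle.dump(to_detectron2(weights), open("detectron2.pkl", "wb"))
```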

Object Detection

We use detectron2 to fine-tune the pre-trained models on the VOC and COCO detection tasks. All evaluations are done on 4 GPUs.

VOC Detection

bash tools/benchmarks/detectron2/run.sh \
configs/benchmarks/detectron2/pascal_voc_R_50_C4_24k_moco.yaml \
work_dirs/selfsup/${EXP_NAME}/detectron2.pkl \
work_dirs/selfsup/benchmarks/detectron2/voc12/${EXP_NAME}

COCO Detection

bash tools/benchmarks/detectron2/run.sh \
configs/benchmarks/detectron2/coco_R_50_FPN_1x_moco.yaml \
work_dirs/selfsup/${EXP_NAME}/detectron2.pkl \
work_dirs/selfsup/benchmarks/detectron2/coco/${EXP_NAME}

Semantic Segmentation

We use mmsegmentation to fine-tune pre-trained models on semantic segmentation tasks. All fine-tuning is conducted on 4 GPUs with a total batch size of 16.

VOC (aug) Segmentation

bash tools/benchmarks/mmsegmentation/mim_dist_train.sh \
configs/benchmarks/mmsegmentation/voc12aug/fcn_d6_r50-d16_513x513_30k_voc12aug_moco.py \
work_dirs/selfsup/${EXP_NAME}/backbone.pth \
4 \
--work-dir work_dirs/selfsup/benchmarks/mmseg/voc12aug/${EXP_NAME}

Cityscapes Segmentation

bash tools/benchmarks/mmsegmentation/mim_dist_train.sh \
configs/benchmarks/mmsegmentation/cityscapes/fcn_d6_r50-d16_769x769_90k_cityscapes_moco.py \
work_dirs/selfsup/${EXP_NAME}/backbone.pth \
4 \
--work-dir work_dirs/selfsup/benchmarks/mmseg/cityscapes/${EXP_NAME}

About

PyTorch code for "Revisiting Pixel-Level Contrastive Pre-Training on Scene Images"
