AAAI 2020 (Spotlight).
The code of the new extended work is now available. In the future, I will merge the two works to make the whole project more elegant.
The paper of the extended work will come soon.
This project is based on Regularized loss and PSA.
cd wrapper/bilateralfilter
swig -python -c++ bilateralfilter.i
python setup.py install
For more details, please see here
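The `bilateralfilter` extension computes the Gaussian bilateral affinities used by the regularized (dense energy) loss. As a rough illustration only, not the project's API, here is a pure-NumPy sketch of the pairwise weight between two pixels; the function name and the bandwidths `sigma_xy` and `sigma_rgb` are made-up values:

```python
import numpy as np

def bilateral_weight(p_i, p_j, rgb_i, rgb_j, sigma_xy=15.0, sigma_rgb=0.1):
    """Gaussian bilateral affinity between two pixels: nearby positions
    and similar colors give a weight near 1, dissimilar pairs near 0."""
    pos = np.sum((np.asarray(p_i, float) - np.asarray(p_j, float)) ** 2) / (2 * sigma_xy ** 2)
    col = np.sum((np.asarray(rgb_i, float) - np.asarray(rgb_j, float)) ** 2) / (2 * sigma_rgb ** 2)
    return float(np.exp(-pos - col))
```

The SWIG-wrapped C++ module evaluates such affinities densely over all pixel pairs via a fast approximation, which is why it is built as a native extension rather than written in Python.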
Google: due to the coronavirus outbreak in China, I will upload the models after I can return to my lab. In the meantime, you can download [ilsvrc-cls_rna-a1_cls1000_ep-0001.params] and [res38_cls.pth] from here.
[ilsvrc-cls_rna-a1_cls1000_ep-0001.params] is the initial pretrained model.
[res38_cls.pth] is a classification model pretrained on the VOC 2012 dataset.
[RRM_final.pth] is the final model (AAAI).
[RRM(attention)_final.pth] is the final model of the new extended work (64.7 mIoU on the PASCAL VOC 2012 val set).
You need 4 GPUs and the pretrained model [ilsvrc-cls_rna-a1_cls1000_ep-0001.params]:
python train_from_init(attention).py --voc12_root /your/path/VOCdevkit/VOC2012
You only need 2 GPUs and the pretrained model [res38_cls.pth]:
python train_from_cls_weight(attention).py --IMpath /your/path/VOCdevkit/VOC2012/JPEGImages
I suggest using the second method because of its lower computing cost.
You need 4 GPUs and the pretrained model [ilsvrc-cls_rna-a1_cls1000_ep-0001.params]:
python train_from_init.py --voc12_root /your/path/VOCdevkit/VOC2012
You only need 1 GPU and the pretrained model [res38_cls.pth]:
python train_from_cls_weight.py --IMpath /your/path/VOCdevkit/VOC2012/JPEGImages
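Both training modes jointly optimize a cross-entropy term on reliably pseudo-labeled pixels and the regularized loss on the remaining ones. A simplified NumPy sketch of the masked cross-entropy part follows; the function name and shapes are illustrative, not the repo's API:

```python
import numpy as np

def masked_cross_entropy(probs, labels, reliable_mask):
    """Cross-entropy averaged only over pixels whose pseudo-label is
    marked reliable; unreliable pixels are left to the regularized loss.
    probs: (H, W, C) softmax outputs; labels: (H, W) int class ids;
    reliable_mask: (H, W) bool."""
    h, w = labels.shape
    # Pick the predicted probability of each pixel's pseudo-label class.
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    losses = -np.log(np.clip(picked, 1e-12, None))
    return float(losses[reliable_mask].mean())
```

In the actual scripts this role is played by the segmentation branch's loss on the mined reliable regions, computed with PyTorch on the GPU.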
You need 1 GPU and the final model [RRM(attention)_final.pth]:
python infer_RRM.py --IMpath /your/path/VOCdevkit/VOC2012/JPEGImages
You need 1 GPU and the final model [RRM_final.pth]:
python infer_RRM.py --IMpath /your/path/VOCdevkit/VOC2012/JPEGImages
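The reported val score is mean intersection-over-union over the VOC classes. As a reference for checking your own inference results, here is a minimal NumPy sketch of the metric; the official evaluation script may differ in details such as ignore-label handling:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union computed from a confusion matrix.
    pred, gt: integer arrays of class ids with the same shape."""
    conf = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    ious = inter / np.maximum(union, 1)  # avoid division by zero
    return float(ious.mean())
```

For PASCAL VOC 2012 evaluation, `num_classes` would be 21 (20 object classes plus background).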