The PyTorch code for "Generalized Weakly Supervised Object Localization". The code is based on https://github.com/ZJULearning/AttentionZSL and https://github.com/junsukchoe/ADL . Thanks for their nice work!
The project requires one NVIDIA 1080 Ti GPU and the following environment:
python=3.6
pytorch=1.0.1
opencv-python=3.4.3.18
matplotlib=3.2.1
numpy=1.16.1
Pillow=6.1.0
PyYAML=5.3
scikit-learn=0.22.2.post1
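One way to pin the versions listed above is a conda environment file. The sketch below is an assumption for convenience, not a file shipped with this repo, and the channel choice and CUDA build of pytorch=1.0.1 may need adjusting for your machine:

```yaml
# Hypothetical environment.yml matching the versions listed above.
# The repo does not ship this file; channels are assumptions.
name: gwsol
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.6
  - pytorch=1.0.1
  - pip
  - pip:
      - opencv-python==3.4.3.18
      - matplotlib==3.2.1
      - numpy==1.16.1
      - Pillow==6.1.0
      - PyYAML==5.3
      - scikit-learn==0.22.2.post1
```

Create the environment with `conda env create -f environment.yml`.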
Experiments are conducted on the AwA2 (https://cvml.ist.ac.at/AwA2/) and CUB (http://www.vision.caltech.edu/visipedia/CUB-200.html) datasets. After downloading a dataset, put the data into "/your_home_root/gwsol/data/" with the following layout:
--AwA2
  --JPEGImages
  --proposed_split
  --classes.txt
  --predicate-matrix-continuous.txt
  ...
--CUB
  --JPEGImages
  --proposed_split
  --classes.txt
  --predicate-matrix-continuous.txt
  ...
CUB:
python experiments/run_trainer.py --cfg ./configs/hybrid/VGG19_CUB_PS_C.yaml
AwA2:
python experiments/run_trainer.py --cfg ./configs/hybrid/VGG19_AwA2_PS_C.yaml
Before testing your model, change "ckpt_name" in "/Your_Home_Root/gwsol/configs/hybrid/VGG19_CUB_PS_C.yaml" to the name of your trained checkpoint, e.g. "VGG19_CUB_PS_C_2021-03-02-13-46".
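The edit in the config file amounts to setting one key. The fragment below shows only that key; the surrounding schema of the YAML file is not reproduced here, and the timestamped value is the example name from above:

```yaml
# In configs/hybrid/VGG19_CUB_PS_C.yaml — point ckpt_name at your own
# trained checkpoint (the timestamp below is only an example).
ckpt_name: "VGG19_CUB_PS_C_2021-03-02-13-46"
```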
CUB:
C setting:
python experiments/run_evaluator_hybrid.py --cfg ./configs/hybrid/VGG19_CUB_PS_C.yaml
G setting:
python experiments/run_evaluator_hybrid.py --cfg ./configs/hybrid/VGG19_CUB_PS_G.yaml
AwA2:
C setting:
python experiments/run_evaluator_hybrid.py --cfg ./configs/hybrid/VGG19_AwA2_PS_C.yaml
G setting:
python experiments/run_evaluator_hybrid.py --cfg ./configs/hybrid/VGG19_AwA2_PS_G.yaml
To compute localization metrics on the AwA2 dataset, we manually annotated the AwA2 test set. You can download the annotations from Dropbox (https://www.dropbox.com/scl/fo/jbzry4jrad1800rkr71nb/h?dl=0&rlkey=wdnz6ptsedfl9umgpjvreolv9) and put them into the folder "You_Home_Root/gwsol/loc_evaluation/awa2/", e.g. "You_Home_Root/gwsol/loc_evaluation/awa2/test_seen_gt/".
If you have any questions about this project, please contact me at wyzeng2019@gmail.com.