This is the official implementation of the paper "Few-Shot Keypoint Detection with Uncertainty Learning for Unseen Species" (CVPR 2022).
For convenience, we show how to train and test the FSKD model on the Animal pose dataset only.
- Python 3.8.5
- PyTorch 1.7.0
- Download the dataset.
Since the official Animal pose dataset has been corrected multiple times by its authors to fix noisy annotations, the current official release differs from the version we used; the annotation format has also changed. We therefore uploaded the version of the Animal pose dataset that we used to the cloud; please use this one. The dataset should have the following folder structure:
|--Animal_Dataset_Combined
    |--gt
    |--images
    |--readme.txt
- Modify the dataset path in "annotation_prepare.py" and run the script to generate the local annotation files. An "annotation_prepare" folder will appear, containing five generated JSON files:
|--annotation_prepare
    |--cat.json
    |--dog.json
    |--cow.json
    |--horse.json
    |--sheep.json
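The internal schema of these JSON files is not documented here, so the sketch below only shows one way the five per-species files could be loaded into a dict keyed by species name; the `load_annotations` helper and its default folder argument are illustrative, not part of the released code.

```python
import json
from pathlib import Path

SPECIES = ['cat', 'dog', 'cow', 'horse', 'sheep']

def load_annotations(root='annotation_prepare'):
    """Load the per-species JSON files produced by annotation_prepare.py.

    Returns a dict mapping species name -> parsed JSON content for every
    species file that exists under `root`.
    """
    root = Path(root)
    anns = {}
    for sp in SPECIES:
        path = root / f'{sp}.json'
        if path.exists():
            with open(path) as f:
                anns[sp] = json.load(f)
    return anns
```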
- Generate saliency maps using the pre-trained saliency detector SCRN. The saliency maps are used to prune away auxiliary keypoints that fall outside the foreground region.
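The pruning step can be sketched as follows: keep only the candidate keypoints that land on salient (foreground) pixels. This is a minimal NumPy sketch under assumed shapes (keypoints as `(N, 2)` `(x, y)` coordinates, saliency map as an `(H, W)` array in `[0, 1]`); the function name and threshold are illustrative, not the repo's actual API.

```python
import numpy as np

def prune_keypoints_by_saliency(keypoints, saliency, thresh=0.5):
    """Keep keypoints whose pixel location is salient (foreground).

    keypoints: (N, 2) array of (x, y) pixel coordinates.
    saliency:  (H, W) map in [0, 1], e.g. from a detector such as SCRN.
    """
    h, w = saliency.shape
    kept = []
    for x, y in keypoints:
        xi, yi = int(round(float(x))), int(round(float(y)))
        # Discard points outside the image or on low-saliency (background) pixels.
        if 0 <= xi < w and 0 <= yi < h and saliency[yi, xi] > thresh:
            kept.append((x, y))
    return np.asarray(kept, dtype=float).reshape(-1, 2)
```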
- Modify 'saliency_maps_root' in the 'opts' dict and run 'main.py' to train the model.
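An illustrative sketch of what the edited `opts` dict might look like; only `saliency_maps_root` is named in this README, and the other keys and all paths are hypothetical placeholders.

```python
# Only 'saliency_maps_root' is documented above; the remaining keys are
# hypothetical placeholders to show the kind of paths that must be set.
opts = {
    'saliency_maps_root': '/path/to/saliency_maps',       # output of the SCRN step
    'dataset_root': '/path/to/Animal_Dataset_Combined',   # hypothetical key
    'annotation_root': './annotation_prepare',            # hypothetical key
}
```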
- Modify the paths in "eval.py" and run it.
- The test results may vary slightly across runs because evaluation is episodic. Moreover, FSKD is very challenging, so the detector may show some uncertainty when detecting novel keypoints. The scores become more stable as more episodes are tested.
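The stabilising effect of testing more episodes can be quantified by reporting a mean and an approximate 95% confidence interval over per-episode scores; the interval shrinks roughly as 1/sqrt(n) with the number of episodes. This helper is a generic statistics sketch, not part of the released code.

```python
import math

def episode_stats(scores):
    """Mean and approximate 95% confidence half-width over per-episode scores.

    Uses the normal approximation: half-width = 1.96 * sqrt(sample_var / n),
    so averaging over more episodes yields a tighter, more stable estimate.
    """
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)
    return mean, half
```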
- The current pipeline is a starting point for FSKD. We believe the code will improve along with future research progress on FSKD.
If you use our code in your research, please cite our paper. Many thanks!
@inproceedings{lu2022few,
title={Few-shot keypoint detection with uncertainty learning for unseen species},
author={Lu, Changsheng and Koniusz, Piotr},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={19416--19426},
year={2022}
}