
Learning Cross-Image Object Semantic Relation in Transformer for Few-Shot Fine-Grained Image Classification

If you find our code or paper useful for your research, please consider citing our work with the following BibTeX entry:

@inproceedings{zhang2022learning,
  title={Learning Cross-Image Object Semantic Relation in Transformer for Few-Shot Fine-Grained Image Classification},
  author={Zhang, Bo and Yuan, Jiakang and Li, Baopu and Chen, Tao and Fan, Jiayuan and Shi, Botian},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  pages={2135--2144},
  year={2022}
}

Abstract

Few-shot fine-grained learning aims to classify a query image into one of a set of support categories with fine-grained differences. Although learning different objects' local differences via Deep Neural Networks has achieved success, how to exploit the query-support cross-image object semantic relations in Transformer-based architecture remains under-explored in the few-shot fine-grained scenario. In this work, we propose a Transformer-based double-helix model, namely HelixFormer, to achieve the cross-image object semantic relation mining in a bidirectional and symmetrical manner. The HelixFormer consists of two steps: 1) Relation Mining Process (RMP) across different branches, and 2) Representation Enhancement Process (REP) within each individual branch. By the designed RMP, each branch can extract fine-grained object-level Cross-image Semantic Relation Maps (CSRMs) using information from the other branch, ensuring better cross-image interaction in semantically related local object regions. Further, with the aid of CSRMs, the developed REP can strengthen the extracted features for those discovered semantically-related local regions in each branch, boosting the model's ability to distinguish subtle feature differences of fine-grained objects. Extensive experiments conducted on five public fine-grained benchmarks demonstrate that HelixFormer can effectively enhance the cross-image object semantic relation matching for recognizing fine-grained objects, achieving much better performance over most state-of-the-art methods under 1-shot and 5-shot scenarios.
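For intuition only, the sketch below illustrates the bidirectional cross-attention idea behind RMP and REP in PyTorch. It is a simplified illustration, not the released HelixFormer implementation; the shared 1x1 projections, single attention head, and residual enhancement form are assumptions.

# Illustrative sketch only -- NOT the released HelixFormer code.
import torch
import torch.nn as nn

class BiCrossAttentionSketch(nn.Module):
    # Bidirectional cross-attention between a support and a query feature map,
    # followed by residual enhancement. Shared projections and a single
    # attention head are simplifying assumptions.
    def __init__(self, channels):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, 1)
        self.to_k = nn.Conv2d(channels, channels, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)

    def _tokens(self, x):
        # (B, C, H, W) -> (B, H*W, C)
        return x.flatten(2).transpose(1, 2)

    def forward(self, feat_s, feat_q):
        B, C, H, W = feat_s.shape
        q_s, k_s, v_s = (self._tokens(m(feat_s)) for m in (self.to_q, self.to_k, self.to_v))
        q_q, k_q, v_q = (self._tokens(m(feat_q)) for m in (self.to_q, self.to_k, self.to_v))

        # Relation mining: each branch attends to the other branch's tokens.
        attn_s = torch.softmax(q_s @ k_q.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, HW, HW)
        attn_q = torch.softmax(q_q @ k_s.transpose(1, 2) / C ** 0.5, dim=-1)
        rel_s = (attn_s @ v_q).transpose(1, 2).reshape(B, C, H, W)
        rel_q = (attn_q @ v_s).transpose(1, 2).reshape(B, C, H, W)

        # Representation enhancement: residually strengthen each branch with the
        # cross-image relation features it mined from the other branch.
        return feat_s + rel_s, feat_q + rel_q

# Example: 5x5 Conv-4-style feature maps with 64 channels.
s, q = torch.randn(1, 64, 5, 5), torch.randn(1, 64, 5, 5)
out_s, out_q = BiCrossAttentionSketch(64)(s, q)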

Code environment

This code requires PyTorch 1.7.1 and torchvision 0.8.2 or higher with CUDA support. It has been tested on Ubuntu 18.04.

For detailed dependencies, please see our requirements.txt.
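To quickly confirm the environment, you can run a check like the following (illustrative, not part of the repository):

# Quick environment sanity check (illustrative; not part of the repository).
import torch
import torchvision

print("torch:", torch.__version__)              # expected >= 1.7.1
print("torchvision:", torchvision.__version__)  # expected >= 0.8.2
print("CUDA available:", torch.cuda.is_available())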

Setting up data

You must first specify the value of DEFAULT_ROOT in ./datasets/datasets.py. This should be the absolute path of the directory containing the datasets.
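For example, the setting might look like this (the path is a placeholder; use your own absolute dataset path):

# In ./datasets/datasets.py -- the path below is only a placeholder.
DEFAULT_ROOT = '/absolute/path/to/your/datasets'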

The following datasets are used in our paper:

  • CUB
  • Stanford Dogs
  • Stanford Cars
  • NABirds
  • Aircraft

Train and Test

For fine-grained few-shot classification, we provide training and inference code for both HelixFormer and our Relation Network baseline, as reported in the paper.

Training a model consists of two stages:

  • Stage one: pretrain the backbone by running the following command:
python train_classifier.py

The dataset and backbone can be changed in ./configs/train_classifier.yaml.

  • Stage two: meta-train HelixFormer by running the following command:
python train_meta_helix_transformer.py 

The dataset, backbone, HelixTransformer model, and other configs can be changed in ./configs/train_helix_transformer.yaml.

The trained model can be tested by running the following command:

python test_helix_transformer.py

The dataset, model path, and other configs can be changed in ./configs/test_helix_transformer.yaml.
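For reference, these YAML configs can be inspected with PyYAML; a minimal sketch (the exact keys follow the repository's schema, which is not reproduced here):

# Minimal config inspection sketch; the printed keys depend on the repository's
# actual YAML schema, which is not reproduced here.
import yaml  # PyYAML

with open('./configs/test_helix_transformer.yaml') as f:
    cfg = yaml.safe_load(f)

print(cfg)  # e.g. dataset, backbone, and checkpoint path settings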

You can also train/test our meta baseline (Relation Network [1]) by running the following commands:

# train
python train_classifier.py
python train_baseline.py
# test
python test_baseline.py
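Meta-training and testing in this setting follow the N-way K-shot episodic protocol (the tables below report 1-shot and 5-shot results). Below is a minimal sketch of how one episode could be assembled; the sampler, class count, and query count are hypothetical, not the repository's own.

# Hypothetical N-way K-shot episode construction; this is NOT the repository's
# actual sampler, only an illustration of the episodic protocol.
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, n_query=15):
    """Pick n_way classes, then k_shot support and n_query query indices per class."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    support, query = [], []
    for c in random.sample(list(by_class), n_way):
        chosen = random.sample(by_class[c], k_shot + n_query)
        support += chosen[:k_shot]
        query += chosen[k_shot:]
    return support, query  # dataset indices forming one episode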

Selected few-shot classification results and Feature Visualization

Here we quote some performance comparisons from our paper on CUB, Stanford Cars, Stanford Dogs, NABirds, Aircraft and CUB → NABirds.

Table 1. Performance on Stanford Cars, Stanford Dogs, NABirds

| Method | Setting | Backbone | Stanford Dogs 1-shot | Stanford Dogs 5-shot | Stanford Cars 1-shot | Stanford Cars 5-shot | NABirds 1-shot | NABirds 5-shot |
|---|---|---|---|---|---|---|---|---|
| RelationNet (CVPR-18) | In. | Conv-4 | 43.29±0.46 | 55.15±0.39 | 47.79±0.49 | 60.60±0.41 | 64.34±0.81 | 77.52±0.60 |
| CovaMNet (AAAI-19) | In. | Conv-4 | 49.10±0.76 | 63.04±0.65 | 56.65±0.86 | 71.33±0.62 | 60.03±0.98 | 75.63±0.79 |
| DN4 (CVPR-19) | In. | Conv-4 | 45.73±0.76 | 66.33±0.66 | 61.51±0.85 | 89.60±0.44 | 51.81±0.91 | 83.38±0.60 |
| LRPABN (TMM-20) | In. | Conv-4 | 45.72±0.75 | 60.94±0.66 | 60.28±0.76 | 73.29±0.58 | 67.73±0.81 | 81.62±0.58 |
| MattML (IJCAI-20) | In. | Conv-4 | 54.84±0.53 | 71.34±0.38 | 66.11±0.54 | 82.80±0.28 | - | - |
| ATL-Net (IJCAI-20) | In. | Conv-4 | 54.49±0.92 | 73.20±0.69 | 67.95±0.84 | 89.16±0.48 | - | - |
| FRN (CVPR-21) | In. | Conv-4 | 49.37±0.20 | 67.13±0.17 | 58.90±0.22 | 79.65±0.15 | - | - |
| LSC+SSM (ACM MM-21) | In. | Conv-4 | 55.53±0.45 | 71.68±0.36 | 70.13±0.48 | 84.29±0.31 | 75.60±0.49 | 87.21±0.29 |
| Ours | In. | Conv-4 | 59.81±0.50 | 73.40±0.36 | 75.46±0.37 | 89.68±0.35 | 78.63±0.48 | 90.06±0.26 |
| LSC+SSM (ACM MM-21) | In. | ResNet-12 | 64.15±0.49 | 78.28±0.32 | 77.03±0.46 | 88.85±0.46 | 83.76±0.44 | 92.61±0.23 |
| Ours | In. | ResNet-12 | 65.92±0.49 | 80.65±0.36 | 79.40±0.43 | 92.26±0.15 | 84.51±0.41 | 93.11±0.19 |

Table 2. Performance on CUB

| Method | Setting | Backbone | CUB 1-shot | CUB 5-shot |
|---|---|---|---|---|
| FEAT (CVPR-20) | In. | Conv-4 | 68.87±0.22 | 82.90±0.15 |
| CTX (NIPS-20) | In. | Conv-4 | 69.64 | 87.31 |
| FRN (CVPR-21) | In. | Conv-4 | 73.48 | 88.43 |
| LSC+SSM (ACM MM-21) | In. | Conv-4 | 73.07±0.46 | 86.24±0.29 |
| Ours | In. | Conv-4 | 79.34±0.45 | 91.01±0.24 |
| DeepEMD (CVPR-20) | In. | ResNet-12 | 75.65±0.83 | 88.69±0.50 |
| ICI (CVPR-20) | In. | ResNet-12 | 76.16 | 90.32 |
| CTX (NIPS-20) | In. | ResNet-12 | 78.47 | 90.9 |
| FRN (Baseline) | In. | ResNet-12 | 80.80±0.20 | - |
| FRN (CVPR-21) | In. | ResNet-12 | 83.16 | 92.59 |
| LSC+SSM (ACM MM-21) | In. | ResNet-12 | 77.77±0.44 | 89.87±0.24 |
| Ours (Baseline) | In. | ResNet-12 | 72.61±0.47 | 85.60±0.29 |
| Ours | In. | ResNet-12 | 81.66±0.30 | 91.83±0.17 |

Table 3. Performance on Aircraft

| Method | Setting | Backbone | Aircraft 1-shot | Aircraft 5-shot |
|---|---|---|---|---|
| ProtoNet (NIPS-17) | In. | Conv-4 | 47.72 | 69.42 |
| DSN (CVPR-20) | In. | Conv-4 | 49.63 | 66.36 |
| CTX (NIPS-20) | In. | Conv-4 | 49.67 | 69.06 |
| FRN (CVPR-21) | In. | Conv-4 | 53.2 | 71.17 |
| Ours | In. | Conv-4 | 70.37±0.57 | 79.80±0.40 |
| ProtoNet (NIPS-17) | In. | ResNet-12 | 66.57 | 82.37 |
| DSN (CVPR-20) | In. | ResNet-12 | 68.16 | 81.85 |
| CTX (NIPS-20) | In. | ResNet-12 | 65.6 | 80.2 |
| FRN (CVPR-21) | In. | ResNet-12 | 70.17 | 83.81 |
| Ours | In. | ResNet-12 | 74.01±0.54 | 83.11±0.41 |

Table 4. Performance on CUB → NABirds

| Method | Backbone | CUB→NABirds 1-shot | CUB→NABirds 5-shot |
|---|---|---|---|
| LSC+SSM (Baseline) | ResNet-12 | 45.70±0.45 | 63.84±0.40 |
| LSC+SSM (ACM MM-21) | ResNet-12 | 48.50±0.48 | 66.35±0.41 |
| Ours (Baseline) | Conv-4 | 43.55±0.45 | 55.53±0.42 |
| Ours | Conv-4 | 47.87±0.47 | 61.47±0.41 |
| Ours (Baseline) | ResNet-12 | 46.22±0.45 | 63.23±0.42 |
| Ours | ResNet-12 | 50.56±0.48 | 66.13±0.41 |
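For reference, results of this kind are typically reported as mean accuracy over many test episodes together with a confidence half-width (the exact episode count and interval convention are not stated in this README). A sketch of that computation:

# Sketch: mean accuracy and confidence half-width over test episodes.
# The list of episode accuracies and the 95% interval are illustrative assumptions.
import math
import statistics

def summarize(episode_accs):
    mean = statistics.mean(episode_accs)
    half_width = 1.96 * statistics.stdev(episode_accs) / math.sqrt(len(episode_accs))
    return f"{100 * mean:.2f}±{100 * half_width:.2f}"

print(summarize([0.79, 0.81, 0.83, 0.78, 0.80]))  # -> '80.20±1.69'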

Feature Visualization

Please refer to the paper for qualitative feature visualization results.

Contact

We have tried our best to verify the correctness of our released data, code and trained model weights. However, there are a large number of experiment settings, all of which have been extracted and reorganized from our original codebase. There may be some undetected bugs or errors in the current release. If you encounter any issues or have questions about using this code, please feel free to contact us via bo.zhangzx@gmail.com and jkyuan18@fudan.edu.cn.

References

[1] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In CVPR.

[2] Wenbin Li, Jinglin Xu, Jing Huo, Lei Wang, Yang Gao, and Jiebo Luo. 2019. Distribution consistency based covariance metric networks for few-shot learning. In AAAI.

[3] Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, and Jiebo Luo. 2019. Revisiting local descriptor based image-to-class measure for few-shot learning. In CVPR.

[4] Yaohui Zhu, Chenlong Liu, and Shuqiang Jiang. 2020. Multi-attention meta learning for few-shot fine-grained image recognition. In IJCAI.

[5] Chuanqi Dong, Wenbin Li, Jing Huo, Zheng Gu, and Yang Gao. 2020. Learning Task-aware Local Representations for Few-shot Learning. In IJCAI.

[6] Davis Wertheimer, Luming Tang, and Bharath Hariharan. 2021. Few-Shot Classification With Feature Map Reconstruction Networks. In CVPR.

[7] Yike Wu, Bo Zhang, Gang Yu, Weixi Zhang, Bin Wang, Tao Chen, and Jiayuan Fan. 2021. Object-aware Long-short-range Spatial Alignment for Few-Shot Fine-Grained Image Classification. In Proceedings of the 29th ACM International Conference on Multimedia. 107–115.

[8] Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. 2020. Few-shot learning via embedding adaptation with set-to-set functions. In CVPR.

[9] Carl Doersch, Ankush Gupta, and Andrew Zisserman. 2020. Crosstransformers: spatially-aware few-shot transfer. In NeurIPS.

[10] Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. In NeurIPS.

[11] Chi Zhang, Yujun Cai, Guosheng Lin, and Chunhua Shen. 2020. DeepEMD: Few-Shot Image Classification With Differentiable Earth Mover’s Distance and Structured Classifiers. In CVPR.

[12] Yikai Wang, Chengming Xu, Chen Liu, Li Zhang, and Yanwei Fu. 2020. Instance credibility inference for few-shot learning. In CVPR.

[13] Christian Simon, Piotr Koniusz, Richard Nock, and Mehrtash Harandi. 2020. Adaptive subspaces for few-shot learning. In CVPR.
