CrossmodalGroup/NAAF

Introduction

This is the source code of NAAF, the Negative-Aware Attention Framework for Image-Text Matching. The paper was accepted at CVPR 2022. Download Paper.

A Chinese blog post about this work can be found here. The code is built on top of SCAN in PyTorch.

Our follow-up work in this series, based on optimal discriminative learning, 'Unified Adaptive Relevance Distinguishable Attention Network for Image-Text Matching', is published in IEEE TMM. The paper can be downloaded here.


Requirements and Installation

We recommend the following dependencies.

Pretrained model

If you don't want to train from scratch, you can download the pretrained NAAF model from here (Flickr30K model) and here (Flickr30K model without GloVe). The performance of this pretrained single model is as follows; some Recall@1 values are even better than the results reported in our paper:

rsum: 507.9
Average i2t Recall: 91.3
Image to text (R@1, R@5, R@10, medr, meanr): 80.6 95.4 98.0 1.0 2.0
Average t2i Recall: 78.0
Text to image (R@1, R@5, R@10, medr, meanr): 60.0 83.9 89.9 1.0 7.4
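The five numbers in each direction follow the standard SCAN-style log format: R@1, R@5, R@10, median rank, and mean rank. As an illustration of how such retrieval metrics are typically computed from an image-by-caption similarity matrix, here is a hedged sketch (function name and layout are illustrative, not the NAAF evaluation code; it assumes 5 captions per image, stored in image order):

```python
import numpy as np

def i2t_metrics(sims):
    """Image-to-text retrieval metrics from a (n_images, n_captions)
    similarity matrix, assuming 5 captions per image in image order."""
    n_images = sims.shape[0]
    ranks = np.zeros(n_images)
    for i in range(n_images):
        order = np.argsort(sims[i])[::-1]          # captions, best first
        gt = np.arange(5 * i, 5 * i + 5)           # ground-truth captions
        # rank of the best-ranked ground-truth caption
        ranks[i] = min(np.where(np.isin(order, gt))[0])
    r1 = 100.0 * np.mean(ranks < 1)
    r5 = 100.0 * np.mean(ranks < 5)
    r10 = 100.0 * np.mean(ranks < 10)
    medr = np.median(ranks) + 1                    # median rank (1-based)
    meanr = ranks.mean() + 1                       # mean rank (1-based)
    return r1, r5, r10, medr, meanr
```

The text-to-image direction is analogous, ranking images for each caption.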

Download data

Download the dataset files. We use the image features extracted by SCAN. The vocabulary required by GloVe has been placed in the 'vocab' folder of the project (for both Flickr30K and MSCOCO).

You can download the dataset through Baidu Cloud. Download links are Flickr30K and MSCOCO, the extraction code is: USTC.
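The SCAN feature files follow a simple layout: each split provides an *_ims.npy array of per-image region features (to the best of our knowledge, 36 bottom-up-attention regions of 2048 dimensions each for these datasets) and a *_caps.txt file with five captions per image. A quick sanity-check sketch using dummy stand-ins for the real files:

```python
import numpy as np

# Dummy stand-ins for a real split such as dev_ims.npy / dev_caps.txt.
# Assumed shapes: 36 regions x 2048 dims per image, 5 captions per image.
n_images = 4
ims = np.random.rand(n_images, 36, 2048).astype(np.float32)
caps = ["a caption for image %d" % (i // 5) for i in range(5 * n_images)]

# A SCAN-style loader pairs caption j with image j // 5.
assert ims.shape[1:] == (36, 2048)
assert len(caps) == 5 * ims.shape[0]
```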

Performance

We provide our NAAF model performance (single or ensemble) under different text backbones, so that readers can choose the appropriate setting for a fair comparison.

Training

python train.py --data_path "$DATA_PATH" --data_name f30k_precomp --vocab_path "$VOCAB_PATH" --logger_name runs/log --logg_path runs/runX/logs --model_name "$MODEL_PATH" 

Arguments used to train Flickr30K models and MSCOCO models are similar to those of SCAN:

For Flickr30K:

| Method | Arguments |
| ------ | --------- |
| NAAF | --lambda_softmax=20 --num_epoches=20 --lr_update=10 --learning_rate=.0005 --embed_size=1024 --batch_size=128 |

For MSCOCO:

| Method | Arguments |
| ------ | --------- |
| NAAF | --lambda_softmax=20 --num_epoches=20 --lr_update=10 --learning_rate=.0005 --embed_size=1024 --batch_size=256 |
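The --lambda_softmax=20 flag is the inverse temperature used in SCAN-style cross attention, which NAAF builds on: word-region cosine similarities are multiplied by lambda before the softmax, so larger values yield sharper attention. A minimal sketch of this scaled attention (illustrative, not the repository's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(words, regions, lambda_softmax=20.0):
    """Attend each word over the image regions.
    words: (n_words, d), regions: (n_regions, d), both L2-normalized.
    Returns the attended region vector for each word, shape (n_words, d)."""
    sim = words @ regions.T                       # cosine similarities
    attn = softmax(lambda_softmax * sim, axis=1)  # sharper for larger lambda
    return attn @ regions
```

As lambda_softmax approaches 0 the weights become uniform (plain averaging over regions); at 20 they are strongly peaked on the best-matching region.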

Evaluation

Test on Flickr30K

python test.py

To do cross-validation on MSCOCO, pass fold5=True with a model trained using --data_name coco_precomp.

python testall.py
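With fold5=True, the common protocol (as in SCAN) splits the 5K MSCOCO test images into five 1K folds, evaluates each fold separately, and averages the results. A sketch of that folding logic over a precomputed similarity matrix (names are illustrative, not the testall.py internals):

```python
import numpy as np

def fold5_average(sims, metric_fn, n_folds=5):
    """Average a metric over diagonal folds of an image-by-caption
    similarity matrix (e.g. 5000 x 25000 for the MSCOCO 5K test set,
    with 5 captions per image)."""
    n_im = sims.shape[0] // n_folds
    n_cap = sims.shape[1] // n_folds
    scores = []
    for k in range(n_folds):
        fold = sims[n_im * k:n_im * (k + 1), n_cap * k:n_cap * (k + 1)]
        scores.append(metric_fn(fold))
    return float(np.mean(scores))
```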

To ensemble models, specify the model_path in test_stack.py and run

python test_stack.py
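A common way to ensemble retrieval models is to average the similarity matrices produced by the individual models before computing the recall metrics. A hedged sketch of that averaging step (the actual procedure in test_stack.py may differ):

```python
import numpy as np

def ensemble_sims(sims_list):
    """Average (n_images, n_captions) similarity matrices from several
    independently trained models into a single matrix for evaluation."""
    return np.stack(sims_list).mean(axis=0)
```

Metrics are then computed on the averaged matrix exactly as for a single model.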

Reference

If you find this code useful, please cite the following papers:

@inproceedings{zhang2022negative,
  title={Negative-Aware Attention Framework for Image-Text Matching},
  author={Zhang, Kun and Mao, Zhendong and Wang, Quan and Zhang, Yongdong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15661--15670},
  year={2022}
}
@article{zhang2022unified,
  title={Unified adaptive relevance distinguishable attention network for image-text matching},
  author={Zhang, Kun and Mao, Zhendong and Liu, An-An and Zhang, Yongdong},
  journal={IEEE Transactions on Multimedia},
  volume={25},
  pages={1320--1332},
  year={2022},
  publisher={IEEE}
}

