Official Code of "GeReA: Question-Aware Prompt Captions for Knowledge-based Visual Question Answering"

GeReA

GeReA is a method for the knowledge-based VQA task, as described in our paper (arXiv:2402.02503).

Knowledge-based visual question answering (VQA) requires world knowledge beyond the image for accurate answers. Recently, instead of relying on extra knowledge bases, a large language model (LLM) such as GPT-3 has been activated as an implicit knowledge engine to jointly acquire and reason over the necessary knowledge for answering, by converting images into textual information (e.g., captions and answer candidates). However, such conversion may introduce irrelevant information, which causes the LLM to misinterpret images and ignore visual details that are crucial for accurate knowledge. We argue that a multimodal large language model (MLLM) is a better implicit knowledge engine than an LLM because of its superior capability for visual understanding. Despite this, how to activate the capacity of an MLLM as an implicit knowledge engine has not yet been explored. Therefore, we propose GeReA, a generate-reason framework that prompts an MLLM such as InstructBLIP with question-relevant vision and language information to generate knowledge-relevant descriptions, and reasons over those descriptions for knowledge-based VQA. Specifically, the question-relevant image regions and question-specific manual prompts are encoded by the MLLM to generate the knowledge-relevant descriptions, referred to as question-aware prompt captions. After that, the question-aware prompt captions, the image-question pair, and similar samples are fed into a multimodal reasoning model to learn a joint knowledge-image-question representation for answer prediction. GeReA unlocks the use of the MLLM as an implicit knowledge engine, surpassing all previous state-of-the-art methods on the OK-VQA and A-OKVQA datasets, with test accuracies of 66.5% and 63.3%, respectively.
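The two-stage generate-reason pipeline can be summarized with the conceptual sketch below. The classes and function names are illustrative stand-ins only, not the actual models or APIs used in this repository:

from typing import List

# Illustrative stand-ins for the MLLM and the multimodal reasoning model.
class DummyMLLM:
    def generate(self, region: str, prompt: str) -> str:
        return f"caption for {region} given prompt '{prompt}'"

class DummyReasoner:
    def predict(self, image: str, question: str, contexts: List[str]) -> str:
        return "predicted answer"

def gerea_pipeline(mllm, reasoner, image, regions, question, similar_samples):
    # Stage 1 (generate): prompt the MLLM with question-relevant image regions
    # and a question-specific prompt to obtain question-aware prompt captions.
    prompt = f"Question: {question} Describe the content relevant to this question."
    captions = [mllm.generate(region, prompt) for region in regions]
    # Stage 2 (reason): fuse the prompt captions, the image-question pair, and
    # similar samples in a multimodal reasoning model to predict the answer.
    return reasoner.predict(image=image, question=question,
                            contexts=captions + similar_samples)

answer = gerea_pipeline(DummyMLLM(), DummyReasoner(),
                        image="image.jpg", regions=["region_1", "region_2"],
                        question="What sport is being played?",
                        similar_samples=["similar QA pair 1"])
print(answer)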

Many thanks for your attention to our work!

If you find our project helpful for your research, please kindly give us a 🌟 and cite our paper 📑 :)

Citation

@article{ma2024gerea,
  title={GeReA: Question-Aware Prompt Captions for Knowledge-based Visual Question Answering},
  author={Ma, Ziyu and Li, Shutao and Sun, Bin and Cai, Jianfei and Long, Zuxiang and Ma, Fuyan},
  journal={arXiv preprint arXiv:2402.02503},
  year={2024}
}

Getting Started

Installation

To set up the environment, run the following commands in a shell:

git clone https://github.com/Upper9527/GeReA.git
cd GeReA
conda env create -f requirements.yaml
conda activate gerea

This creates the conda environment gerea that we used.

Download data

We provide the pre-processed data, i.e., captions generated by InstructBLIP and LLaVA-1.5, similar samples, and visual features. You can find them in the okvqa_dataset directory. For the visual features, please download them with the following commands:

pip install gdown
gdown https://drive.google.com/uc?id=1K_HFM781uuuj5VL2kXyCHczlQ9fcTSNy
gdown https://drive.google.com/uc?id=1MdQMW2MATusrmdjZ9yzEFKE8LhsSJJkc

This downloads two .npy files, detr_encoded_train2014_dic.npy and detr_encoded_val2014_dic.npy. Please place both files in the okvqa_dataset directory:

GeReA
├── ...
├── okvqa_dataset
│   ├── detr_encoded_train2014_dic.npy
│   ├── detr_encoded_val2014_dic.npy
└── ...
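
If you want to inspect these files, the sketch below shows one way to load them, assuming each .npy file stores a pickled Python dictionary of DETR-encoded visual features keyed by image id (the exact key format and feature shape may differ):

import numpy as np

# Load the DETR-encoded visual features (each .npy file holds a pickled dict).
train_feats = np.load("okvqa_dataset/detr_encoded_train2014_dic.npy", allow_pickle=True).item()
val_feats = np.load("okvqa_dataset/detr_encoded_val2014_dic.npy", allow_pickle=True).item()

print(len(train_feats), "training images,", len(val_feats), "validation images")
# Inspect one entry to check the feature shape.
first_key = next(iter(train_feats))
print(first_key, np.asarray(train_feats[first_key]).shape)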

Pre-trained model

| Model | Description | Accuracy (%) | Weight | Log |
| --- | --- | --- | --- | --- |
| GeReA (Single) | InstructBLIP + LLaVA-1.5 | 65.4 | model.zip (coming soon) | run.log |

For model ensembling, you can train three models with different seeds and, for each sample, take the answer that occurs most frequently among the three models' predictions (majority voting); please refer to ensemble.py. A minimal voting sketch is shown below.
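
The following is a simplified majority-voting sketch, not the actual ensemble.py; the prediction file names and the JSON layout (a list of {"question_id": ..., "answer": ...} records in the same order across runs) are assumptions:

import json
from collections import Counter

# Hypothetical prediction files from three runs with different seeds.
files = ["prediction_seed1.json", "prediction_seed2.json", "prediction_seed3.json"]
runs = [json.load(open(f)) for f in files]

ensemble = []
for preds in zip(*runs):  # assumes identical sample ordering across files
    qid = preds[0]["question_id"]
    # Pick the most frequent answer; ties fall back to the first model's answer.
    answer = Counter(p["answer"] for p in preds).most_common(1)[0][0]
    ensemble.append({"question_id": qid, "answer": answer})

json.dump(ensemble, open("prediction_ensemble.json", "w"))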

Prediction results

The prediction results of "single" and "ensemble" versions are shared:

| Model | Accuracy (%) | Download |
| --- | --- | --- |
| GeReA (Single) | 65.4 | prediction65.4.json |
| GeReA (Ensemble) | 66.5 | prediction66.5.json |

Train the model

Run the following command to start training (an example for 2 × A100 40GB GPUs):

NGPU=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 10845 train.py \
	--use_checkpoint \
	--lr 5e-5 \
	--model_size large \
	--num_workers 8 \
	--optim adamw \
	--scheduler linear \
	--weight_decay 0.01 \
	--save_freq 10000 \
	--eval_freq 10000 \
	--print_freq 100 \
	--text_maxlength 400 \
	--seed 833 \
	--name exp \
	--checkpoint_dir ./checkpoints_InstructBLIP+LLaVA-1.5 \
	--per_gpu_batch_size 1 \
	--total_step 20000 \
	--warmup_step 1000 

The total training time is about 48 hours on 2 × A100 (40GB) GPUs.

Test the trained model

Run the following command to start evaluation:

CUDA_VISIBLE_DEVICES=2 python test.py --eval_data processed_data/test.pkl \
	--model_size large \
	--per_gpu_batch_size 1 \
	--num_workers 8 \
	--text_maxlength 300 \
	--checkpoint_dir ./checkpoint/ \
	--seed 833 \
	--name eval \
	--model_path checkpoints_two_card_150/exp/checkpoint/best_dev/ \
	--write_results

This will not only output the final accuracy but also write the predictions to "prediction.json" under the specified checkpoint directory.

Test with a JSON file

If your prediction file is named "prediction.json", run the following command to evaluate it against the ground-truth annotations:

python leaderboard_evaluation.py --pred_path prediction.json \
          --gt_path eval/mscoco_val2014_annotations.json
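
For reference, OK-VQA scoring follows the standard VQA accuracy metric: an answer matching n of the ten human annotations scores min(n/3, 1), averaged over the leave-one-out subsets of annotators. The snippet below is a simplified sketch of that metric (it omits the answer normalization applied by the official evaluation code) and is not the exact logic of leaderboard_evaluation.py:

def vqa_accuracy(predicted_answer, human_answers):
    """Standard VQA accuracy: min(#matching human answers / 3, 1),
    averaged over all leave-one-out subsets of the 10 annotations."""
    scores = []
    for i in range(len(human_answers)):
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(a == predicted_answer for a in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 2 of 10 annotators answered "umbrella" -> accuracy 0.6
print(vqa_accuracy("umbrella", ["umbrella"] * 2 + ["parasol"] * 8))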

Experimental Results

Comparison with previous methods

[Figure: comparison with previous methods]

Example visualization

[Figure: example visualization]

Contact

If your questions are not answered in a timely manner, please send an email to maziyu@hnu.edu.cn.

Acknowledgements

Our code is built on FiD, which is released under the LICENSE.
