## Evaluation Instruction for MiniGPT-v2

### Data preparation

#### Images download

| Image source | Download path |
| --- | --- |
| OKVQA | annotations, images |
| GQA | annotations, images |
| Hateful Memes | images and annotations |
| IconQA | images and annotations |
| VizWiz | images and annotations |
| RefCOCO | annotations |
| RefCOCO+ | annotations |
| RefCOCOg | annotations |

#### Evaluation dataset structure

```
${MINIGPTv2_EVALUATION_DATASET}
├── gqa
│   ├── test_balanced_questions.json
│   ├── testdev_balanced_questions.json
│   └── gqa_images
├── hateful_meme
│   ├── hm_images
│   └── dev.jsonl
├── iconvqa
│   ├── iconvqa_images
│   └── choose_text_val.json
├── vizwiz
│   ├── vizwiz_images
│   └── val.json
├── vsr
│   └── vsr_images
├── okvqa
│   ├── okvqa_test_split.json
│   ├── mscoco_val2014_annotations_clean.json
│   └── OpenEnded_mscoco_val2014_questions_clean.json
├── refcoco
│   ├── instances.json
│   ├── refs(google).p
│   └── refs(unc).p
├── refcoco+
│   ├── instances.json
│   └── refs(unc).p
├── refcocog
│   ├── instances.json
│   ├── refs(google).p
│   └── refs(umd).p
...
```
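One way to prepare this layout is to pick a root folder, export it as `MINIGPTv2_EVALUATION_DATASET`, and create the per-dataset subfolders before copying in the downloaded files. The root path below is a placeholder and the snippet is only an illustrative sketch of the structure above.

```bash
# Placeholder root folder for all evaluation data; replace with your own path.
export MINIGPTv2_EVALUATION_DATASET=/path/to/minigptv2_evaluation_dataset

# Create the per-dataset folders shown in the structure above.
mkdir -p ${MINIGPTv2_EVALUATION_DATASET}/{gqa/gqa_images,hateful_meme/hm_images,iconvqa/iconvqa_images,vizwiz/vizwiz_images,vsr/vsr_images,okvqa,refcoco,refcoco+,refcocog}
```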

### Environment setup

```bash
export PYTHONPATH=$PYTHONPATH:/path/to/directory/of/MiniGPT-4
```

### Config file setup

In `minigpt4/eval_configs/minigptv2_benchmark_evaluation.yaml`:

- Set `llama_model` to the path of the LLaMA model.
- Set `ckpt` to the path of our pretrained model.
- Set `eval_file_path` to the path of the annotation file for each evaluation dataset.
- Set `img_path` to the image folder path for each evaluation dataset.
- Set `save_path` to the output path where results for each evaluation dataset are saved.
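The exact keys and nesting come from the YAML shipped in the repo; the fragment below is only a hedged sketch of the fields named above. The paths are placeholders and the section names are assumptions to adapt to the real file.

```yaml
# Illustrative sketch only; mirror the structure of the actual
# minigpt4/eval_configs/minigptv2_benchmark_evaluation.yaml.
model:
  llama_model: /path/to/llama_checkpoint        # path of the LLaMA model
  ckpt: /path/to/minigptv2_checkpoint.pth       # path of the pretrained MiniGPT-v2 checkpoint

evaluation_datasets:
  okvqa:                                        # one block per evaluation dataset
    eval_file_path: /path/to/okvqa/okvqa_test_split.json   # annotation file
    img_path: /path/to/okvqa_images                         # image folder

run:
  save_path: /path/to/save/evaluation_results   # where predictions are written
```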

### Start evaluating RefCOCO, RefCOCO+, RefCOCOg

```bash
port=port_number
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml
```

Dataset names: `refcoco`, `refcoco+`, `refcocog`

```bash
torchrun --master-port ${port} --nproc_per_node 1 eval_ref.py \
 --cfg-path ${cfg_path} --dataset refcoco,refcoco+,refcocog --resample
```
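For example, assuming the `--dataset` flag accepts any comma-separated subset of the names above, a single-GPU run on RefCOCO alone could look like this (the port and paths are placeholders):

```bash
port=29500   # any free port
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml

torchrun --master-port ${port} --nproc_per_node 1 eval_ref.py \
  --cfg-path ${cfg_path} --dataset refcoco --resample
```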

### Start evaluating visual question answering

```bash
port=port_number
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml
```

Dataset names: `okvqa`, `vizwiz`, `iconvqa`, `gqa`, `vsr`, `hm`

```bash
torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
 --cfg-path ${cfg_path} --dataset okvqa,vizwiz,iconvqa,gqa,vsr,hm
```
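Likewise, assuming the `--dataset` flag accepts a subset of these names, you could evaluate only OKVQA and GQA (the port and paths are placeholders):

```bash
port=29500   # any free port
cfg_path=/path/to/eval_configs/minigptv2_benchmark_evaluation.yaml

torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
  --cfg-path ${cfg_path} --dataset okvqa,gqa
```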