QLEVR Dataset Generation

This is the code used to generate the QLEVR dataset as described in the paper:

QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning

Presented at Findings of NAACL 2022

The QLEVR dataset can be found here.

Abstract:
Synthetic datasets have successfully been used to probe visual question-answering datasets for their reasoning abilities. CLEVR, for example, tests a range of visual reasoning abilities. The questions in CLEVR focus on comparisons of shapes, colors, and sizes, numerical reasoning, and existence claims. This paper introduces a minimally biased, diagnostic visual question-answering dataset, QLEVR, that goes beyond existential and numerical quantification and focuses on more complex quantifiers and their combinations, e.g., asking whether there are more than two red balls that are smaller than at least three blue balls in an image. We describe how the dataset was created and present a first evaluation of state-of-the-art visual question-answering models, showing that QLEVR presents a formidable challenge to our current models.

You can use this code to generate 2D images, render synthetic 3D images, and generate questions and answers for those images.

If you find this code useful, please cite it as:

@inproceedings{li2022qlevr,
  title={QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning},
  author={Li, Zechen and Søgaard, Anders},
  booktitle={Findings of NAACL},
  year={2022}
}

Step 1: Generating 2D Images

First, we construct a scene graph for a two-dimensional image containing areas and objects of different sizes and shapes. You can generate some 2D images with the following script:

python gen_2d.py --split 'train' --start_idx 0 --num_images 100 --output_dir output/2d_scene/

After this command terminates, you should have 100 images stored in output/2d_scene/train/full_images.


The file output/2d_scene/train/scenes_2d_train.json will contain ground-truth locations, bounding boxes, attributes, and relationships for the planes and objects in these images.
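
If you want to sanity-check the generated annotations, you can load the JSON from Python. This is a minimal sketch; the file's exact schema is not documented here, so the script only inspects the top level rather than assuming specific key names:

import json

# Quick sanity check of the 2D scene annotations written by gen_2d.py.
with open("output/2d_scene/train/scenes_2d_train.json") as f:
    scenes_2d = json.load(f)

# The exact layout is not documented here, so inspect the top level first.
if isinstance(scenes_2d, dict):
    print("top-level keys:", list(scenes_2d.keys()))
else:
    print("number of scene records:", len(scenes_2d))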

Step 2: Generating 3D Images

Second, we render synthetic 3D images using Blender 2.93.

Here is a tutorial on running command-line Blender renders on Google Colab for fast rendering; you can find my Colab rendering code here.
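
If you prefer to render locally rather than on Colab, Blender 2.93 can also be driven headlessly from Python. The sketch below only relies on Blender's standard --background and --python flags; render_3d.py and the arguments after the -- separator are placeholders for whichever rendering script and options you actually use, not the repository's documented interface:

import subprocess

# Run Blender in background (headless) mode and have it execute a rendering
# script. Everything after "--" is forwarded to that script, not to Blender.
# "render_3d.py" and its arguments are placeholders, not the repository's
# actual script name or flags.
subprocess.run(
    [
        "blender", "--background",
        "--python", "render_3d.py",
        "--",
        "--input_scene_file", "output/2d_scene/train/scenes_2d_train.json",
        "--output_dir", "output/3d_scene/train/",
    ],
    check=True,
)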

You should then have rendered images stored in output/3d_scene/train.


The file output/3d_scene/train/scenes_3d_train.json will contain rendering settings, ground-truth locations, bounding boxes, attributes, and relationships for the planes and objects in these images.

Step 3: Generating Questions

Finally, we generate questions and answers for the images rendered in the previous steps. This step takes the JSON file scenes_3d_train.json, which contains all ground-truth scene information, as input. It outputs a JSON file questions_train.json containing the questions, their answers, and the operators for each question, and a JSON file storage_train.json that records the running distribution of question templates used.

cd question_generation
python gen_qa.py --input_scene_file 'output/3d_scene/train/scenes_3d_train.json' \
                 --output_questions_file 'output/3d_scene/train/questions_train.json' \
                 --output_storage_file 'output/3d_scene/train/storage_train.json' \
                 --save_times 200 --num_retries 1000 
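
Once gen_qa.py finishes, you can spot-check the generated question-answer pairs. This is a minimal sketch; the "questions", "question", and "answer" key names are assumptions about the output schema, so adjust them after inspecting the file:

import json

# Spot-check the generated questions and answers.
with open("output/3d_scene/train/questions_train.json") as f:
    qa = json.load(f)

# Assumed schema: a top-level "questions" list of dicts with "question" and
# "answer" fields; fall back to the raw object otherwise.
questions = qa.get("questions", qa) if isinstance(qa, dict) else qa
for q in list(questions)[:3]:
    if isinstance(q, dict):
        print(q.get("question"), "->", q.get("answer"))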
