Download links and PyTorch implementation of "Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision", ICCV 2021.
Towers of Babel: Combining Images, Language, and 3D Geometry for Learning Multimodal Vision
Xiaoshi Wu, Hadar Averbuch-Elor, Jin Sun, Noah Snavely. ICCV 2021.
- Image and Textual Descriptions: WikiScenes contains 63K images with captions of 99 cathedrals. We provide two versions for download:
  - Low-res version used in our experiments (width set to 200px, aspect ratio fixed): (1.9GB .zip file)
  - Higher-res version (longer dimension set to 1200px, aspect ratio fixed): (16.4GB .zip file)
Licenses for the images are provided here: (LicenseInfo.json file)
Data Structure
WikiScenes is organized recursively, following the tree structure in Wikimedia. Each semantic category (e.g. cathedral) contains the following recursive structure:
```
----0 (e.g., "milano cathedral duomo milan milano italy italia")
--------0 (e.g., "Exterior of the Duomo (Milan)")
----------------0 (e.g., "Duomo (Milan) in art - exterior")
----------------1
----------------...
----------------K0-0
----------------category.json
----------------pictures (contains all pictures in current hierarchy level)
--------1
--------...
--------K0
--------category.json
--------pictures (contains all pictures in current hierarchy level)
----1
----2
----...
----N
----category.json
```
category.json is a dictionary of the following format:
{ "max_index": SUB-DIR-NUMBER "pairs" : { CATEGORY-NAME: SUB-DIR-NAME } "pictures" : { PICTURE-NAME: { "caption": CAPTION-DATA, "url": URL-DATA, "properties": PROPERTIES } } }
where:
- SUB-DIR-NUMBER is the total number of subcategories
- CATEGORY-NAME is the name of the category (e.g., "milano cathedral duomo milan milano italy italia")
- SUB-DIR-NAME is the name of the sub-folder (e.g., "0")
- PICTURE-NAME is the name of the jpg file located within the pictures folder
- CAPTION-DATA contains the caption and URL-DATA contains the URL from which the image was scraped.
- PROPERTIES is a list of properties pre-computed for the image-caption pair (e.g. estimated language of caption).
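For convenience, here is a minimal sketch of how the recursive structure can be traversed to collect image-caption pairs. The root path used in the example is an assumption, and the exact type of CAPTION-DATA may vary; adjust both to your local copy of the data.

```python
import json
import os

def iter_image_captions(category_dir):
    """Recursively yield (image_path, caption) pairs from a WikiScenes category folder."""
    json_path = os.path.join(category_dir, "category.json")
    if not os.path.isfile(json_path):
        return
    with open(json_path, "r", encoding="utf-8") as f:
        meta = json.load(f)
    # Pictures stored at the current hierarchy level.
    for picture_name, info in meta.get("pictures", {}).items():
        yield os.path.join(category_dir, "pictures", picture_name), info["caption"]
    # Recurse into the numbered sub-folders listed under "pairs".
    for sub_dir in meta.get("pairs", {}).values():
        yield from iter_image_captions(os.path.join(category_dir, sub_dir))

# Example usage (root path is an assumption):
for image_path, caption in iter_image_captions("data/cathedrals"):
    print(image_path, caption)
    break
```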
- Keypoint correspondences: We also provide keypoint correspondences between pixels of images from the same landmark: (982MB .zip file)
Data Structure
{ "image_id" : { "kp_id": (x, y), } }
where:
- image_id is the id of each image.
- kp_id is the id of a keypoint, which is unique across the whole dataset.
- (x, y) is the location of the keypoint in this image.
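As a rough illustration, the sketch below loads the correspondence file and inverts it into a mapping from keypoint id to the images (and pixel locations) that observe it. The file name is an assumption; point it at the JSON file from the download above.

```python
import json
from collections import defaultdict

# File name is an assumption; use the JSON file from the correspondences download.
with open("correspondences.json", "r") as f:
    correspondences = json.load(f)

# Invert the mapping: kp_id -> list of (image_id, (x, y)) observations.
# Two images have a pixel correspondence wherever they share a kp_id.
observations = defaultdict(list)
for image_id, keypoints in correspondences.items():
    for kp_id, (x, y) in keypoints.items():
        observations[kp_id].append((image_id, (x, y)))

print(len(observations), "unique keypoints across the dataset")
```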
- COLMAP reconstructions: We provide the full 3D models used for computing keypoint correspondences: (1GB .zip file)
To view these models, download and install COLMAP. The reconstructions are organized by landmarks. Each landmark folder contains all the reconstructions associated with that landmark. Each reconstruction contains 3 files:
- points3d.txt that contains one line of data for each 3D point associated with the reconstruction. The format for each point is: POINT3D_ID, X, Y, Z, R, G, B, ERROR, TRACK[] as (IMAGE_ID, POINT2D_IDX).
- images.txt that contains two lines of data for each image associated with the reconstruction. The format of the first line is: IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME. The format of the second line is: POINTS2D[] as (X, Y, POINT3D_ID)
- cameras.txt that contains one line of data for each camera associated with the reconstruction according to the following format: CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
Please refer to COLMAP's tutorial for further instructions on how to view these reconstructions.
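If you prefer to read the reconstructions programmatically rather than through the COLMAP GUI, the sketch below parses a points3d.txt file according to the format above. This is a minimal reader written for illustration, not COLMAP's own I/O scripts, and the example path is an assumption.

```python
def load_points3d(path):
    """Minimal parser for a COLMAP points3d.txt file (format described above)."""
    points = {}
    with open(path, "r") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue  # skip header comments and blank lines
            elems = line.split()
            point3d_id = int(elems[0])
            xyz = tuple(map(float, elems[1:4]))
            rgb = tuple(map(int, elems[4:7]))
            error = float(elems[7])
            # Remaining elements form the track: (IMAGE_ID, POINT2D_IDX) pairs.
            track = [(int(elems[i]), int(elems[i + 1])) for i in range(8, len(elems), 2)]
            points[point3d_id] = {"xyz": xyz, "rgb": rgb, "error": error, "track": track}
    return points

# Example usage (path is an assumption; point it at one reconstruction folder):
# points = load_points3d("reconstructions/some_landmark/0/points3d.txt")
```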
- Companion datasets for additional landmark categories: We provide download links for additional category types:
  - Synagogues: Images and captions (4.1GB .zip file), correspondences (47MB json file), license file (2.1MB json file)
  - Mosques
- Minimum requirements. This project was originally developed with Python 3.6, PyTorch 1.0, and CUDA 9.0. Training requires at least one Titan X GPU (12GB memory).
- Set up your Python environment. Clone the repository and install the dependencies:

  ```
  conda create -n <environment_name> --file requirements.txt -c conda-forge/label/cf202003
  conda activate <environment_name>
  conda install scikit-learn=0.21
  pip install opencv-python
  ```
- Download the dataset. Download the data as detailed above, unzip, and place it as follows: the image and textual descriptions in `<project>/data/`, and the correspondence file in `<project>`.
- Download pre-trained models. Download the initial weights (pre-trained on ImageNet) for the backbone model and place them in `<project>/models/weights/` (a quick sanity check for loading these weights is sketched after this list).

  Backbone | Initial Weights | Comments |
  ---|---|---|
  ResNet50 | resnet50-19c8e357.pth | PyTorch official model |
- Train on the WikiScenes dataset. See instructions below. Note that the first run always takes longer due to pre-processing; some computations are cached afterwards.
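As a quick sanity check for the pre-trained backbone weights mentioned above, the sketch below loads them into torchvision's ResNet50. This is only a loading test, not part of the training pipeline, and the path simply follows the layout described above.

```python
import torch
from torchvision.models import resnet50

# Instantiate the torchvision ResNet50 and load the downloaded ImageNet weights.
model = resnet50()
state_dict = torch.load("models/weights/resnet50-19c8e357.pth", map_location="cpu")
model.load_state_dict(state_dict)  # raises an error if keys or shapes do not match
print("Backbone loaded; CUDA available:", torch.cuda.is_available())
```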
The directory `launch` contains template bash scripts for training, inference, and evaluation.
Training. For each run, you need to specify the names of two variables, `EXP` and `RUN_ID`.
Running `EXP=wiki RUN_ID=v01 ./launch/run_wikiscenes_resnet50.sh` will create a directory `./logs/wikiscenes_corr/wiki/` with tensorboard events and saved snapshots in `./snapshots/wikiscenes_corr/wiki/v01`.
Inference. If you want to run inference with our pre-trained model, first create a directory and place the model there:

```
mkdir -p ./snapshots/wikiscenes_corr/final/ours
```

Download our validation set and unzip it:

```
unzip val_seg.zip
```

Then run `sh ./launch/infer_val_wikiscenes.sh` to predict masks. You can find the predicted masks in `./logs/masks`.
If you want to evaluate your own models, you will also need to specify:
- `EXP` and `RUN_ID` you used for training;
- `OUTPUT_DIR`, the path where the masks are saved;
- `SNAPSHOT`, which specifies the model suffix in the format `e000Xs0.000`.
Evaluation. To compute the IoU of the masks, run `sh ./launch/eval_seg.sh`.
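For reference, here is a minimal sketch of how IoU between a predicted and a ground-truth mask can be computed. It is an illustration assuming integer label masks of the same shape, not the project's exact evaluation code.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """IoU between two integer label masks, one value per class (NaN if a class is absent from both)."""
    ious = []
    for c in range(num_classes):
        pred_c, gt_c = (pred == c), (gt == c)
        union = np.logical_or(pred_c, gt_c).sum()
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(np.nan if union == 0 else inter / union)
    return ious

# Mean IoU, ignoring classes absent from both masks:
# miou = np.nanmean(per_class_iou(pred, gt, num_classes))
```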
For testing, we provide our pre-trained ResNet50 model:
Backbone | Link |
---|---|
ResNet50 | model_enc_e024Xs-0.800.pth (157M) |
We provide a datasheet for our dataset here.
The images in our dataset are provided by Wikimedia Commons under various free licenses. These licenses permit the use, study, derivation, and redistribution of these images—sometimes with restrictions, e.g. requiring attribution and with copyleft. We provide full license text and attribution for all images, make no modifications to any, and release these images under their original licenses. The associated captions are provided as a part of unstructured text in Wikimedia Commons, with rights to the original writers under the CC BY-SA 3.0 license. We modify these (as specified in our paper) and release such derivatives under the same license. We provide the rest of our dataset under a CC BY-NC-SA 4.0 license.
```
@inproceedings{Wu2021Towers,
    title={{Towers of Babel}: {C}ombining Images, Language, and {3D} Geometry for Learning Multimodal Vision},
    author={Wu, Xiaoshi and Averbuch-Elor, Hadar and Sun, Jin and Snavely, Noah},
    booktitle={ICCV},
    year={2021}
}
```
Our code is based on the implementation of Single-Stage Semantic Segmentation from Image Labels.