neural-motifs

Like this work, or scene understanding in general? You might be interested in checking out my brand new dataset VCR: Visual Commonsense Reasoning, at visualcommonsense.com!

This repository contains data and code for the paper Neural Motifs: Scene Graph Parsing with Global Context (CVPR 2018). For the project page (as well as links to the baseline checkpoints), check out rowanzellers.com/neuralmotifs. If the paper significantly inspires you, we request that you cite our work:

BibTeX

@inproceedings{zellers2018scenegraphs,
  title={Neural Motifs: Scene Graph Parsing with Global Context},
  author={Zellers, Rowan and Yatskar, Mark and Thomson, Sam and Choi, Yejin},
  booktitle={Conference on Computer Vision and Pattern Recognition},
  year={2018}
}

Setup

  1. Install Python 3.6 and PyTorch 0.3. I recommend the Anaconda distribution. To install PyTorch if you haven't already, use conda install pytorch=0.3.0 torchvision=0.2.0 cuda90 -c pytorch. (A quick environment sanity check is sketched after this list.)

  2. Update the config file with the dataset paths. Specifically:

    • Visual Genome (the VG_100K folder, image_data.json, VG-SGG.h5, and VG-SGG-dicts.json). See data/stanford_filtered/README.md for the steps I used to download these. A quick path check is also sketched after this list.
    • You'll also need to set your PYTHONPATH to the repository root, e.g. export PYTHONPATH=/home/rowan/code/scene-graph
  3. Compile everything. Run make in the main directory: this compiles the Bilinear Interpolation operation for the RoIs as well as the Highway LSTM.

  4. Pretrain VG detection. The old version involved pretraining on COCO as well, but we got rid of that for simplicity. Run ./scripts/pretrain_detector.sh. Note: you might have to modify the learning rate and batch size, particularly if you don't have 3 Titan X GPUs (which is what I used). You can also download the pretrained detector checkpoint here.

  5. Train VG scene graph classification: run ./scripts/train_models_sgcls.sh 2 (will run on GPU 2). Alternatively, download the Motifnet-SGCls/PredCls checkpoint here.

  6. Refine for detection: run ./scripts/refine_for_detection.sh 2 or download the Motifnet-SGDet checkpoint.

  7. Evaluate: Refer to the scripts ./scripts/eval_models_sg[cls/det].sh.
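Before training, it can help to confirm that the environment matches what the code expects (step 1) and that the repo is importable (step 3 of the config setup). The snippet below is a minimal sketch, not part of the repository; it only checks the PyTorch version, CUDA availability, and whether a module named config (assumed here to be the repo's config file) can be imported once PYTHONPATH is set.

```python
# sanity_check_env.py -- illustrative only, not part of this repository.
# Checks Python 3.6, PyTorch 0.3.x with CUDA, and that the repo root is on PYTHONPATH.
import os
import sys

import torch

assert sys.version_info[:2] == (3, 6), "This code was written for Python 3.6"
assert torch.__version__.startswith("0.3"), \
    "Expected PyTorch 0.3.x, got {}".format(torch.__version__)
assert torch.cuda.is_available(), "CUDA is required for training"
print("Found {} GPU(s)".format(torch.cuda.device_count()))

# The repo root (the placeholder /home/rowan/code/scene-graph above) must be
# on PYTHONPATH so the project's modules are importable.
print("PYTHONPATH = {}".format(os.environ.get("PYTHONPATH", "")))
try:
    import config  # assumed name of the repo's config module
    print("config module found -- PYTHONPATH looks correct")
except ImportError:
    print("Could not import config; is the repo root on PYTHONPATH?")
```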
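In the same spirit, a quick way to catch mistakes in the dataset paths from step 2 is to check that each Visual Genome file actually exists before training. The variable names below are illustrative placeholders, not the constants defined in the repo's config file; point them at your own copies of the files listed above.

```python
# check_vg_paths.py -- illustrative sketch; the variable names are placeholders,
# so adapt them to whatever your config file actually uses.
import os

# These mirror the files listed in step 2 above; fill in your own paths.
VG_IMAGES = "/path/to/VG_100K"                 # folder of Visual Genome images
IM_DATA_FN = "/path/to/image_data.json"        # per-image metadata
VG_SGG_FN = "/path/to/VG-SGG.h5"               # scene graph annotations
VG_SGG_DICT_FN = "/path/to/VG-SGG-dicts.json"  # object/predicate dictionaries

for name, path in [("VG_100K", VG_IMAGES),
                   ("image_data.json", IM_DATA_FN),
                   ("VG-SGG.h5", VG_SGG_FN),
                   ("VG-SGG-dicts.json", VG_SGG_DICT_FN)]:
    status = "ok" if os.path.exists(path) else "MISSING"
    print("{:20s} {:8s} {}".format(name, status, path))
```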

Help

Feel free to open an issue if you encounter trouble getting it to work!