Code for ECCV 2018 paper - Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image


Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image

Created by Siyuan Huang, Siyuan Qi, Yixin Zhu, Yinxue Xiao, Yuanlu Xu, and Song-Chun Zhu from UCLA


Introduction

This repository contains the code for our ECCV 2018 paper (https://arxiv.org/abs/1808.02201).

In this work, we propose a computational framework to jointly parse a single RGB image and reconstruct a holistic 3D configuration composed of a set of CAD models, using a stochastic grammar model. We solve this joint parsing and reconstruction problem in an analysis-by-synthesis fashion, seeking to minimize the differences between the input image and images rendered from our 3D representation, measured over depth, surface normal, and object segmentation maps. The optimal configuration, represented by a parse graph, is inferred via Markov chain Monte Carlo (MCMC).
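As a rough illustration of the MCMC inference loop described above, here is a generic Metropolis-Hastings sketch. The `init_pg`, `propose`, and `energy` arguments are hypothetical stand-ins for the parse graph, the proposal moves, and the rendering-based energy; they are not the repository's actual API.

```python
import math
import random

def metropolis_hastings(init_pg, propose, energy, n_iter=1000, temperature=1.0):
    """Generic Metropolis-Hastings minimization of an energy function.

    `init_pg`, `propose`, and `energy` are placeholders for the parse
    graph, proposal moves, and rendering-based energy used in the paper.
    """
    pg, e = init_pg, energy(init_pg)
    best_pg, best_e = pg, e
    for _ in range(n_iter):
        cand = propose(pg)      # propose a perturbed configuration
        e_cand = energy(cand)   # e.g., rendered image vs. input image
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-dE / temperature).
        if e_cand < e or random.random() < math.exp((e - e_cand) / temperature):
            pg, e = cand, e_cand
            if e < best_e:
                best_pg, best_e = pg, e
    return best_pg, best_e
```

On a toy 1-D energy such as `energy = lambda x: (x - 3.0) ** 2` with Gaussian proposals, the loop drifts toward the minimum at x = 3; in the paper the state is a full parse graph and the energy compares rendered depth, normal, and segmentation maps against the input.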

Please refer to our paper or project page for more details.

Citation

If you find our work inspiring or our code helpful in your research, please consider citing:

@inproceedings{huang2018holistic,
  title={Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image},
  author={Huang, Siyuan and Qi, Siyuan and Zhu, Yixin and Xiao, Yinxue and Xu, Yuanlu and Zhu, Song-Chun},
  booktitle={ECCV},
  year={2018}
}

Install

  1. pip install -r requirements.txt

  2. Install osmesa (for offscreen rendering) and set up the environment following the instructions in osmesa/osmesa.md

Data

  1. Download the initialized parse graphs of 1,000 images and the learned prior knowledge from here, then extract with

     tar -vzxf data.tar.gz       
    
  • Note that we simplified the original CAD models from ShapeNetSem, since the surface normals of some original models are flipped; this may hurt performance slightly.

Inference

  1. Jointly infer the 3D room layout and 3D objects for a given image ID, for example, 11.

     python inference.py -lo 11
    
  • Inference takes about one hour per image; we are working on accelerating the rendering process.
  2. Infer the latent human context, then jointly infer the layout, objects, and humans. Before this step, you need to infer the 3D room layout and 3D objects first, or download our generated results.

     python inference.py -human 241
    
  • Note that we only infer the latent human context on the office subset, due to limited data on 3D human-object interaction.
  3. Sample possible human skeletons for a scene.

     python sample_human.py 241
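
Since each layout/object inference run takes about an hour, it can be convenient to queue several images in sequence. A small driver along the following lines could shell out to inference.py; the image IDs below are only illustrative.

```python
import subprocess

def inference_cmd(image_id, mode="-lo"):
    """Build the command line for one inference.py run (modes: -lo, -human)."""
    return ["python", "inference.py", mode, str(image_id)]

def run_batch(image_ids, mode="-lo"):
    """Run inference sequentially for each image ID, stopping on failure."""
    for image_id in image_ids:
        subprocess.run(inference_cmd(image_id, mode), check=True)

# Example (IDs are illustrative): run_batch([11, 241])
```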
    

License

Our code is released under the MIT license.
