IM2CAD

This repository is a work-in-progress attempt to implement the idea in the paper IM2CAD. The main goal of the paper is to reconstruct a 3D scene that is similar to a given photo of a room.

Datasets used in the paper

  • LSUN is needed for the pixel-level labeling task that estimates the room geometry.

  • The ImageNet 2012 dataset is used to detect the objects in the room (a pre-trained model was used in the paper).

  • ShapeNet 3D models are the objects that will appear in the reconstructed scene. (An account may be needed to download the data.)

Main Process to achieve the result

Room geometry estimation

The LSUN indoor dataset can be downloaded from the link above, or you can fork the official GitHub repository lsun and follow the instructions there.

The FCN is modified from the repo FCN.tensorflow. Note: the format of the LSUN indoor dataset differs slightly from the ADEChallengeData2016 dataset used in the original repository, so the data-reading code had to be adapted. A rough conversion sketch is shown below.
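As an illustration of that format difference, here is a minimal, hypothetical sketch that converts LSUN room-layout labels into ADEChallengeData2016-style .png annotation images. The directory names and the "layout" key inside the .mat files are assumptions for illustration only; they are not part of this repo.

# Hypothetical conversion sketch: turn LSUN layout .mat labels into
# ADEChallengeData2016-style .png annotation images for the FCN pipeline.
# The "layout" key and the directory names below are assumptions.
import os
import glob
import numpy as np
import scipy.io
from PIL import Image

def convert_lsun_labels(mat_dir, out_dir, key="layout"):
    os.makedirs(out_dir, exist_ok=True)
    for mat_path in glob.glob(os.path.join(mat_dir, "*.mat")):
        data = scipy.io.loadmat(mat_path)
        label = np.asarray(data[key], dtype=np.uint8)  # per-pixel surface labels
        name = os.path.splitext(os.path.basename(mat_path))[0] + ".png"
        Image.fromarray(label).save(os.path.join(out_dir, name))

if __name__ == "__main__":
    convert_lsun_labels("lsun/layout_mat", "lsun/annotations")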

To train the network, just run the following command:

python FCN.py --mode=train

You can also visualize part of the results by replacing "train" with "visualize".
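
That is:

python FCN.py --mode=visualize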

Object detection

According to the paper, Faster R-CNN is used to detect the objects that occur in the indoor scene.
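
This detection stage is not implemented in the repo yet. As a rough illustration of what the detection step does, here is a minimal sketch using torchvision's pre-trained Faster R-CNN. This is an assumption for illustration only: it is neither this repo's code nor the exact model used in the paper, and the input filename and score threshold are made up.

# Minimal sketch of the detection step with a pre-trained Faster R-CNN from
# torchvision. Not the repo's (unfinished) implementation or the paper's model.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("room.jpg").convert("RGB")  # hypothetical input photo
with torch.no_grad():
    outputs = model([to_tensor(image)])        # one dict per input image

boxes = outputs[0]["boxes"]    # (N, 4) boxes in xyxy format
labels = outputs[0]["labels"]  # (N,) COCO class indices
scores = outputs[0]["scores"]  # (N,) confidence scores
keep = scores > 0.7            # keep only confident detections
print(boxes[keep], labels[keep])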
