This is a PyTorch implementation of the paper "3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction" by Choy et al. Given one or more views of an object, the network generates a voxelized 3D reconstruction of the object (a voxel is the 3D equivalent of a pixel).
See chrischoy/3D-R2N2 for the original authors' Theano implementation, as well as an overview of the method.
For now, only the non-residual LSTM-based architecture with neighboring recurrent unit connections is implemented. It is called 3D-LSTM-3 in the paper.
A pre-trained model based on this architecture can be downloaded from here. It obtains the following result on the ShapeNet rendered images test dataset:
| IoU | Loss |
|---|---|
| 0.591 | 0.093 |
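The IoU metric above is the voxel intersection-over-union between the predicted occupancy grid and the ground truth. A minimal NumPy sketch of how such a score can be computed (the 0.5 binarization threshold here is an illustrative choice, not taken from this repository):

```python
import numpy as np

def voxel_iou(pred, target, threshold=0.5):
    """IoU between a predicted occupancy grid (probabilities in [0, 1])
    and a binary ground-truth grid. The threshold is illustrative."""
    p = pred > threshold
    t = target.astype(bool)
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return intersection / union

# Toy 32^3 grids (ShapeNetVox32 grids are 32x32x32)
pred = np.zeros((32, 32, 32)); pred[:16] = 0.9  # first half predicted occupied
gt = np.zeros((32, 32, 32)); gt[:8] = 1         # first quarter actually occupied
print(voxel_iou(pred, gt))  # intersection 8 slices / union 16 slices -> 0.5
```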
The code was tested with Python 3.6.
- Download the repository:

```shell
git clone https://github.com/alexgo1/pytorch-3d-r2n2.git
```

- Install the requirements:

```shell
pip install -r requirements.txt
```

- Download and extract the ShapeNet rendered images dataset:

```shell
mkdir ShapeNet/
wget http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
wget http://cvgl.stanford.edu/data2/ShapeNetVox32.tgz
tar -xzf ShapeNetRendering.tgz -C ShapeNet/
tar -xzf ShapeNetVox32.tgz -C ShapeNet/
```
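After extraction, `ShapeNet/` should contain the two dataset roots named after the tarballs above. A small sanity-check sketch (the directory names come from the tarball names; anything beyond that is not assumed):

```python
from pathlib import Path

def check_shapenet_layout(root="ShapeNet"):
    """Return the two dataset directories expected after extracting
    the tarballs, raising if either is missing."""
    root = Path(root)
    rendering = root / "ShapeNetRendering"  # rendered RGB views
    voxels = root / "ShapeNetVox32"         # 32^3 ground-truth voxel grids
    for d in (rendering, voxels):
        if not d.is_dir():
            raise FileNotFoundError(f"expected dataset directory: {d}")
    return rendering, voxels
```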
- Rename the `config.ini.example` config template file to e.g. `your_config.ini`, and change parameters in it as required.
- Run `python train.py --cfg=your_config.ini`, or simply `python train.py` if you named your config file `config.ini`.
- Run `python test.py --cfg=your_config.ini`, or simply `python test.py` if your config file is named `config.ini`. This can be the same config file used for training the model. Note that when testing, you probably want to set `resume_epoch` to the number of epochs that your model was trained for.
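If you script around testing, `resume_epoch` can be read from the INI config with the standard library. A sketch, assuming the key lives in a `[train]` section (a placeholder name — match whatever section `config.ini.example` actually uses):

```python
import configparser

def get_resume_epoch(path="your_config.ini", section="train"):
    """Read resume_epoch from an INI config file.
    The section name 'train' is a placeholder assumption."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg.getint(section, "resume_epoch")
```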
MIT License