Differentiable Ray Sampling for Neural 3D Representation
This code implements a differentiable renderer for neural 3D representations. The prediction model can be trained using only 2D images, and it performs 3D reconstruction from a single RGB image (as below).
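As a rough illustration of what a differentiable renderer optimizes, here is a minimal volume-rendering sketch: color samples along a camera ray are composited with weights derived from predicted densities, and every step is differentiable, so a loss on the rendered 2D pixel can propagate back to the 3D representation. This is a generic sketch of the technique, not the code in this repository; the function and argument names are hypothetical.

```python
import numpy as np

def render_ray(sigma, rgb, deltas):
    """Composite color samples along one ray (classic volume rendering).

    sigma:  (N,) non-negative densities at N samples along the ray
    rgb:    (N, 3) color at each sample
    deltas: (N,) distances between consecutive samples
    Every operation is differentiable w.r.t. sigma and rgb, which is
    what allows training from 2D images alone.
    """
    alpha = 1.0 - np.exp(-sigma * deltas)        # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10)      # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])  # transmittance before each sample
    weights = alpha * trans                      # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)  # expected color along the ray
```

For example, a ray whose first sample is fully opaque returns that sample's color, while a ray with zero density everywhere renders to black.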
For more details, please refer to slides and blog (Japanese).
Please check out jupyter notebook 1, which shows the qualitative results of the trained models for the car class.
Training and Evaluating
Steps:
- Create a dataset of 10 rendered images per car from ShapeNet V1 using Blender.
(Please refer to the paper and code of Tulsiani+ CVPR 2017.)
- Run

python3 DRS/Main.py 0

to train the networks and save them into DRS/save/.
- Use jupyter notebook 1 to view the qualitative results of 3D reconstruction.
- jupyter notebook 2 computes the mean voxel IoU on the test set.
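For reference, voxel IoU compares a thresholded predicted occupancy grid against the ground-truth grid: intersection volume divided by union volume, averaged over the test set. A minimal sketch follows; the function name, threshold, and array shapes are assumptions, not the notebook's actual code.

```python
import numpy as np

def mean_voxel_iou(pred, gt, threshold=0.5):
    """Mean intersection-over-union between predicted occupancy
    probabilities and binary ground-truth voxel grids.

    pred: (B, D, D, D) float occupancy probabilities in [0, 1]
    gt:   (B, D, D, D) binary occupancy {0, 1}
    """
    p = pred >= threshold                 # binarize the prediction
    g = gt.astype(bool)
    inter = np.logical_and(p, g).sum(axis=(1, 2, 3))
    union = np.logical_or(p, g).sum(axis=(1, 2, 3))
    # guard against empty unions, then average over the batch
    return float(np.mean(inter / np.maximum(union, 1)))
```

A perfect reconstruction scores 1.0; a prediction disjoint from the ground truth scores 0.0.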
MIT License (see the LICENSE file for details).