Transparent Shape Dataset Creation

This repository contains the code used to create the transparent shape dataset from the paper Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes, CVPR 2020. Please check our webpage for more information, and please consider citing our paper if you find this code useful. Part of the code is inherited from two prior projects:

  • Li, Z., Xu, Z., Ramamoorthi, R., Sunkavalli, K., & Chandraker, M. (2018). Learning to reconstruct shape and spatially-varying reflectance from a single image. ACM Transactions on Graphics (TOG), 37(6), 1-11.
  • Xu, Z., Sunkavalli, K., Hadap, S., & Ramamoorthi, R. (2018). Deep image-based relighting from optimal sparse samples. ACM Transactions on Graphics (TOG), 37(4), 1-13.

Overview

We create the transparent shape dataset by procedurally combining shape primitives to build complex scenes. An overview of our dataset creation pipeline is shown below. Please refer to our paper for more details.

Prerequisites

In order to run the code, you will need:

  • Laval Indoor scene dataset: Please download the dataset from this link. We use 1,499 environment maps for training and 645 environment maps for testing. Please convert the .exr files into .hdr files, since our renderer does not yet support loading .exr files (a conversion sketch is given after this list). Please save the training set and testing set in Envmap/train and Envmap/test, respectively.
  • Optix renderer: Please download our Optix-based renderer from this link. An Optix renderer is also included in this repository, but it has been modified specifically to render the two-bounce normals; please use the renderer from the link to render images. To avoid confusion, we refer to the renderer in this repository as renderer_twobounce and the renderer from the link as renderer_general in the following.
  • Colmap: Please install Colmap from this link. We use Colmap to reconstruct meshes from point clouds.
  • Meshlab: Please install Meshlab from this link. We use the subdivision algorithm in Meshlab to smooth the reconstructed surface so that there are no artifacts when rendering transparent shapes. This is important because the BRDF is a delta function.
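The conversion from .exr to .hdr mentioned above can be done with any HDR-aware image library. Below is a minimal sketch using OpenCV, assuming a build with OpenEXR support; the Envmap/train path is only an example and should be repeated for Envmap/test.

```python
# Sketch: convert Laval .exr environment maps to Radiance .hdr for the renderer.
# Assumes an OpenCV build with OpenEXR support; paths are examples only.
import os
os.environ['OPENCV_IO_ENABLE_OPENEXR'] = '1'   # some OpenCV builds disable EXR reading by default
import glob
import cv2

for exrName in glob.glob('Envmap/train/*.exr'):
    im = cv2.imread(exrName, cv2.IMREAD_UNCHANGED)   # float32 HDR pixels
    if im is None:
        print('Failed to read %s' % exrName)
        continue
    hdrName = exrName.replace('.exr', '.hdr')
    cv2.imwrite(hdrName, im)                         # output format is picked from the .hdr extension
    print('Wrote %s' % hdrName)
```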

Instructions

We first go through the process of creating the training set for 10-view reconstruction. Instructions for creating the testing set and the 5-view and 20-view datasets are given afterwards.

  1. Compile renderer_twobounce in the OptixRenderer directory of this repository, which renders the two-bounce normals and depths. The steps are exactly the same as for compiling renderer_general.
  2. python createShape.py --mode train --rs 0 --re 3000
  • Create 3000 randomly generated scenes as the training set. The data will be stored under the directory ./Shapes
  3. python createRenderFilesForDepths.py --mode train --rs 0 --re 3000
  • Create the camera poses and the xml files for rendering depth maps. For each shape, it uniformly samples 75 poses surrounding the shape (see the pose-sampling sketch after this list).
  4. python renderAndIntegrate.py --mode train --rs 0 --re 3000 --renderProgram ABSOLUTE_PATH_TO_renderer_general
  • For each shape, we render 75 depth maps from different views and fuse them into a single mesh (a back-projection sketch follows this list). After that, we use subdivision to smooth the generated surface. The purpose of this step is to remove the inner surface and keep only the outer surface.
  5. python createCamera10.py --camNum 10 --mode train --rs 0 --re 3000
  • Create the camera poses for the 10-view reconstruction.
  6. python createRenderFilesRegularized.py --mode train --rs 0 --re 3000 --envRoot ABSOLUTE_PATH_TO_Envmap_DIRECTORY
  • Create render files for rendering images.
  7. python renderImage.py --camNum 10 --mode train --rs 0 --re 3000 --renderProgram ABSOLUTE_PATH_TO_renderer_general
  • Render images for the training set. We render 10 images for each shape.
  8. python renderTwoBounce.py --camNum 10 --mode train --rs 0 --re 3000
  • Render the ground-truth two-bounce normals and depths.
  9. python createVisualHull.py --camNum 10 --mode train --rs 0 --re 3000
  • Create the 10-view visual hull for each shape. The masks used for visual hull generation are produced by renderTwoBounce.py (see the space-carving sketch after this list).
  10. python createRenderFileForVH.py --camNum 10 --mode train --rs 0 --re 3000
  • Create the xml files for rendering the visual hull.
  11. python renderTwoBounceVisualHull.py --camNum 10 --mode train --rs 0 --re 3000
  • Render the ground-truth two-bounce normals and depths for the visual hull geometry.
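Step 3 mentions uniformly sampling 75 camera poses around each shape. The actual sampling lives in createRenderFilesForDepths.py; the sketch below only illustrates one common way to do it (a Fibonacci sphere plus a simple look-at frame). The function name, radius, and up vector are assumptions for illustration.

```python
# Illustrative uniform view sampling around a shape (not the repository's exact code).
# Cameras sit on a sphere of an assumed radius and look at the origin.
import numpy as np

def sample_camera_poses(numViews=75, radius=2.0):
    poses = []
    golden = np.pi * (3.0 - np.sqrt(5.0))              # golden angle gives even coverage
    for n in range(numViews):
        z = 1.0 - 2.0 * (n + 0.5) / numViews           # height in [-1, 1]
        r = np.sqrt(max(0.0, 1.0 - z * z))
        theta = golden * n
        origin = radius * np.array([r * np.cos(theta), r * np.sin(theta), z])

        # Build a look-at frame pointing at the shape center.
        target = np.zeros(3)
        up = np.array([0.0, 0.0, 1.0])
        forward = target - origin
        forward /= np.linalg.norm(forward)
        right = np.cross(forward, up)
        if np.linalg.norm(right) < 1e-6:               # viewing straight along the up axis
            right = np.array([1.0, 0.0, 0.0])
        right /= np.linalg.norm(right)
        newUp = np.cross(right, forward)
        poses.append({'origin': origin, 'target': target, 'up': newUp})
    return poses

print(len(sample_camera_poses()))                      # 75 poses
```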
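Step 4 fuses the 75 rendered depth maps into a point cloud before Colmap meshing; renderAndIntegrate.py defines its own conventions. As a rough illustration only, back-projecting one depth map into world-space points could look like the following, where the pinhole field of view and the camera-to-world pose convention are assumptions.

```python
# Illustrative back-projection of a depth map to world-space points
# (assumed pinhole intrinsics and camera-to-world pose; not the repository's exact code).
import numpy as np

def depth_to_points(depth, fov=60.0, cam2world=np.eye(4)):
    h, w = depth.shape
    f = 0.5 * w / np.tan(0.5 * np.deg2rad(fov))        # focal length in pixels
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - 0.5 * w) / f
    y = -(ys - 0.5 * h) / f
    dirs = np.stack([x, y, -np.ones_like(x)], axis=-1)  # camera-space rays, looking down -z

    valid = depth > 0                                    # skip background pixels
    pts_cam = dirs[valid] * depth[valid][:, None]
    pts_hom = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (cam2world @ pts_hom.T).T[:, :3]

# Fusing all views then amounts to concatenating the per-view point clouds
# before handing them to Colmap for meshing.
```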
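Step 9 builds a visual hull from the masks produced in step 8. createVisualHull.py implements the actual construction; the sketch below shows only the basic space-carving idea on a voxel grid. The project callback, grid resolution, and bounding box are assumptions.

```python
# Illustrative space carving: keep only voxels whose projection falls inside the
# silhouette mask in every view (not the repository's exact implementation).
import numpy as np

def carve_visual_hull(masks, project, res=64, bound=1.2):
    """masks: list of (H, W) boolean silhouettes, one per view.
    project: caller-supplied function mapping (N, 3) world points to (N, 2)
             pixel coordinates for a given view index (an assumed interface)."""
    ticks = np.linspace(-bound, bound, res)
    xs, ys, zs = np.meshgrid(ticks, ticks, ticks, indexing='ij')
    voxels = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    inside = np.ones(voxels.shape[0], dtype=bool)

    for viewId, mask in enumerate(masks):
        h, w = mask.shape
        uv = np.round(project(voxels, viewId)).astype(int)
        visible = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(voxels.shape[0], dtype=bool)
        hit[visible] = mask[uv[visible, 1], uv[visible, 0]]
        inside &= hit                                    # carve away voxels outside any silhouette

    return voxels[inside]                                # point samples of the visual hull
```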

Creating the testing set

To create the testing set, set --mode to test and --re to 600, then rerun steps 2 to 11.

Creating the 5-view and 20-view datasets

To create the datasets for 5-view and 20-view reconstruction, run createCamera5.py or createCamera20.py instead at step 5, then rerun steps 6 to 11 with --camNum set to 5 or 20, respectively.
