This is the implementation of *Learning Non-Lambertian Object Intrinsics across ShapeNet Categories*.
You might be interested in the synthetic dataset used in the paper. The full dataset takes more than 1 TB for HDR images, and 240 GB even as compressed .jpg images, so it is hard to host online; we are still working on it ;)
However, you can still check the rendering scripts, which can generate the dataset and produce more for your own needs, e.g. depth and normal images. The training and testing scripts are implemented in Torch.
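The paper models a non-Lambertian image as albedo times shading plus a specular residual. Below is a minimal numpy sketch of that composition, assuming only the model equation from the paper; all array names and values are illustrative stand-ins, not data from this repository.

```python
import numpy as np

# Non-Lambertian intrinsic image model: I = A * S + R (elementwise),
# where A is albedo, S is (grayscale) shading, R is the specular term.
# The random arrays below are illustrative stand-ins for real renderings.
rng = np.random.default_rng(0)
h, w = 4, 4
albedo = rng.uniform(0.0, 1.0, size=(h, w, 3))
shading = rng.uniform(0.0, 1.0, size=(h, w, 1))   # broadcast over RGB
specular = rng.uniform(0.0, 0.2, size=(h, w, 1))  # additive highlight term

image = albedo * shading + specular  # composite observation, shape (h, w, 3)

# A perfect decomposition reconstructs the observed image exactly.
assert np.allclose(image, albedo * shading + specular)
```

A network trained on such synthetic triplets predicts the albedo, shading, and specular components from the composite image alone.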
- trained model: you can try our trained model (450k iterations).
- environment maps: although you can find all envmaps here, we have prepared an archive file for you. Please note that the envmaps in the archive are down-sized.
- mitsuba plugin: a compiled Windows DLL Mitsuba plugin for loading and rendering ShapeNet .obj models.