Commit

add download links, and use some bold text

shi-jian committed Mar 23, 2017
1 parent 75206e3 commit f1fb984

Showing 3 changed files with 20 additions and 15 deletions.
5 changes: 5 additions & 0 deletions readme.md
@@ -5,3 +5,8 @@ This is the implementation of [Learning Non-Lambertian Object Intrinsics across ShapeNet Categories]
You might be interested in the synthetic dataset we used in the paper. The entire dataset takes more than 1 TB for HDR images, and 240 GB even for compressed .jpg images, so it is hard to share online; we are still working on it;)

However, you can still check the [rendering scripts](render), which can generate the dataset and do even more for your own needs, e.g. depth and normal images. [Training and testing scripts](train) are implemented in [torch](http://torch.ch/).

#### Downloads
* [trained model](http://share.shijian.org.cn/shapenet/intrinsics/model.t7): you can try our trained model (450k iterations).
* [environment maps](http://share.shijian.org.cn/shapenet/intrinsics/envmap.zip): although you can find all envmaps [here](http://www.hdrlabs.com/sibl/archive.html), we have prepared an archive file for you. Please note that the envmaps in the archive are down-sized.
* [mitsuba plugin](http://share.shijian.org.cn/shapenet/render/shapenet.dll): a compiled Windows DLL Mitsuba plugin for loading and rendering ShapeNet .obj models.
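
A quick way to fetch these files from the links above (a minimal sketch; the `envmaps` extraction directory is just an example name):

```bash
# Fetch the trained model and the down-sized environment map archive.
wget http://share.shijian.org.cn/shapenet/intrinsics/model.t7
wget http://share.shijian.org.cn/shapenet/intrinsics/envmap.zip
unzip envmap.zip -d envmaps   # target directory name is arbitrary
```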
12 changes: 6 additions & 6 deletions render/readme.md
@@ -18,22 +18,22 @@ It might be useful to look into albedo/depth configuration file if you want to r

[gen_script.py](gen_script.py) is used to generate the rendering and synthesis scripts. Please set the following environment variables (a shell example follows the list):

- * MITSUBA
+ * **MITSUBA**
points to the Mitsuba renderer executable (e.g. mitsuba.exe on Windows).

- * SHAPENET_ROOT
+ * **SHAPENET_ROOT**
the directory containing the extracted ShapeNet models.

- * ENVMAP_ROOT
+ * **ENVMAP_ROOT**
the directory containing the environment maps, along with a 'list.txt' file; each line of the list file is an environment map filename.

- * RENDER_ROOT
+ * **RENDER_ROOT**
the directory in which rendering scripts and results are placed.
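
As a concrete illustration, a minimal sketch of setting these variables on Linux (all paths below are hypothetical placeholders):

```bash
# Hypothetical paths -- adjust to your own installation.
export MITSUBA=/opt/mitsuba/mitsuba         # Mitsuba renderer executable
export SHAPENET_ROOT=/data/ShapeNetCore     # extracted ShapeNet models
export ENVMAP_ROOT=/data/envmaps            # must contain list.txt
export RENDER_ROOT=/data/render_out         # scripts and results go here
python gen_script.py
```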


ShapeNet recently released an official dataset split. The script automatically downloads the model [list](http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv) from ShapeNet, which contains the models, categories, uuids, and data split. It then generates output directories for the models under RENDER_ROOT, as well as two scripts: render.bat/render.sh and synthesize.bat/synthesize.sh.
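
If you want to inspect that list yourself, a minimal sketch (gen_script.py otherwise handles the download automatically):

```bash
# Manual fetch of the official model list; gen_script.py does this for you.
wget http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv
head -n 3 all.csv   # inspect the model/category/uuid/split fields (column order is an assumption)
```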

- * render.bat: renders albedo/shading/specular/depth as HDR images.
- * synthesize.bat: generates the mask image from depth, converts HDR to LDR for albedo/shading/specular (to save disk space), and composes the final image as I = A*S + R (see the sketch below). [ImageMagick](http://www.imagemagick.org) is required for image synthesis.
+ * **render.bat**: renders albedo/shading/specular/depth as HDR images.
+ * **synthesize.bat**: generates the mask image from depth, converts HDR to LDR for albedo/shading/specular (to save disk space), and composes the final image as I = A*S + R (see the sketch below). [ImageMagick](http://www.imagemagick.org) is required for image synthesis.
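
To make the I = A*S + R composition concrete, a hedged ImageMagick sketch (file names are placeholders; the generated synthesize script may use different options):

```bash
# Compose image = albedo * shading + specular residual, i.e. I = A*S + R.
convert albedo.png shading.png -compose Multiply -composite AS.png
convert AS.png specular.png -compose Plus -composite image.png
```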

Then you can run these scripts from their directory. We strongly recommend rendering on a cluster: rendering a single model under 92 environment maps takes about 45 minutes on an aging i7-2600 PC.
18 changes: 9 additions & 9 deletions train/readme.md
@@ -5,20 +5,20 @@ This directory provides network structure, criterion, training and testing scripts
## Train

Some parameters (an example invocation follows the list):
- * -data_root specifies the root directory of the ShapeNet rendering images.
- * -model_list specifies the .csv file, downloaded from the [ShapeNet](http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv) website, that contains model ids and the dataset split.
- * -env_list specifies the environment map list file.
- * -outdir specifies the output directory for saving snapshots. Default is the current directory.
+ * **-data_root** specifies the root directory of the ShapeNet rendering images.
+ * **-model_list** specifies the .csv file, downloaded from the [ShapeNet](http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv) website, that contains model ids and the dataset split.
+ * **-env_list** specifies the environment map list file.
+ * **-outdir** specifies the output directory for saving snapshots. Default is the current directory.
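
A hypothetical invocation, assuming a Torch entry point named train.lua (the actual script name and all paths are assumptions; check the train directory):

```bash
# Hypothetical entry point and paths -- adjust to the actual script and your data.
th train.lua \
  -data_root /data/render_out \
  -model_list all.csv \
  -env_list list.txt \
  -outdir snapshots
```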

## Test

The testing script is quite simple. It accepts 5 parameters (an example invocation follows the list).

- * -input specifies the input image file.
- * -mask specifies the mask file.
- * -model specifies the trained model file.
- * -outdir specifies the output directory. Default is the current directory.
- * -gpu set to 0 to run on the CPU. Default is to use the GPU.
+ * **-input** specifies the input image file.
+ * **-mask** specifies the mask file.
+ * **-model** specifies the trained model file.
+ * **-outdir** specifies the output directory. Default is the current directory.
+ * **-gpu** set to 0 to run on the CPU. Default is to use the GPU.
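
A hypothetical invocation (the entry point name test.lua and the file names are assumptions; the model file is the one from the Downloads section):

```bash
# Hypothetical entry point and file names.
th test.lua \
  -input input.png \
  -mask mask.png \
  -model model.t7 \
  -outdir results \
  -gpu 0   # run on the CPU
```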


The script outputs 5 images under outdir: albedo.png, shading.png, and specular.png, as well as input.png and mask.png.
