
Hello, I would like to inquire about the synthetic datasets. #15

Closed
zuiqingtian opened this issue Dec 2, 2020 · 5 comments

Comments

@zuiqingtian

Hello, I want to ask: is it possible to reproduce the rendering scenes described in the paper exactly?

I noticed that, for the objects in the paper's synthetic datasets, each view randomly selects 2 of the 100 BRDFs, and the final images are rendered under 64 illumination directions, which are also randomly sampled.

So does this mean that the original rendering scenes of these two synthetic datasets are no longer reproducible?
Thank you for your reply!

@zuiqingtian
Author

Sorry, one more thing: the MERL dataset at https://www.merl.com/brdf/ is no longer available for download. Could you provide a download link for the MERL dataset? Thank you very much for your reply!

@guanyingc
Owner

Hi,

Although I set the random seed to 0 during rendering, it might not be possible to render exactly the same dataset.
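
For concreteness, the per-view sampling is roughly as follows (a simplified Python sketch for illustration only, not the actual rendering script; the hemisphere sampling scheme here is an assumption):

```python
# Simplified sketch of the per-view sampling (illustration only, not the real script).
import numpy as np

rng = np.random.RandomState(0)  # seed 0, but this alone does not guarantee identical renders

NUM_BRDFS = 100    # size of the measured BRDF pool
NUM_LIGHTS = 64    # illumination directions per view

def sample_view():
    # randomly pick 2 of the 100 BRDFs for this view
    brdf_ids = rng.choice(NUM_BRDFS, size=2, replace=False)

    # randomly sample 64 light directions on the upper hemisphere
    # (the exact distribution used for the dataset may differ)
    phi = rng.uniform(0.0, 2.0 * np.pi, NUM_LIGHTS)
    cos_theta = rng.uniform(0.0, 1.0, NUM_LIGHTS)
    sin_theta = np.sqrt(1.0 - cos_theta ** 2)
    light_dirs = np.stack([sin_theta * np.cos(phi),
                           sin_theta * np.sin(phi),
                           cos_theta], axis=1)
    return brdf_ids, light_dirs
```

Even with the seed fixed, differences in renderer versions, plugins, and hardware can change the rendered pixels, which is why an exact reproduction is not guaranteed.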

Please see https://drive.google.com/file/d/19fUBvRHWklg_vHLT8wuhL36ZmtYc7jOx/view?usp=sharing for a copy of the MERL dataset.

A sample .xml file is shown below:

```xml

<integrator type="multichannel">
    <integrator type="field">
        <string name="field" value="shNormal" />
        <spectrum name="undefined" value="0" />
        <!-- Occlusion -->
    </integrator>
</integrator>

<shape type="sphere" id="obj1">
    <point name="center" x="$tx" y="$ty" z="$tz" />
    <float name="radius" value="$scale" />
</shape>

<sensor type="orthographic">
    <float name="farClip" value="55.864" />
    <float name="nearClip" value="0.10864" />
    <transform name="toWorld">
        <lookat target="0, 0, -14" origin="0, 0, -15" up="0, 1, 0" />
    </transform>

    <sampler type="ldsampler">
        <integer name="sampleCount" value="64" />
    </sampler>

    <film type="hdrfilm">
        <integer name="height" value="$h" />
        <integer name="width" value="$w" />

        <string name="pixelFormat" value="rgb" />
        <string name="channelNames" value="normal" />
        <rfilter type="gaussian" />
        <boolean name="banner" value="false" />
    </film>
</sensor>
```
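
If it helps, the `$`-placeholders (`tx`, `ty`, `tz`, `scale`, `w`, `h`) can be supplied on the Mitsuba command line via `-D`. A rough sketch of one render call (the scene/output file names and values below are just placeholders):

```python
# Rough sketch of invoking the Mitsuba CLI with the $-placeholders filled in.
# File names and values are placeholders, not the actual ones used for the dataset.
import subprocess

defines = {"tx": 0.0, "ty": 0.0, "tz": 0.0, "scale": 1.0, "w": 128, "h": 128}

cmd = ["mitsuba", "scene.xml", "-o", "normal.exr"]
for key, value in defines.items():
    cmd += ["-D", f"{key}={value}"]   # -D substitutes $key in the scene file

subprocess.run(cmd, check=True)
```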

@zuiqingtian
Author

Thank you very much for your reply; your guidance is very helpful to me.

So if I combine this sample XML file with the Blobby dataset, the MERL dataset, the Sculpture dataset, and the Mitsuba renderer, can I render the synthetic datasets described in the paper?

I noticed that the amount of data is very large; downloading and unzipping it took me half a day, haha, so rendering it manually may be troublesome.

So, sorry to ask, but is there a faster way to obtain this synthetic dataset? It does not need to be as large as the one described in the paper; a relatively small dataset would be enough.

Thank you very much again for your reply!

@guanyingc
Owner

Oh, you might be using a remote network disk; on my local hard disk, it takes around one hour to unzip the data.
I think you can just use a subset of the dataset (for example, see the sketch below); it would not be easy to render the whole dataset by yourself in a short time.
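
A rough sketch of keeping only a subset of objects (the directory layout here is just a guess; please adjust it to the actual structure of the dataset):

```python
# Rough sketch: copy a random subset of object folders out of the unzipped dataset.
# The folder names below are hypothetical; adjust them to the actual layout.
import random
import shutil
from pathlib import Path

src = Path("PS_Blobby_Dataset")   # unzipped dataset root (hypothetical name)
dst = Path("PS_Blobby_Subset")
dst.mkdir(exist_ok=True)

object_dirs = sorted(p for p in src.iterdir() if p.is_dir())
random.seed(0)
subset = random.sample(object_dirs, k=min(100, len(object_dirs)))  # e.g. keep 100 objects

for obj_dir in subset:
    shutil.copytree(obj_dir, dst / obj_dir.name)
```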

@zuiqingtian
Author

Thank you for your reply; your suggestion is very helpful to me!
