
Supervised loss function #11

Closed · ddsediri opened this issue Jul 15, 2022 · 4 comments
ddsediri commented Jul 15, 2022

Hi @luost26,

Thank you for sharing your implementation! I have a question about the supervised (and self-supervised) loss function. At line 77 of https://github.com/luost26/score-denoise/blob/main/models/denoise.py, what is the purpose of self.dsm_sigma? I was not able to find it in the paper.

Furthermore, in Equation 3 of the main paper you take the expectation with respect to the distribution N(x_i), but in the code this is a straightforward average, so is this a uniform distribution?
[Attached screenshot: score-based-denoising-question]

Thank you!
D.

luost26 (Owner) commented Jul 17, 2022

Hi,

You can think of self.dsm_sigma as a factor that scales the loss function to improve training.

Please refer to Eq. 41 and Eq. 42 in http://personal.psu.edu/drh20/genetics/lectures/11.pdf; they explain why we can take a straightforward average (the sample mean is a consistent estimator of the expectation).
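
For illustration, here is a rough sketch of what such a loss could look like. This is not the exact code in denoise.py; the tensor shapes, the MSE form, and the placement of dsm_sigma are assumptions.

```python
import torch

def supervised_score_loss(pred_score, target_score, dsm_sigma=0.01):
    """Rough sketch of a scaled, supervised score-matching loss.

    pred_score, target_score: (B, N, 3) predicted / ground-truth score
    vectors for N sampled points (shapes are an assumption, not the
    repo's actual interface).
    """
    # Squared L2 error per sampled point, shape (B, N).
    per_point = ((pred_score - target_score) ** 2).sum(dim=-1)
    # Plain average over the sampled points: the sample-mean (Monte Carlo)
    # estimate of the expectation in Eq. 3 of the paper.
    avg = per_point.mean()
    # Rescale by dsm_sigma so the loss is not vanishingly small when the
    # noise std is on the order of 1e-2 (the exact scaling convention in
    # denoise.py may differ).
    return avg / dsm_sigma

# Tiny usage example with random tensors.
pred = torch.randn(4, 1024, 3)
target = torch.randn(4, 1024, 3)
loss = supervised_score_loss(pred, target)
```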

Thanks!

ddsediri (Author) commented

Hey @luost26,

Ah, OK, fair enough. I wasn't exactly sure why the parameter was used, but it makes more sense as a means to scale/standardize the loss.

For the second point: great, you use the sample mean as an estimator of the expected value.

Thank you for clearing my doubts and cheers again for the implementation!
D.

luost26 (Owner) commented Jul 17, 2022

dsm_sigma is chosen according to the standard deviation of the noise used during training.

In the implementation, the standard deviation of the noise added to the point cloud ranges from 0.01 to 0.03. Therefore, dsm_sigma is set to 0.01 so that the loss and its gradient are not too small.
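
As a back-of-the-envelope illustration (my own, assuming the raw squared-error terms are on the order of the noise variance sigma**2), the rescaling keeps the loss from sitting around 1e-4:

```python
# Rough illustration only: assume the raw squared-error loss is on the
# order of the noise variance sigma**2.
for sigma in (0.01, 0.02, 0.03):
    raw = sigma ** 2          # ~1e-4 to ~1e-3: tiny loss, tiny gradients
    scaled = raw / 0.01       # rescaled with dsm_sigma = 0.01
    print(f"sigma={sigma:.2f}: raw ~ {raw:.0e}, scaled ~ {scaled:.0e}")
```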

ddsediri (Author) commented

Thank you for the further clarification on how the value was chosen. That's very helpful!
