Supervised loss function #11
Comments
Hi, you might consider the purpose of `self.dsm_sigma` as a factor that scales/standardizes the loss. For the second question, the average is used as a sample-mean estimator of the expected value; please refer to Eq. 41 and Eq. 42 in http://personal.psu.edu/drh20/genetics/lectures/11.pdf. Thanks!
Hey @luost26, ah ok, fair enough. I wasn't exactly sure why the parameter was used, but it makes more sense as a means to scale/standardize the loss. For the second point, great: you use the sample mean as an estimator for the expected value. Thank you for clearing up my doubts, and cheers again for the implementation!
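For readers following along, here is a minimal sketch of the two points above: the division by `dsm_sigma` that standardizes the magnitude of the loss, and the plain `.mean()` that serves as the sample-mean (Monte Carlo) estimate of the expectation. The function and tensor names are illustrative assumptions, not the repository's exact code.

```python
import torch

def supervised_dsm_loss(pred_scores: torch.Tensor,
                        target_scores: torch.Tensor,
                        dsm_sigma: float = 0.01) -> torch.Tensor:
    """Hypothetical sketch of a scaled score-matching loss.

    pred_scores / target_scores: (B, N, 3) predicted and ground-truth
    score vectors. Dividing by dsm_sigma rescales the squared error so
    its magnitude does not depend on the (small) noise level, and
    .mean() is the uniform sample average standing in for the
    expectation in Eq. 3 of the paper.
    """
    sq_err = ((pred_scores - target_scores) ** 2).sum(dim=-1)  # (B, N)
    return (sq_err / dsm_sigma).mean()
```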
In the implementation, the std of the noise added to the point cloud ranges from 0.01 to 0.03. Therefore, the value of `dsm_sigma` was chosen to be commensurate with this noise scale.
Thank you for the further clarification on how the value was chosen. That's very helpful!
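As a side note on the noise range mentioned above, a perturbation of that kind is usually drawn as in the sketch below. This is only an illustration under the stated 0.01 to 0.03 range; the function and argument names are assumptions, not the actual training script.

```python
import torch

def perturb(points: torch.Tensor,
            std_min: float = 0.01,
            std_max: float = 0.03):
    """Add isotropic Gaussian noise whose std is drawn uniformly from
    [std_min, std_max]; a scale constant like dsm_sigma would then be
    picked to be commensurate with this noise level."""
    std = float(torch.empty(1).uniform_(std_min, std_max))
    return points + torch.randn_like(points) * std, std
```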
Hi @luost26,
Thank you for sharing your implementation! I just have a question about the supervised (and self-supervised) loss function. At line 77 of https://github.com/luost26/score-denoise/blob/main/models/denoise.py, what is the purpose of `self.dsm_sigma`? I was not able to find it in the paper.
Furthermore, in equation 3 of the main paper, you take the expectation with respect to the distribution N(x_i), but in the code this is a straightforward average, so is this a uniform distribution? (A short sketch of this sample-mean reading follows this comment.)
Thank you!
D.
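On the second question, the comments above amount to a standard Monte Carlo reading of the expectation. Writing f(x) for the per-point term inside the expectation in Eq. 3 (a notational placeholder, not the paper's symbol), and x_1, ..., x_k for the points associated with N(x_i), the straightforward average in the code is the usual sample-mean estimator:

```math
\mathbb{E}_{x \sim N(x_i)}\left[ f(x) \right] \;\approx\; \frac{1}{k} \sum_{j=1}^{k} f(x_j)
```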