This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

fix divergence_exact's diagonal sum #3

Merged
merged 1 commit into from Jan 18, 2021

Conversation

ventusff
Contributor

In run_nerf_helpers.py, line 71 (divergence_loss): the original code that extracts the diagonal of the Jacobian matrix is wrong.

Since divergence_approx is used by default, I'm guessing this error does not affect the default training behavior. But it does change the result when training with exact=True.

example:

jac[0, ...] =
        [[  1.5425,   8.1350,   5.7477],
         [ -1.1500, -14.2937,  -9.5665],
         [ -0.5321,  11.8239,   5.3827]]

jac.view(jac.shape[0], -1)[0, ...] =
        [ 1.5425, 8.1350, 5.7477, -1.1500, -14.2937, -9.5665, -0.5321, 11.8239, 5.3827]

# wrong: a stride of jac.shape[1] picks out the first column
jac.view(jac.shape[0], -1)[0, :: jac.shape[1]] =
        [ 1.5425, -1.1500, -0.5321]

# correct: a stride of jac.shape[1] + 1 picks out the diagonal
jac.view(jac.shape[0], -1)[0, :: (jac.shape[1] + 1)] =
        [ 1.5425, -14.2937,  5.3827]
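
The stride argument can be checked in a minimal, self-contained sketch. This is a NumPy analogue (np.reshape mirrors torch's .view, and the slicing semantics are identical); the example values are hypothetical position markers, not taken from the actual model:

```python
import numpy as np

# A batch of two 3x3 Jacobians; entries 0..17 so each position is identifiable.
jac = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
B, N, _ = jac.shape

flat = jac.reshape(B, -1)  # mirrors jac.view(jac.shape[0], -1)

wrong = flat[:, ::N]        # stride N hits flat indices 0, 3, 6 -> first COLUMN
right = flat[:, ::(N + 1)]  # stride N+1 hits flat indices 0, 4, 8 -> diagonal

# The divergence is the trace, i.e. the sum of the diagonal entries.
div = right.sum(-1)

print(wrong[0])  # [0. 3. 6.]  -> column 0 of jac[0]
print(right[0])  # [0. 4. 8.]  -> diagonal of jac[0]
print(div)       # [12. 39.]
```

An equivalent and arguably clearer way to get the batched diagonal without the reshape-and-stride trick is np.diagonal(jac, axis1=1, axis2=2) (torch.diagonal(jac, dim1=1, dim2=2) in PyTorch).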

@facebook-github-bot added the "CLA Signed" label (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed) on Jan 18, 2021
@edgar-tr
Contributor

Thank you!

@edgar-tr edgar-tr merged commit 3e6aef8 into facebookresearch:master Jan 18, 2021