Hi. My colleague (@czw0078) and I have been using your neural renderer, and we've noticed some pretty strange behavior during optimization. A minimal working example demonstrating the odd behavior can be found below.
The example code produces the following output:

```
Current z_delta: 4.0
Loss: 21549.5859375
z_delta derivative after .backward(): -68343.7109375
Current z_delta: 4.0
Loss: -21549.5859375
z_delta derivative after .backward(): -13344.5039062
```
when using the image below as a starting point:

![my_teapot](https://user-images.githubusercontent.com/1927770/44936836-f0d58180-ad3b-11e8-90c4-6026d665afc0.png)

As you can see, the gradients have the same sign even though the two loss functions are exact negatives of each other. Any insight you could provide into this behavior would be greatly appreciated. Thank you.

The following Dockerfile was used to generate the environment for the code above.
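For context on why this output looks wrong: for any differentiable loss L, the gradient of -L is exactly the negation of the gradient of L, so calling `.backward()` on the negated loss should flip the sign of every gradient. A minimal finite-difference sketch (using a hypothetical scalar stand-in for the rendering loss, not the renderer itself) illustrates the behavior we expected:

```python
def numerical_grad(f, x, eps=1e-6):
    # Central finite-difference approximation of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Hypothetical scalar stand-in for the rendering loss (not the renderer).
def loss(z):
    return (z - 2.0) ** 2

g_pos = numerical_grad(loss, 4.0)                 # gradient of  L at z = 4
g_neg = numerical_grad(lambda z: -loss(z), 4.0)   # gradient of -L at z = 4

# Negating the loss should flip the gradient's sign, not preserve it.
print(g_pos, g_neg)
```

In the renderer, by contrast, both loss signs yield a negative `z_delta` derivative, which is what prompted this report.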