Final Loss equation #1
Comments
Please have a look at the updated NIPS version of the paper, which has some typos corrected.
Ah, apologies, this is not updated online yet. It will be soon.
According to the most recent version, the losses should still be multiplied by 1/2. Why is the 1/2 missing from your code?
I did the math from scratch, and there does seem to be agreement between the code and the most recent version of the paper. Apologies for the trouble.
Here you are: https://arxiv.org/pdf/1703.04977.pdf. I am implementing it in PyTorch. If you have done it, please share it.
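For reference, the heteroscedastic regression loss discussed in this thread (from the paper linked above) can be sketched in PyTorch as follows. This is a minimal sketch, not the authors' code: it assumes the network outputs a mean `mu` and a log-variance `log_var` (predicting s = log σ² for numerical stability, as the paper suggests), and the function name is mine.

```python
import torch

def heteroscedastic_loss(mu, log_var, y):
    """Per-sample loss 0.5 * exp(-s) * (y - mu)^2 + 0.5 * s, with s = log sigma^2.

    Predicting the log-variance avoids dividing by sigma^2 (no risk of
    division by zero) and lets the network output any real number.
    """
    precision = torch.exp(-log_var)  # 1 / sigma^2
    return (0.5 * precision * (y - mu) ** 2 + 0.5 * log_var).mean()

# Toy check: when mu == y the residual term vanishes, and with log_var = 0
# (sigma^2 = 1) the log term vanishes too, so the loss is exactly zero.
y = torch.tensor([1.0, 2.0])
mu = torch.tensor([1.0, 2.0])
log_var = torch.zeros(2)
loss = heteroscedastic_loss(mu, log_var, y)
```

In training, `mu` and `log_var` would be two heads of the same network, and the loss is backpropagated through both.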
@hardianlawi: I reproduced the Keras result using PyTorch. You can look at https://colab.research.google.com/drive/1_zsmQguerz0iy0J9Uu2Cs7oEHhj0QoXH However, the sigma2 does not work. Do you have any suggestions, @yaringal?
@John1231983
@Banyueqin: Thanks, but this is the result after fixing it.
No. We expect the result to be 10 and 0.
@hardianlawi My result goes to a negative number after some steps, and I think it comes from log_var (sigma). Would you mind giving me some suggestions?
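A note on the negative values reported above: a negative loss here is not necessarily a bug. The 0.5·log σ² term is negative whenever the predicted variance drops below 1, so the total objective can legitimately go below zero while still decreasing toward its true minimum. A minimal numeric check (a sketch in plain Python, assuming the single-sample loss 0.5·exp(−s)·r² + 0.5·s with s = log σ²):

```python
import math

def loss_term(residual, s):
    """Heteroscedastic loss for one sample: 0.5 * exp(-s) * r^2 + 0.5 * s."""
    return 0.5 * math.exp(-s) * residual ** 2 + 0.5 * s

# A small residual with predicted variance below 1 (s = log sigma^2 < 0)
# makes the log term dominate, driving the loss negative:
val = loss_term(0.1, -4.0)  # ~0.273 - 2.0, i.e. well below zero
```

So a loss curve that goes negative is expected when the model becomes confident (small σ) on well-fit points; a genuinely diverging σ is a separate issue.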
Hi @yaringal!
I have read the paper, and it is really amazing. Thanks for your team's hard work!
However, I have a question regarding the final equation and your Keras implementation.
The equation above has the loss multiplied by 1/2, but you didn't include this factor in the Keras implementation.
I experimented with including the 1/2 in the loss function, but then it couldn't converge. I am wondering whether the problem is in the paper or in the Keras implementation, because if I exclude the 1/2, it converges to the ground-truth std.
Best regards,
Hardian
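One observation on the 1/2 question raised in this thread: if the factor multiplies *both* terms of the loss, it is a global constant and cannot change the minimizer, only rescale the gradients (an effective learning-rate change). A small self-contained check (a sketch in plain Python, with my own helper names, assuming the loss c·(0.5·exp(−s)·r² + 0.5·s) for a fixed residual r):

```python
import math

def loss(s, r, c=1.0):
    """c * (0.5 * exp(-s) * r^2 + 0.5 * s): c rescales both terms equally."""
    return c * (0.5 * math.exp(-s) * r ** 2 + 0.5 * s)

def argmin_s(r, c, lo=-10.0, hi=10.0, steps=200001):
    """Grid-search the log-variance s minimizing the loss for residual r."""
    grid = (lo + i * (hi - lo) / (steps - 1) for i in range(steps))
    return min(grid, key=lambda s: loss(s, r, c))

r = 3.0
s_with_half = argmin_s(r, c=0.5)  # loss scaled by 1/2
s_without = argmin_s(r, c=1.0)    # unscaled loss
# Setting the derivative to zero gives s* = log(r^2) regardless of c,
# so both searches land on the same optimum.
```

So the observed difference in convergence is plausibly an optimization effect (the 1/2 halves the gradient magnitude) rather than a difference in what the loss is minimizing.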