self.gradInput[2] = torch.exp(-input[2]):cmul(torch.add(target,-1,input[1]):pow(2)):add(-0.5)
It seems to me that the gradient updating step should be:
self.gradInput[2] = torch.exp(-input[2]):cmul(torch.add(target,-1,input[1]):pow(2)):mul(0.5):add(-0.5)
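For what it's worth, here is a minimal scalar sketch (plain Lua, not the repo's code) comparing both expressions against a finite-difference estimate. The forward function and the test values are my own assumptions, based on the Gaussian log-likelihood this criterion appears to compute:

-- Hypothetical standalone check; forward() is an assumed Gaussian
-- log-likelihood with input[2] = log(sigma^2), not the repo's code.
local function forward(x, mu, logvar)
  return -0.5 * (logvar + math.log(2 * math.pi))
         - 0.5 * (x - mu)^2 * math.exp(-logvar)
end

local x, mu, logvar, eps = 1.3, 0.4, -0.2, 1e-6
-- central finite difference with respect to log(sigma^2)
local numeric = (forward(x, mu, logvar + eps)
               - forward(x, mu, logvar - eps)) / (2 * eps)
local current  = math.exp(-logvar) * (x - mu)^2 - 0.5        -- existing line
local proposed = 0.5 * math.exp(-logvar) * (x - mu)^2 - 0.5  -- with :mul(0.5)
print(numeric, current, proposed)

Note this only checks the derivative of the log-likelihood as written above; if the criterion is meant to be minimized rather than maximized, gradInput would carry the opposite overall sign.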
Commit c074ffa
Hmm, there is definitely something wrong, but your solution is not correct.
input[1] = mu
input[2] = log(sigma^2)
The forward is: -0.5 * (log(sigma^2) + log(2pi)) - 0.5 * (x - mu)^2 / sigma^2
The backward for log(sigma^2) works out as follows: the derivative of 1/sigma^2 with respect to log(sigma^2) is -exp(-log(sigma^2)) (using 1/sigma^2 = exp(-log(sigma^2))). That minus cancels the minus in front of the 0.5 * (x - mu)^2 term, and you reach the backward mentioned above.
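Spelled out, with t = input[2] = log(sigma^2) (a sketch of the same steps, in the notation used above):

L(t)  = -0.5 * (t + log(2pi)) - 0.5 * (x - mu)^2 * exp(-t)
dL/dt = -0.5 - 0.5 * (x - mu)^2 * (-exp(-t))
      = -0.5 + 0.5 * (x - mu)^2 * exp(-t)

Whether gradInput[2] should be this quantity or its negation depends on the sign convention of the criterion (maximizing the log-likelihood versus minimizing its negative).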
See the commit for further details :). Thanks for pointing it out!
fixes #4 (commit 6eae75c)