Hello! First, thank you for the awesome work.

I have a question about Pytorch_Wasserstein.ipynb: in `WassersteinLossVanilla`, why is it `self.K = torch.exp(-self.cost/self.lam)`? Shouldn't it be `self.K = torch.exp(-self.cost*self.lam)`? Mocha uses the latter: https://github.com/pluskid/Mocha.jl/blob/5e15b882d7dd615b0c5159bb6fde2cc040b2d8ee/src/layers/wasserstein-loss.jl#L33

Did you change it because of the note "Note that we use a different convention for $\lambda$ (i.e. we use $\lambda$ as the weight for the regularisation; later versions of the above use $\lambda^{-1}$ as the weight)"? If so, what is the reason for that choice?
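For concreteness, here is how I read the two conventions; the objectives below are my own reconstruction from the two code lines (with cost matrix $M$ and entropy $h(T) = -\sum_{ij} T_{ij} \log T_{ij}$), so treat them as an assumption:

$$\min_{T} \langle T, M \rangle - \frac{1}{\lambda}\, h(T) \;\Longrightarrow\; K = e^{-\lambda M} \quad \text{(Mocha)}$$

$$\min_{T} \langle T, M \rangle - \lambda\, h(T) \;\Longrightarrow\; K = e^{-M/\lambda} \quad \text{(notebook)}$$

If that reading is right, the two are related by substituting $\lambda \mapsto 1/\lambda$.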
Hi,

They're mostly equivalent; to me, the more natural view seemed to be $\lambda$ having the same units as the cost. In the meantime I have revised the notebook to feature a kernel and added a write-up of my maths.
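A minimal sketch of the equivalence, in case it helps. This is standalone PyTorch with a hypothetical cost matrix and a generic textbook Sinkhorn iteration, not the notebook's exact code:

```python
import torch

def sinkhorn_plan(K, a, b, n_iter=200):
    # Generic Sinkhorn fixed-point iteration on a Gibbs kernel K:
    # alternately rescale so the plan diag(u) K diag(v) has marginals a, b.
    u = torch.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

torch.manual_seed(0)
cost = torch.rand(4, 4)     # hypothetical cost matrix
a = torch.full((4,), 0.25)  # uniform source marginal
b = torch.full((4,), 0.25)  # uniform target marginal

lam = 0.1                                 # notebook convention: same units as cost
K_notebook = torch.exp(-cost / lam)       # K = exp(-M / lambda)
K_mocha = torch.exp(-cost * (1.0 / lam))  # Mocha convention with lambda' = 1/lambda

plan_notebook = sinkhorn_plan(K_notebook, a, b)
plan_mocha = sinkhorn_plan(K_mocha, a, b)
print(torch.allclose(plan_notebook, plan_mocha))  # True: same kernel, same plan
```

The only practical difference is how the hyperparameter reads: large $\lambda$ means strong regularisation (a blurrier plan) in the notebook's convention and weak regularisation in Mocha's.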