Replies: 2 comments
-
I am not sure I understand your question. Negative log-likelihood is an "abstract" loss function, e.g. it becomes MSE in regression (line 198 in …)
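To illustrate the point numerically (a sketch, not code from this repository): for a Gaussian likelihood with fixed unit variance, the NLL equals 0.5 × MSE plus an additive constant, so minimizing one minimizes the other and their gradients with respect to the prediction differ only by a constant factor.

```python
import numpy as np

def gaussian_nll(y, y_pred, sigma=1.0):
    # Average negative log-likelihood of y under N(y_pred, sigma^2)
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - y_pred)**2 / (2 * sigma**2))

def mse(y, y_pred):
    return np.mean((y - y_pred)**2)

y = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])

# With sigma = 1: NLL = 0.5 * MSE + 0.5 * log(2*pi), a constant offset,
# so gradients w.r.t. y_pred are proportional and the optimum is identical.
constant = 0.5 * np.log(2 * np.pi)
assert np.isclose(gaussian_nll(y, y_pred) - constant, 0.5 * mse(y, y_pred))
```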
-
Thank you for your explanation!
-
Hi,
thank you for the great implementation of meta-learning algorithms.
We are trying to evaluate the PLATIPUS algorithm, and we noted that the paper requires a negative log-likelihood (NLL) loss (page 3, Section 3, Preliminaries), yet your adaptation step uses an MSE loss by default.
We were wondering whether this is an adaptation of the original paper on your part, or whether we are missing a step where the NLL is calculated from the MSE. Later, the loss we suspect to be an MSE loss is logged to TensorBoard as an NLL again (Platipus.py, line 216). This seems especially important since in Platipus.py, line 184, the gradient is computed on this loss function. To our understanding, this gradient might differ significantly from the one intended in the paper, since it is computed on a different loss function. Is this an implementation trick to use the MSE instead of the NLL, or are we missing something in the implementation?
Thanks a lot in advance,
Leon