Congratulations on the great paper and on being accepted to NIPS! I have a question about the loss function.
The code corresponding to the self-supervised learning loss is on lines 75 and 88 of `main.py`. The formulation in the code is as follows.
However, the formulation in the paper should be:
They are not consistent. Why is this the case?
Thanks for your interest in our work!
These two implementations are equivalent. Since Z is standardized (zero mean and standard deviation 1/\sqrt{N} per dimension, i.e. each column has unit norm), we have || Z_A ||_F^2 = || Z_B ||_F^2 = D. Then L_Inv = || Z_A - Z_B ||_F^2 = || Z_A ||_F^2 + || Z_B ||_F^2 - 2 tr(Z_A^T Z_B) = 2D - 2 tr(Z_A^T Z_B). Since 2D is a constant, minimizing L_Inv is the same as maximizing tr(Z_A^T Z_B), so the code minimizes -tr(Z_A^T Z_B) instead.
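The identity above is easy to check numerically. A minimal sketch (this is illustrative code, not the repo's actual implementation; the `standardize` helper and the random embeddings are assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 128, 16  # batch size and embedding dimension (arbitrary for the demo)

def standardize(z):
    # Zero mean per dimension, then scale each column to unit norm
    # (equivalent to per-dimension standard deviation 1/sqrt(N)).
    z = z - z.mean(axis=0)
    return z / np.linalg.norm(z, axis=0)

Z_A = standardize(rng.normal(size=(N, D)))
Z_B = standardize(rng.normal(size=(N, D)))

lhs = np.sum((Z_A - Z_B) ** 2)            # || Z_A - Z_B ||_F^2
rhs = 2 * D - 2 * np.trace(Z_A.T @ Z_B)   # 2D - 2 tr(Z_A^T Z_B)
print(np.allclose(lhs, rhs))  # True
```

Because the two quantities differ only by the constant 2D and a factor of -2, their gradients point in the same direction, so either form can be used as the training objective.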