
Inconsistency between the loss function in the code and in the paper #1

Closed
cjissmart opened this issue Nov 29, 2021 · 4 comments


@cjissmart

Congratulations on the great paper and on its acceptance at NeurIPS! I have a question about the loss function.
The code corresponding to the self-supervised loss is on lines 75 and 88 of 'main.py'. The formulation in the code is as follows.
[screenshots of lines 75 and 88 of main.py, where the invariance term minimized is $-\operatorname{tr}(Z_A^\top Z_B)$]
However, the formulation in the paper is:

[screenshots of the paper's loss, whose invariance term is $\|Z_A - Z_B\|_2^2$]
They are not consistent. Why is this the case?

@hengruizhang98
Owner

hengruizhang98 commented Nov 29, 2021

Thanks for your interest in our work!
These two implementations are equivalent. Note that $Z$ is standardized (zero mean and standard deviation $1/\sqrt{N}$ along each dimension), so $\|Z_A\|_2^2 = \|Z_B\|_2^2 = D$. Then

$$\mathcal{L}_{inv} = \|Z_A - Z_B\|_2^2 = \|Z_A\|_2^2 + \|Z_B\|_2^2 - 2\operatorname{tr}(Z_A^\top Z_B) = 2D - 2\operatorname{tr}(Z_A^\top Z_B),$$

so minimizing $\mathcal{L}_{inv}$ is equivalent to maximizing $\operatorname{tr}(Z_A^\top Z_B)$, which is why the code minimizes $-\operatorname{tr}(Z_A^\top Z_B)$ instead.
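Below is a minimal numerical check of this identity (a sketch in plain PyTorch, not the repository's code; the sizes `N`, `D` and the `standardize` helper are hypothetical, following the standardization described above):

```python
import torch

torch.manual_seed(0)
N, D = 512, 64  # hypothetical sizes: N samples, D embedding dimensions

def standardize(z):
    # Zero mean per dimension and population std 1/sqrt(N),
    # so that ||z||_2^2 = D exactly (as assumed above).
    z = z - z.mean(dim=0)
    std = z.pow(2).mean(dim=0).sqrt()
    return z / (std * N ** 0.5)

z_a = standardize(torch.randn(N, D))
z_b = standardize(torch.randn(N, D))

# Paper form: L_inv = ||Z_A - Z_B||_2^2
loss_paper = (z_a - z_b).pow(2).sum()

# Code form: 2D - 2 * tr(Z_A^T Z_B)
loss_code = 2 * D - 2 * torch.trace(z_a.T @ z_b)

assert torch.allclose(loss_paper, loss_code, atol=1e-2)
print(loss_paper.item(), loss_code.item())  # the two agree up to float error
```

Since the $2D$ term is constant, the two objectives have identical gradients, so training is unaffected by which form is implemented.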

@cjissmart
Author

I found the explanation in Appendix E.1.

@hengruizhang98
Owner

Yes, we also explain this in Appendix E.1 of the paper.

@cjissmart
Author

Thanks for the reply!
