question about a part of code #7

Closed
Powercoder64 opened this issue Feb 17, 2022 · 1 comment
@Powercoder64

Hello,

Thanks for your great work.

I have a question:

Why do you use zeros as labels in the loss function and not the original labels?

I am referring to this part of the NCE function:

labels = torch.zeros(logits.shape[0], dtype=torch.long).cuda()

@zhang-can
Owner

Hi @Powercoder64 ,

Thanks for your interest in our work.

Following the MoCo code, we also use the cross-entropy (CE) loss for the implementation. Since we put the positives at the 0-th location of the logits, the ground-truth labels should all be 0. More details can be found in Algorithm 1 of the MoCo paper.
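
For illustration, here is a minimal sketch of that setup, assuming the MoCo-style InfoNCE formulation. The names q, k_pos, and queue, and the temperature value, are illustrative placeholders, not the exact code from this repo:

```python
# Minimal sketch of a MoCo-style InfoNCE loss (names and temperature are
# illustrative assumptions, not the exact code from this repository).
import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """q: (N, C) query features; k_pos: (N, C) positive key features;
    queue: (C, K) negative key features. All assumed L2-normalized."""
    # Positive logits: one similarity per query, shape (N, 1).
    l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)
    # Negative logits against the queue, shape (N, K).
    l_neg = torch.einsum("nc,ck->nk", q, queue)
    # Concatenating [positive | negatives] puts the positive at column 0,
    # so the correct class index is 0 for every sample.
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Because the positive logit is always placed at column 0, cross-entropy with all-zero labels is exactly the InfoNCE objective.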
