Following the MoCo code, we also use cross-entropy (CE) loss for the implementation. Since we put the positives at the 0-th position of the logits, the ground-truth labels should all be 0. More details can be found in Algorithm 1 of the MoCo paper.
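To see why all-zero labels recover the InfoNCE objective, here is a minimal sketch (NumPy standing in for PyTorch; the `(N, 1 + K)` logits layout with the positive in column 0 follows the MoCo-style `cat([l_pos, l_neg])`, and all names are illustrative):

```python
import numpy as np

def info_nce_as_ce(logits):
    """Cross-entropy with target class 0 for every row.

    logits: (N, 1 + K) array where column 0 holds the positive
    similarity and columns 1..K hold the K negative similarities.
    """
    # Log-softmax over each row, numerically stabilized.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # CE with label 0 picks out -log p(class 0) for every sample,
    # i.e. exactly -log( exp(pos) / sum_j exp(logit_j) ).
    return -log_probs[:, 0].mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 1 + 8))  # 4 samples, 1 positive + 8 negatives

# Direct InfoNCE form, -mean( pos - logsumexp(all) ), for comparison.
direct = -np.mean(logits[:, 0] - np.log(np.exp(logits).sum(axis=1)))
assert np.isclose(info_nce_as_ce(logits), direct)
```

So the label is 0 not because class identity is discarded, but because the "correct class" in this (1 + K)-way classification is, by construction, always the positive sitting at index 0.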
Hello,
Thanks for your great work.
I have a question:
Why do you use zeros as labels in the loss function and not the original labels?
I am talking about this part of the NCE function:
labels = torch.zeros(logits.shape[0], dtype=torch.long).cuda()