A question about the "labels" #29
Comments
From my understanding, 'labels' is a confusing variable name because it actually refers to the 'class' of the positive pairs. With N the batch size, logits will be an (N_VIEW x N, N_VIEW x N - 1) matrix, and the cosine similarities of all the positive pairs are stored in the first column of logits.
So really, per the PyTorch documentation, input: logits, an (N_VIEW x N, N_VIEW x N - 1) Tensor; target: labels, an (N_VIEW x N) Tensor of class indices. According to the PyTorch CrossEntropyLoss formula, a label of 0 simply means the correct class for that row is column 0.
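To illustrate the point above, here is a minimal sketch (a NumPy stand-in for torch.nn.CrossEntropyLoss; the toy logits are made up): with the positives stored in column 0, a target of 0 for every row selects that column as the "correct class", so an all-zeros labels tensor is exactly what the loss needs.

```python
import numpy as np

def cross_entropy(logits, targets):
    # row-wise log-softmax, then pick the target column (what
    # torch.nn.CrossEntropyLoss does with class-index targets)
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

# toy batch of 4 rows; column 0 holds the positive-pair similarity
logits = np.array([[5.0, 1.0, 0.5],
                   [4.0, 0.2, 0.1],
                   [3.0, 0.3, 0.2],
                   [6.0, 0.1, 0.4]])
labels = np.zeros(len(logits), dtype=int)  # all zeros: positive is column 0
loss = cross_entropy(logits, labels)       # small, since column 0 dominates
```

Because the positive similarity is the largest entry in each row, the loss is close to zero, which is the behavior the all-zeros labels are meant to encode.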
I see! It's my fault. Thank you very much!
Hi! I'd like to confirm that N_VIEW == 2, as in the paper and the default args in the code. If N_VIEW > 2, then with logits.shape = (N_VIEW x N, N_VIEW x N - 1), each row contains at least one more positive pair (besides the one at index 0), which will be treated as a negative pair.
Yes. I think in that case, the loss function needs to be modified too.
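One possible modification for N_VIEW > 2 (a hypothetical sketch, not code from this repository) is to replace CrossEntropyLoss's single target column with a boolean mask of positive columns and average -log p over all positives in each row, in the style of the supervised contrastive (SupCon) loss. The toy similarities and mask below are made up:

```python
import numpy as np

def multi_positive_nce(sim, pos_mask):
    # sim: (M, K) similarity logits; pos_mask: (M, K) bool, True at positives.
    # Row-wise log-softmax, then mean of -log p over each row's positives.
    shifted = sim - sim.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_row = -(log_probs * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return per_row.mean()

# toy example: 4 rows, each with TWO positive columns (self-similarities
# assumed already removed, as in the repo's logits construction)
sim = np.array([[4.0, 3.5, 0.1, 0.2],
                [3.8, 4.2, 0.0, 0.3],
                [0.1, 0.2, 4.0, 3.9],
                [0.3, 0.1, 3.7, 4.1]])
pos_mask = np.array([[True, True, False, False],
                     [True, True, False, False],
                     [False, False, True, True],
                     [False, False, True, True]])
loss = multi_positive_nce(sim, pos_mask)
```

With a mask containing exactly one positive per row, this reduces to the standard CrossEntropyLoss case, so it is a drop-in generalization for more than two views.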
Have you solved this problem? I'd also like to increase the number of views. Thanks!
Sorry, I did not try the multi-view version...
Hi! I have a question about the definition of "labels" in the script "simclr.py".
On line 54 of "simclr.py", the authors defined:
labels = torch.zeros(logits.shape[0], dtype=torch.long).to(self.args.device)
So all the entries of "labels" are zeros. But I think, according to the paper, there should be an entry of 1 for the positive pair?
Thanks in advance for your reply!