This is excellent work, but I have a few questions about the code:

1. In training/train.py, line 32, the dimensions of q and k are supposed to be (n, c). I think the correct code is `l_pos = torch.einsum('nc,nc->n', [q, k]).unsqueeze(-1)`.
2. The memory bank update does not seem to correspond to the original paper (Sec. 3.3, Eq. 5). I am confused about `torch.mul(torch.mean(torch.mul(p_qd, l_neg), dim=0), d_norm)` on line 63 and `torch.div(g, torch.norm(d, dim=0))` on line 64.

I kindly hope you can help me understand. Thank you very much.
Thanks a lot for your interest in our work!

1. First, we use matrix multiplication in a different way in the implementation: the other examples in the same batch are also treated as negatives.
2. This is a time-efficient implementation; it is equivalent to the one in our paper. Alternatively, you can simply use the SGD optimizer to optimize the memory bank by taking the negative contrastive loss with a smaller temperature.
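To illustrate the difference between the two forms, here is a minimal sketch (not the repository's actual code, and the shapes are made up): the per-example einsum produces only the positive logits, while a full matrix product additionally yields every cross-pair similarity, whose off-diagonal entries can serve as in-batch negatives.

```python
import torch
import torch.nn.functional as F

# Hypothetical batch of n query/key embeddings of dimension c.
n, c = 4, 8
q = F.normalize(torch.randn(n, c), dim=1)
k = F.normalize(torch.randn(n, c), dim=1)

# Per-example positive logits, shape (n, 1): only q_i . k_i.
l_pos = torch.einsum('nc,nc->n', q, k).unsqueeze(-1)

# Full similarity matrix, shape (n, n): the diagonal holds the
# positives, and each off-diagonal entry q_i . k_j (i != j) acts
# as an extra in-batch negative.
logits = torch.einsum('nc,mc->nm', q, k)

# The diagonal of the matrix product matches the per-example positives.
assert torch.allclose(logits.diag().unsqueeze(-1), l_pos)
```

With this layout, a cross-entropy loss whose target is the diagonal index recovers the usual contrastive objective while reusing the batch as the negative pool.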