Hi, thank you for the great work, Jianwei! I was wondering, for distributed training, do you:
OR
I've seen implementations of contrastive pretraining methods, such as SimCLR, do the 1st option:
https://github.com/Spijkervet/SimCLR/blob/cd85c4366d2e6ac1b0a16798b76ac0a2c8a94e58/simclr/modules/gather.py#L5
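For reference, the linked SimCLR gather is typically implemented along these lines: a custom `torch.autograd.Function` wrapping `all_gather` so gradients still flow back to each rank's local features (plain `dist.all_gather` is not differentiable). This is only a sketch of that pattern, not the UniCL authors' code; `gather_features` is a hypothetical helper name.

```python
import torch
import torch.distributed as dist


class GatherLayer(torch.autograd.Function):
    """all_gather with a backward pass, so the contrastive loss can use
    negatives from every GPU while gradients still reach local features."""

    @staticmethod
    def forward(ctx, x):
        out = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(out, x)
        return tuple(out)

    @staticmethod
    def backward(ctx, *grads):
        # sum gradients across ranks, then keep this rank's slice
        all_grads = torch.stack(grads)
        dist.all_reduce(all_grads)
        return all_grads[dist.get_rank()]


def gather_features(x):
    """Concatenate features from all GPUs; no-op for single-process runs."""
    if dist.is_available() and dist.is_initialized():
        return torch.cat(GatherLayer.apply(x), dim=0)
    return x
```

With this, each rank computes the contrastive loss over `world_size * batch_size` features instead of only its local batch.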
I ask because your code has a comment that says "# gather features from all gpus", but, if I'm not mistaken, I don't actually see where the features are gathered across all GPUs:
UniCL/main.py, line 177 (commit 4f680ff)
Thanks!