The current implementation contains a bug in the dataloader #14
Comments
Hi Pan, Thanks for pointing that out! That bug doesn’t occur if we use the spawn start method like the following:
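The snippet the author refers to is not shown in the thread. A minimal sketch of how the spawn start method is typically selected, using the stdlib `multiprocessing` module (`torch.multiprocessing` exposes the same `set_start_method()` call); the `main()` body is a placeholder:

```python
import multiprocessing as mp

# Select the spawn start method before any worker processes are created;
# torch.multiprocessing exposes the same set_start_method() API.
# force=True allows overriding a previously chosen method.
mp.set_start_method("spawn", force=True)

def main():
    # placeholder: model, DataLoader(num_workers=...), training loop
    pass
```

With spawn, each worker starts a fresh interpreter and reseeds its RNGs, instead of inheriting a copy of the parent's RNG state as fork does.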
For all experiments conducted in the paper, we use the spawn setting. We will fix the bug in the released code soon. Thanks,
@zaiweizhang Great. Looking forward to your updated code and results. Thanks.
So calling main with
Actually, you also need to change the start method to spawn. We have changed it in main.py. Please try again! Sorry for the trouble.
Ok, thanks for the information. Does this bug occur across all PyTorch frameworks? For example, does it happen when fine-tuning with OpenPCDet?
@baraujo98 As mentioned, this is a common bug that impacts a lot of PyTorch frameworks. A blog has discussed this as well here. "The bug is easy to make. In some cases, it has minimal effect on final performance. In others, the identical augmentations can cause severe degradations."
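Posts discussing this bug usually recommend reseeding each worker through the DataLoader's `worker_init_fn` hook. A hypothetical sketch of the seed-derivation logic, with stdlib `random` standing in for numpy (real code would pass the function to `DataLoader(worker_init_fn=...)` and call `np.random.seed` inside it):

```python
import random

def worker_init_fn(worker_id, base_seed=0):
    # Illustrative: derive a distinct seed per worker so random
    # augmentations differ across workers instead of repeating.
    seed = (base_seed + worker_id) % 2**32
    random.seed(seed)
    return seed

# Each of four workers receives its own seed.
seeds = [worker_init_fn(i, base_seed=42) for i in range(4)]
```

The `base_seed` parameter and the exact derivation are assumptions for illustration; the point is only that no two workers share a seed.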
@baraujo98 I do not think that it will affect fine-tuning with OpenPCDet. It's better to check their codebase.
Ok, it looks to me like OpenPCDet does not set a fixed seed by default, hinted by the fact that a
Feel free to reopen. Closing it for now. |
Dear authors,
I think this repo is an excellent implementation of your paper. However, it shares a common bug in setting the seed for data augmentation, as pointed out here and here.
I have tested this repo and found that each sample only ever receives the same two augmented versions across all training epochs, which is problematic for the self-contrastive learning setting.
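The repetition arises because, under the fork start method, every dataloader worker begins from a copy of the parent process's RNG state, so all workers draw identical augmentation parameters. A minimal stdlib simulation of the effect (no PyTorch; all names are illustrative):

```python
import random

# Simulate fork: each worker starts from a *copy* of the parent's RNG state.
parent_state = random.getstate()

def forked_worker_draw(state):
    rng = random.Random()
    rng.setstate(state)
    return rng.random()  # identical value in every "worker"

fork_draws = [forked_worker_draw(parent_state) for _ in range(4)]

# With spawn (or a per-worker reseed), each worker is seeded independently
# and draws a different augmentation parameter.
def reseeded_worker_draw(base_seed, worker_id):
    rng = random.Random(base_seed + worker_id)
    return rng.random()

spawn_draws = [reseeded_worker_draw(1234, i) for i in range(4)]
```

Here `fork_draws` contains one repeated value while `spawn_draws` contains four distinct ones, mirroring the fixed-augmentation behavior observed in training.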
You may want to revisit the experiments in the paper.
Best,
Pan He