Query regarding input transformation #13
Hi! At that point in the code the noise has already been applied. For example, the CIFAR-10 image translations are drawn from a uniform distribution over [-4, 4] on both axes. Thus, on line 208, an individual image in input and ema_input has already received a different random augmentation. We did not experiment with using the same noise on both sides. You can explore it yourself if you want to, by moving the image augmentation steps before TransformTwice in
mean-teacher/pytorch/mean_teacher/datasets.py
Lines 38 to 43 in 618c844
Please let us know what happens if you end up running this experiment. I have been wondering about it and meant to do it, but never actually did.
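For context, the TransformTwice wrapper referenced above applies the wrapped transform twice to the same input and returns the pair, so any randomness inside it is sampled independently for each view. A minimal sketch (the wrapper mirrors the one in mean_teacher/datasets.py; the random_translate stand-in is illustrative, not the repo's actual RandomTranslateWithReflect):

```python
import random

class TransformTwice:
    # Applies the wrapped transform twice to the same input and returns
    # both results; mirrors the wrapper in mean_teacher/datasets.py.
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, inp):
        return self.transform(inp), self.transform(inp)

def random_translate(img):
    # Illustrative stand-in for data.RandomTranslateWithReflect(4):
    # integer offsets drawn uniformly from [-4, 4] on both axes.
    dx, dy = random.randint(-4, 4), random.randint(-4, 4)
    return (img, dx, dy)

pair = TransformTwice(random_translate)
student_view, teacher_view = pair("cifar_image")
# The two views generally carry different offsets, because the wrapped
# transform is re-sampled on each of the two calls.
```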
So something like:

```python
train_transformation = transforms.Compose([
    data.RandomTranslateWithReflect(4),
    transforms.RandomHorizontalFlip(),
    data.TransformTwice(transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(**channel_stats)
    ]))
])
```
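The effect of that reordering can be checked without torchvision. A toy sketch, where augment and normalize are hypothetical stand-ins for the random augmentation and the deterministic ToTensor/Normalize steps:

```python
import random

class TransformTwice:
    # Same doubling wrapper as in mean_teacher/datasets.py.
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, inp):
        return self.transform(inp), self.transform(inp)

def compose(*fns):
    # Plain-Python stand-in for transforms.Compose.
    def run(x):
        for f in fns:
            x = f(x)
        return x
    return run

def augment(x):
    # Hypothetical stand-in for the random translation + flip.
    return x + random.uniform(-4, 4)

def normalize(x):
    # Hypothetical stand-in for the deterministic ToTensor + Normalize.
    return x / 255.0

# Current ordering: randomness inside TransformTwice -> the two views differ.
noisy_pair = TransformTwice(compose(augment, normalize))

# Suggested ordering: augment once, double only the deterministic steps
# -> both views share the exact same augmentation.
shared_pair = compose(augment, TransformTwice(normalize))

a, b = shared_pair(100.0)
assert a == b  # identical noise on the student and teacher side
```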
Thanks for your reply! I will let you know if I end up doing the experiment. I would think that the performance of Mean Teacher won't drop drastically, since the teacher and student outputs would still be very different, as the parameters of the two nets are very different. Perhaps approaches like the Pi-model (https://arxiv.org/pdf/1610.02242.pdf) would suffer more, since there input transformation and dropout are the only sources of variability.
Hey,
I guess input and ema_input are transformed versions of the same images, right? (see
mean-teacher/pytorch/main.py
Line 208 in 618c844)
If so, did you experiment with using the same input for both model and ema_model? Does using the same input lead to a drop in performance?
Thanks!