Questions about _train_epoch in trainer.py #11
Thank you for raising this question. Yes, you are correct at first glance. It looks like the epoch size is However, in the fully-supervised setting,
Thanks for your answer.
I guess you may have misunderstood the config for the SupOnly method. Note that for SupOnly, the mode should still be 'semi' rather than 'supervised', so the epoch size is still
Ok, I got it. Thank you very much!
if self.mode == 'supervised':
    # dataloader = iter(self.supervised_loader)
    # tbar = tqdm(range(len(self.supervised_loader)), ncols=135)
    dataloader = iter(cycle(self.supervised_loader))
    tbar = tqdm(range(self.iter_per_epoch), ncols=135)
else:
    dataloader = iter(zip(cycle(self.supervised_loader), cycle(self.unsupervised_loader)))
    tbar = tqdm(range(self.iter_per_epoch), ncols=135)
The comment part is your original code.
In the semi-supervised method, 'cycle' is used to expand the number of iterations over the labeled images. Obviously, the number of iterations in the fully-supervised setting is much smaller. I think this comparison may be unfair. What is your opinion or modification plan? Looking forward to your answer, thanks!
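To make the concern concrete, here is a minimal, self-contained sketch of the iteration behavior being discussed. The `labeled`/`unlabeled` lists and batch names below are hypothetical stand-ins for the two dataloaders; the point is that `zip(cycle(...), cycle(...))` never exhausts, so the effective epoch length is set entirely by `iter_per_epoch`, not by the length of the labeled loader.

```python
from itertools import cycle

# Hypothetical stand-ins for self.supervised_loader / self.unsupervised_loader:
labeled = [f"L{i}" for i in range(3)]    # 3 labeled batches
unlabeled = [f"U{i}" for i in range(5)]  # 5 unlabeled batches

iter_per_epoch = 7  # chosen freely; not tied to len(labeled)

# Because both iterators are cycled, zip yields pairs indefinitely;
# each loader just wraps around when exhausted.
loader = iter(zip(cycle(labeled), cycle(unlabeled)))
pairs = [next(loader) for _ in range(iter_per_epoch)]
print(pairs)
# → [('L0', 'U0'), ('L1', 'U1'), ('L2', 'U2'), ('L0', 'U3'),
#    ('L1', 'U4'), ('L2', 'U0'), ('L0', 'U1')]
```

This is why the commented-out supervised branch (which ran only `len(self.supervised_loader)` iterations) sees far fewer updates per epoch than the semi-supervised branch, and why wrapping the supervised loader in `cycle` and iterating `iter_per_epoch` times equalizes the comparison.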