There seems to be something strange when the data is loading #193
DistributedSampler will replicate the data to fill the training iterations in one epoch.
I am training in a single-machine, single-GPU environment with the default configuration, which uses DistributedSampler to load data. The behavior I expected was to iterate over the n labeled data points and ulabel_ratio * n unlabeled data points exactly once per epoch. However, upon inspecting the DistributedSampler code, I found that it iterates over batch_size * train_iters_per_epoch labeled data points in a single epoch, which far exceeds the number of labeled data points I have. Does this imply that some data points are being reused? Is that reasonable in a semi-supervised classification task? From my limited understanding, every data point should be iterated over only once within a single epoch. Thank you very much for your time and attention.
In semi-supervised learning, figuring out what counts as an "epoch" is tricky. Classical semi-supervised methods, as implemented in this USB package, use batches that contain both labeled and unlabeled examples in a fixed ratio, so an epoch is defined as a fixed number of training steps rather than as one pass over the labeled set. PS: I may be wrong, but I believe the USB convention of defining one epoch as 1024 steps everywhere might originate from the original FixMatch choice on CIFAR-100.
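For intuition, here is a minimal PyTorch sketch of that idea. It uses the stock RandomSampler rather than USB's actual DistributedSampler, and the dataset and numbers are made up; the point is just that drawing indices with replacement to fill a fixed step count silently reuses examples when the dataset is small:

```python
# Minimal sketch (NOT USB's actual sampler): define an "epoch" as a fixed
# number of steps by sampling indices with replacement, which can yield far
# more draws than the dataset contains.
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

n_labeled = 400               # hypothetical labeled-set size
batch_size = 8
train_iters_per_epoch = 1024

labeled_ds = TensorDataset(torch.randn(n_labeled, 3, 32, 32),
                           torch.randint(0, 100, (n_labeled,)))

# Draw batch_size * train_iters_per_epoch indices, with replacement, so one
# "epoch" is exactly 1024 steps regardless of the dataset size.
sampler = RandomSampler(labeled_ds, replacement=True,
                        num_samples=batch_size * train_iters_per_epoch)
loader = DataLoader(labeled_ds, batch_size=batch_size, sampler=sampler)

print(len(loader))  # 1024 batches -> 8192 draws from only 400 images
```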
Thank you very much for clarifying my doubts. I now have a clear understanding of the code organization and execution flow in this repository, and I have read through the recent papers on semi-supervised learning. I have gained a preliminary understanding of the methods used in the field: supervised loss + auxiliary loss + pseudo-labeling loss. Building on this foundation, the 'USB' code does an excellent job of abstracting the semi-supervised learning workflow; you and your team have done great work.

Regarding data loading, between my own experiments and your guidance, I believe I have grasped it well. I have divided my dataset into training, validation, and test sets in a 7:1:2 ratio; within the training set, 20% of the data is labeled and 80% is unlabeled. Since the 'train_step' function loads data with the labeled set as the reference point, all I need to do is divide the size of my labeled set by 'train_batch_size' to obtain 'num_train_iters' (sketched below). This ensures that each labeled example is used exactly once per epoch. Based on this way of loading data, I am also pursuing my own work. Once again, thank you for your explanations!
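To make that computation concrete, here is a small sketch. The dataset sizes are hypothetical, the variable names mirror this comment rather than USB's exact config keys, and the ulabel_ratio value is assumed for illustration:

```python
# Hedged sketch of the epoch sizing described above; names and numbers are
# illustrative, not taken from USB's configs.
import math

num_labeled = 7000 * 20 // 100   # hypothetical: 20% of a 7000-sample train split
train_batch_size = 8
ulabel_ratio = 4                 # assumed unlabeled-to-labeled ratio per batch

# One epoch = just enough iterations to show each labeled example once.
num_train_iters = math.ceil(num_labeled / train_batch_size)

print(num_train_iters)                                     # 175 iterations/epoch
print(num_train_iters * train_batch_size)                  # 1400 labeled draws
print(num_train_iters * train_batch_size * ulabel_ratio)   # 5600 unlabeled draws
```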
Thank you for opening this issue; it has enlightened me. As someone new to the field, I'm currently having difficulty understanding the execution flow within this repository, particularly how the label ratio is used when training the SSL algorithms and how to choose the num_labels parameter. Is there any intuition behind this? Additionally, I'm curious about your preferred way of running the code. Did you rely on the notebooks such as Beginner_Example.ipynb or Custom_Dataset.ipynb found in the notebooks folder, or is there a better approach? Any guidance you can offer would be greatly appreciated. Thanks a lot.
Look at this official configuration from USB: it performs 1024 iterations per training epoch, and each iteration uses 8 labeled images. From this we can conclude that each epoch draws 1024 * 8 = 8192 labeled images.
However, in this configuration we only have 400 labeled images. I don't understand how this can work. Is it reasonable?
By the way, I am a complete novice in deep learning and semi-supervised learning.
Thanks a lot!
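The numbers in this comment themselves show why an epoch here must reuse samples, as the earlier replies explain. A quick back-of-the-envelope check (plain Python, nothing USB-specific):

```python
# Back-of-the-envelope check: with only 400 labeled images, 1024 iterations of
# 8 labeled images each must reuse every image many times within one "epoch".
train_iters_per_epoch = 1024
labeled_batch_size = 8
num_labeled = 400

draws_per_epoch = train_iters_per_epoch * labeled_batch_size
print(draws_per_epoch)                # 8192 labeled draws per epoch
print(draws_per_epoch / num_labeled)  # ~20.5 passes over the labeled set
```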