
There seems to be something strange when the data is loading #193

Closed
MellowMemories opened this issue Feb 17, 2024 · 6 comments

Comments

@MellowMemories

MellowMemories commented Feb 17, 2024

[screenshot of the USB configuration file]
Look at this official USB configuration: for every training epoch it performs 1024 iterations, and each iteration uses 8 labeled images. So each epoch would need 1024 * 8 = 8192 labeled images.
However, this configuration only provides 400 labeled images. I don't understand how this can work. Is it reasonable?
By the way, I am a complete novice in deep learning and semi-supervised learning.
Thanks a lot!
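To make the mismatch concrete, here is the arithmetic the question describes, using only the numbers given above:

```python
# Numbers from the question: the USB config runs 1024 iterations per
# epoch with 8 labeled images per iteration, but only 400 labels exist.
num_train_iter_per_epoch = 1024
labeled_batch_size = 8
num_labeled = 400

samples_drawn = num_train_iter_per_epoch * labeled_batch_size
print(samples_drawn)                # 8192 labeled samples drawn per epoch
print(samples_drawn / num_labeled)  # 20.48 -> each label is reused ~20x
```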

@Hhhhhhao
Collaborator

The DistributedSampler replicates the labeled data to fill out the training iterations in one epoch.
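As a rough sketch of what such replication looks like (illustrative only; the actual DistributedSampler/USB sampler logic differs in detail), a sampler can chain reshuffled copies of the index list until it has enough indices for the whole epoch:

```python
import random

def replicated_indices(dataset_len, num_samples, seed=0):
    """Sketch of how a sampler can 'replicate' a small labeled set:
    reshuffled copies of the index list are chained until num_samples
    indices have been produced."""
    rng = random.Random(seed)
    out = []
    while len(out) < num_samples:
        idxs = list(range(dataset_len))
        rng.shuffle(idxs)
        out.extend(idxs)
    return out[:num_samples]

# 400 labeled images, 1024 iterations * batch size 8 = 8192 draws/epoch
idxs = replicated_indices(400, 1024 * 8)
print(len(idxs))  # 8192 -- every one of the 400 indices appears ~20 times
```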

@MellowMemories
Author

MellowMemories commented Feb 20, 2024 via email

@AurelienGauffre

AurelienGauffre commented Feb 28, 2024

In semi-supervised learning, figuring out what counts as an "epoch" is tricky. Classical semi-supervised methods, as implemented in this USB package, use batches that contain both labeled and unlabeled examples in a particular ratio (often called $\mu$ in the literature, or 'uratio' in USB, set to 1 in your case). Because of this mixing, the balance of labeled and unlabeled data in your batches generally doesn't match the balance of the original dataset. So when you try to complete an "epoch", you'll inevitably go over some data points more than once, whether labeled or not, just to make sure the model sees everything. Even one of the FixMatch creators mentioned that they essentially just picked a definition of an epoch based roughly on the number of unlabeled examples in CIFAR-100, which shows that you should not read too much into that definition. This is also why you rarely see the term "epoch" in the semi-supervised literature; papers count "steps" instead.

PS: I may be wrong, but I believe the USB convention of defining one epoch as 1024 steps everywhere might originate from this original FixMatch choice on CIFAR-100.
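A small worked example of why the two notions of "epoch" diverge (the dataset sizes and variable names here are illustrative, not taken from USB's API):

```python
labeled_bs = 8
uratio = 1                    # unlabeled-per-labeled ratio (USB's 'uratio')
unlabeled_bs = labeled_bs * uratio

num_labeled, num_unlabeled = 400, 49600  # e.g. 50k images, 400 labeled
steps_per_labeled_pass = num_labeled // labeled_bs        # 50 steps
steps_per_unlabeled_pass = num_unlabeled // unlabeled_bs  # 6200 steps

# One pass over the labels takes 50 steps; one pass over the unlabeled
# data takes 6200. "One epoch" is therefore ambiguous, which is why
# papers report a fixed number of optimization steps instead.
print(steps_per_labeled_pass, steps_per_unlabeled_pass)
```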

@MellowMemories
Author

Thank you very much for clarifying my doubts.

I now have a clear understanding of the code organization and execution flow in this repository, and I have read through the recent papers on semi-supervised learning. I have gained a preliminary understanding of the methods used in the field: a supervised loss plus an auxiliary loss plus a pseudo-labeling loss. Building on this foundation, the USB code does an excellent job of abstracting the semi-supervised learning workflow. You and your team have done great work.

Regarding data loading, between my own experiments and your guidance, I believe I have grasped it. I have split my dataset into training, validation, and test sets in a 7:1:2 ratio. Within the training set, 20% of the data is labeled and 80% is unlabeled. Since the 'train_step' function loads data with the labeled set as the reference point, I only need to divide the size of my labeled set by 'train_batch_size' to obtain 'num_train_iters'. This ensures each labeled example is used exactly once per epoch. I am pursuing my own work based on this data-loading scheme.
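The scheme described above amounts to the following arithmetic (the dataset sizes here are hypothetical, chosen only to match the 7:1:2 split and 20% label ratio):

```python
import math

# Hypothetical sizes: a 7:1:2 split of 10,000 samples gives 7,000
# training samples, of which 20% (1,400) are labeled.
num_labeled = 1400
train_batch_size = 8

# One epoch = one pass over the labeled subset.
num_train_iters = math.ceil(num_labeled / train_batch_size)
print(num_train_iters)  # 175
```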

Once again, thank you for your explanations!

@ZahraaHM

ZahraaHM commented Mar 4, 2024


Thank you for opening this issue; it has enlightened me. As someone new to the field, I'm having difficulty understanding the execution flow in this repository, particularly how the label ratio is used when training the SSL algorithms and how to choose the num_labels parameter. Is there any intuition behind this?
It would be immensely helpful if you could share a screenshot of the configuration used in the example from your comment.

Additionally, I'm curious about your preferred method for running the code. Did you rely on the notebooks such as Beginner_Example.ipynb or Custom_Dataset.ipynb found in the notebooks folder, or is there a better approach?

Any guidance you can offer would be greatly appreciated. Thanks a lot.


github-actions bot commented May 4, 2024

Stale issue message
