Update Mean-Teacher and FixMatch Self-Training Scheme(s) #116
Conversation
anwai98
commented
Mar 30, 2023
- Mean-Teacher training parameters updated
- AdaMT training parameters updated
Thanks! I left a few comments. The most important thing to address is making sure that pseudo labels are not used in gradient computation.
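To illustrate the point about gradient computation, here is a minimal sketch of the usual PyTorch pattern: pseudo labels are produced inside a no-grad context (and/or detached), so the student loss cannot backpropagate into the teacher. The `teacher`/`student` models below are stand-ins, not the repository's actual classes.

```python
import torch

# stand-ins for the EMA teacher and the student network
teacher = torch.nn.Linear(4, 2)
student = torch.nn.Linear(4, 2)

x = torch.randn(8, 4)
with torch.no_grad():  # no autograd graph is recorded for the teacher forward
    pseudo = torch.softmax(teacher(x), dim=1)

# even if no_grad were forgotten, detach() severs any graph explicitly
pseudo = pseudo.detach()

loss = torch.nn.functional.mse_loss(torch.softmax(student(x), dim=1), pseudo)
loss.backward()  # gradients flow only into the student parameters
```

After `backward()`, the teacher's parameters have no gradients, which is exactly the behavior the review asks to verify.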
- # TODO
- def strong_augmentations():
-     pass
+ def strong_augmentations(p=0.5):
Intuitively I would increase the probability further. With p=0.5 you still have a fairly high probability that no transformation is applied (1/2 * 3/4 * 1/2 = 3/16). Since you're running the experiments with these settings now, we should probably keep them for the moment, but it would eventually make sense to double-check whether higher probabilities change the results.
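The arithmetic above can be checked directly: the chance that no augmentation fires is the product of (1 - p_i) over the individual application probabilities. The three probabilities used below (1/2, 1/4, 1/2) are inferred from the 1/2 * 3/4 * 1/2 factors in the comment and are an assumption, not taken from the actual code.

```python
def no_op_probability(probs):
    """Probability that none of the augmentations is applied,
    given each augmentation's independent application probability."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return out

# assumed per-augmentation probabilities: 1/2, 1/4, 1/2
print(no_op_probability([0.5, 0.25, 0.5]))  # 0.1875 == 3/16
```

Raising the individual probabilities shrinks this product quickly, which is the reviewer's argument for a higher p.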
Okay. Will increase the probability for the next set of experiments and update it likewise. Thanks!
self._kwargs = kwargs

def get_distribution_alignment(self, pseudo_labels, label_threshold=0.5):
I didn't fully go through this, and I assume you adapted it from our previous code for the distribution alignment.
Still, it would be good to check this again and to add some comments / formulas here.
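For reference, a hedged sketch of distribution alignment as it is usually formulated in FixMatch: the model's class probabilities are scaled by the ratio of a target distribution (e.g. uniform) to a running estimate of the model's own prediction distribution, then renormalized. The function and argument names below are illustrative, not the repository's actual API.

```python
def distribution_alignment(probs, target_dist, model_dist, eps=1e-6):
    """Align predicted class probabilities toward a target distribution.

    probs       -- the model's probabilities for one sample
    target_dist -- desired marginal class distribution (e.g. uniform)
    model_dist  -- running average of the model's predictions
    """
    # scale each class by target / observed, then renormalize to sum to 1
    scaled = [p * t / (m + eps) for p, t, m in zip(probs, target_dist, model_dist)]
    norm = sum(scaled)
    return [s / norm for s in scaled]

# if the model over-predicts class 0 on average, alignment shifts
# probability mass back toward class 1
aligned = distribution_alignment([0.7, 0.3], [0.5, 0.5], [0.8, 0.2])
```

Writing the formula out like this (aligned = Normalize(q * p_target / p_model)) would be a good candidate for the comments the review asks for.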
Thanks for all the feedback! The expected changes have been taken care of (in both the self-training approaches).
Looks good overall, but we need to keep the forward_context around the pseudo_labeler.
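The pattern being asked for can be sketched generically: whatever context the trainer uses for forward passes must also wrap the pseudo labeler call. The stand-in below uses a plain contextlib context manager with a flag in place of the trainer's real forward_context (which in PyTorch code is typically torch.no_grad() or an autocast context); all names here are hypothetical.

```python
from contextlib import contextmanager


class Trainer:
    """Hypothetical stand-in for a self-training trainer."""

    def __init__(self, pseudo_labeler):
        self.pseudo_labeler = pseudo_labeler
        self.in_forward_context = False

    @contextmanager
    def forward_context(self):
        # stand-in for torch.no_grad() / autocast in the real trainer
        self.in_forward_context = True
        try:
            yield
        finally:
            self.in_forward_context = False

    def pseudo_label_step(self, batch):
        # the point of the review comment: keep the forward_context
        # around the pseudo_labeler call
        with self.forward_context():
            return self.pseudo_labeler(self, batch)


def labeler(trainer, batch):
    # a toy labeler that checks it runs inside the forward context
    assert trainer.in_forward_context
    return [b > 0 for b in batch]


trainer = Trainer(labeler)
out = trainer.pseudo_label_step([-1.0, 2.0])
```

With a real torch.no_grad() in place of the flag, this guarantees the pseudo labels never enter the autograd graph, which matches the earlier comment about gradient computation.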
Thanks, looks all good now!