
Update Mean-Teacher and FixMatch Self-Training Scheme(s) #116

Merged: 8 commits, merged into constantinpape:main on Apr 2, 2023

Conversation

@anwai98 (Contributor) commented on Mar 30, 2023

  • Mean-Teacher training parameters updated
  • AdaMT training parameters updated

@anwai98 changed the title from "Update Mean-Teacher Self-Training Scheme(s)" to "Update Mean-Teacher and FixMatch Self-Training Scheme(s)" on Mar 31, 2023
@constantinpape (Owner) left a comment

Thanks! I left a few comments. The most important thing to address is making sure that pseudo labels are not used in gradient computation.
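A minimal sketch of the point being made (not the torch_em API; models, shapes, and names are illustrative): pseudo labels should be generated without gradient tracking, so the unsupervised loss only backpropagates through the student.

    import torch

    # Stand-ins for the teacher and student models; illustrative only.
    teacher = torch.nn.Conv2d(1, 1, 3, padding=1)
    student = torch.nn.Conv2d(1, 1, 3, padding=1)
    x_weak, x_strong = torch.rand(2, 1, 1, 64, 64)

    # Generate pseudo labels without gradient tracking (torch.no_grad or .detach()).
    with torch.no_grad():
        pseudo_labels = torch.sigmoid(teacher(x_weak))

    # The unsupervised loss now only backpropagates into the student parameters.
    loss = torch.nn.functional.mse_loss(torch.sigmoid(student(x_strong)), pseudo_labels)
    loss.backward()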

Reviewed diff (the TODO stub replaced by a parameterized signature):

    - # TODO
    - def strong_augmentations():
    -     pass
    + def strong_augmentations(p=0.5):
@constantinpape (Owner):

Intuitively I would increase the probability further. With p=0.5 there is still a fairly high probability that no transformation is applied at all (1/2 * 3/4 * 1/2 = 3/16). Since you're already running the experiments with these settings we should keep them for now, but it would eventually make sense to double-check whether higher probabilities change the results.
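A quick check of the number quoted above, taking the three per-transform skip probabilities (1/2, 3/4, 1/2) from the comment:

    # Probability that none of the three transforms fires, using the skip
    # probabilities quoted in the review comment.
    p_no_transform = (1 / 2) * (3 / 4) * (1 / 2)
    print(p_no_transform)  # 0.1875 == 3/16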

@anwai98 (Contributor, Author):

Okay, I will increase the probability for the next set of experiments and update it accordingly. Thanks!
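For reference, a hypothetical sketch of what a strong augmentation pipeline parameterized by a per-transform probability p could look like; plain torch operations, not the actual torch_em transforms, and the real per-transform probabilities differ as discussed above.

    import torch

    def strong_augmentations(p=0.5):
        # Each transform fires independently with probability p; the transforms and
        # their parameters here are illustrative.
        def _augment(x):
            if torch.rand(1).item() < p:
                x = torch.flip(x, dims=(-1,))                # horizontal flip
            if torch.rand(1).item() < p:
                x = x + 0.1 * torch.randn_like(x)            # additive Gaussian noise
            if torch.rand(1).item() < p:
                x = x * (0.8 + 0.4 * torch.rand(1).item())   # random intensity scaling
            return x
        return _augment

    augment = strong_augmentations(p=0.5)
    x_strong = augment(torch.rand(1, 1, 64, 64))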

Five resolved review threads on torch_em/self_training/fix_match.py (one of them on an outdated diff).

self._kwargs = kwargs

def get_distribution_alignment(self, pseudo_labels, label_threshold=0.5):
@constantinpape (Owner):

I didn't fully go through this, and I assume you adapted it from our previous code for the distro alignment.
Still, it would be good to check this again and to give some comments / formulas here.
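A minimal sketch of the FixMatch-style distribution alignment idea for binary pseudo labels, assuming the predicted foreground probabilities are rescaled towards a target foreground fraction before thresholding; the function name, scaling, and defaults are illustrative, not the torch_em implementation.

    import torch

    def distribution_alignment(pseudo_probs, target_fraction=0.5, label_threshold=0.5, eps=1e-6):
        # Rescale the predicted foreground probabilities so that their mean matches
        # the expected foreground fraction, then re-threshold to get hard labels.
        current_fraction = pseudo_probs.mean()
        aligned = (pseudo_probs * target_fraction / (current_fraction + eps)).clamp(0.0, 1.0)
        return (aligned > label_threshold).float()

    pseudo_labels = distribution_alignment(torch.rand(1, 1, 64, 64))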

@anwai98 (Contributor, Author) commented on Mar 31, 2023

Thanks for all the feedback! The requested changes have been addressed in both self-training approaches.

@constantinpape (Owner) left a comment

Looks good overall, but we need to keep the forward_context around the pseudo_labeler.
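A sketch of the requested pattern, assuming forward_context is a context manager (for example torch.autocast for mixed precision, or a no-op) that must wrap the teacher forward pass producing the pseudo labels; all names here are illustrative.

    import contextlib
    import torch

    teacher = torch.nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the teacher model
    forward_context = contextlib.nullcontext         # or e.g. torch.autocast("cuda")
    x_weak = torch.rand(1, 1, 64, 64)

    # The forward_context wraps pseudo label generation; no_grad keeps the pseudo
    # labels out of the gradient computation.
    with forward_context(), torch.no_grad():
        pseudo_labels = torch.sigmoid(teacher(x_weak))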

Review threads: torch_em/self_training/fix_match.py (one resolved, one outdated and resolved); torch_em/self_training/mean_teacher.py (two outdated and resolved).
@anwai98 (Contributor, Author) commented on Mar 31, 2023

The forward_context is now used when generating the pseudo labels. Thanks!

@constantinpape (Owner) left a comment

Thanks, looks all good now!

@constantinpape merged commit 944e1cf into constantinpape:main on Apr 2, 2023