
Did you use random augment as strong aug as claimed in the paper? #4

Closed
CoinCheung opened this issue Jan 28, 2020 · 9 comments

@CoinCheung

Hi,

Thanks for bringing this work to us. I am going through the released code, and I noticed that the augmentations are selected by an argument named augment whose default value is d.d.d. Does that mean this repo uses the default augmentation methods for both the strong and the weak augmentations of the unlabeled samples?

@david-berthelot
Collaborator

Hello, in FixMatch CTAugment is applied in addition to d.d.d. I've updated the README to explain this a bit more: https://github.com/google-research/fixmatch#flags
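
For readers trying to map the d.d.d flag onto the two augmentation levels, here is a minimal illustrative sketch in plain Python, not the repo's TensorFlow code. weak_augment, strong_augment, and sampled_ops are hypothetical names; the weak operations are assumed to be the flip and random shift/crop discussed later in this thread, and the strong path simply applies CTAugment-sampled operations on top of them.

```python
import numpy as np

def weak_augment(image, rng=np.random):
    """Hypothetical 'default' augmentation: random horizontal flip + small random shift."""
    if rng.rand() < 0.5:
        image = image[:, ::-1, :]                      # flip along width (H, W, C)
    pad = 4
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    h, w = image.shape[:2]
    top, left = rng.randint(0, 2 * pad + 1, size=2)
    return padded[top:top + h, left:left + w, :]

def strong_augment(image, sampled_ops, rng=np.random):
    """Hypothetical strong augmentation: the weak ops first, then the
    CTAugment-sampled operations (sampled_ops: a list of callables)."""
    image = weak_augment(image, rng)
    for op in sampled_ops:
        image = op(image)
    return image
```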

@CoinCheung
Author

So for FixMatch, CTAugment is only applied to the strongly augmented unlabeled samples (after the default augmentations), while the labeled samples and the weakly augmented unlabeled samples only use the default augmentations (flip and random crop). Is that correct?

@wangxu-scu

Hi,
thanks for the amazing work! I have the same question about how CTAugment is applied. The paper says that only weak augmentation is applied to the labeled data, but in the code it looks like CTAugment is also applied to the labeled samples as well as to the evaluation samples (https://github.com/google-research/fixmatch/blob/master/libml/augment.py#L316 and https://github.com/google-research/fixmatch/blob/master/libml/augment.py#L317). Am I misunderstanding something? Thanks again for the fascinating work, and looking forward to your reply.

@david-berthelot
Collaborator

The code you pointed to is used for fully supervised augmentation; for FixMatch it is overloaded here:
https://github.com/google-research/fixmatch/blob/master/fixmatch.py#L38

Sorry, the code is a bit complicated. Thanks for asking the question.
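
To make the word "overloaded" a bit more concrete, here is a hypothetical object-oriented illustration. These classes are not the repo's; weak_augment / strong_augment are the stand-ins from the sketch above, and cta is a placeholder for the learned CTAugment policy.

```python
class FSTrainer:
    """Fully supervised baseline: labeled images only get the default augmentation."""
    def augment_labeled(self, image):
        return {"weak": weak_augment(image)}

class FixMatchTrainer(FSTrainer):
    """FixMatch overrides the labeled-data augmentation to also emit CTAugment probes."""
    def __init__(self, cta):
        self.cta = cta                                 # stand-in CTAugment policy object

    def augment_labeled(self, image):
        out = super().augment_labeled(image)
        # The strongly augmented copy is a probe: it only tunes CTAugment,
        # it is never used for the supervised loss.
        out["probe"] = strong_augment(image, self.cta.sample_ops())
        return out
```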

@wangxu-scu

Thanks for your reply!
So the code at https://github.com/google-research/fixmatch/blob/master/fixmatch.py#L38 means that CTAugment is applied to the evaluation samples. Do I understand correctly?

@david-berthelot
Collaborator

No, it is applied to labeled images used as probes. I prefer not to use the word "evaluation", since it could be confused with the testing phase / model evaluation.

CTAugment takes a labeled image and strongly augments it; we call this a probe. We use the model's prediction on it to adjust CTAugment's internal parameters.
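
A rough sketch of that update, loosely following the CTAugment scheme described in the ReMixMatch paper: the magnitude bins used to build the probe are nudged toward how well the model still recognizes the probed image. The names, the data structure, and the decay value are assumptions for illustration, not the repo's code.

```python
import numpy as np

DECAY = 0.99  # assumed exponential-moving-average factor

def update_cta_weights(bin_weights, used_bins, probe_probs, label_onehot, decay=DECAY):
    """bin_weights : dict {(transform_name, bin_index): float}
    used_bins    : bins sampled to strongly augment this probe
    probe_probs  : model softmax output on the strongly augmented labeled image
    label_onehot : one-hot ground truth of that image"""
    # 1.0 means the model's prediction matches the label perfectly, 0.0 means maximally off.
    match = 1.0 - 0.5 * np.abs(probe_probs - label_onehot).sum()
    for key in used_bins:
        bin_weights[key] = decay * bin_weights[key] + (1.0 - decay) * match
    return bin_weights
```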

@wangxu-scu

Thanks very much!

@CoinCheung
Author

So there are four sorts of training samples in one iteration: the weakly augmented labeled samples and the strongly augmented unlabeled samples are used to train the model (compute the loss); the strongly augmented labeled samples are used to update the CTAugment weights; and the weakly augmented unlabeled samples are used to generate the pseudo-labels for computing the unlabeled loss. Am I correct about this?

@david-berthelot
Collaborator

Yes
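
Putting the confirmed picture together, here is a minimal NumPy sketch of one training iteration. It is illustrative only, not the repo's TensorFlow implementation: model is any callable returning class logits, weak_augment / strong_augment are the hypothetical stand-ins from the earlier sketches, cta is assumed to expose sample_ops() and an update() wrapping the bin-weight rule sketched above, and tau / lambda_u are the usual FixMatch confidence threshold and unlabeled-loss weight.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fixmatch_step(model, x_lab, y_lab_onehot, x_unlab, cta, tau=0.95, lambda_u=1.0):
    n_classes = y_lab_onehot.shape[1]
    eps = 1e-8

    # 1. Weakly augmented labeled batch -> supervised cross-entropy loss.
    x_lab_weak = np.stack([weak_augment(x) for x in x_lab])
    p_lab = softmax(model(x_lab_weak))
    loss_sup = -(y_lab_onehot * np.log(p_lab + eps)).sum(axis=1).mean()

    # 2. Weakly augmented unlabeled batch -> pseudo-labels (no gradient in the real code).
    x_unlab_weak = np.stack([weak_augment(x) for x in x_unlab])
    p_weak = softmax(model(x_unlab_weak))
    pseudo = np.eye(n_classes)[p_weak.argmax(axis=1)]
    mask = (p_weak.max(axis=1) >= tau).astype(np.float32)   # keep only confident predictions

    # 3. Strongly augmented unlabeled batch -> unlabeled (consistency) loss.
    x_unlab_strong = np.stack([strong_augment(x, cta.sample_ops()) for x in x_unlab])
    p_strong = softmax(model(x_unlab_strong))
    loss_unsup = (mask * -(pseudo * np.log(p_strong + eps)).sum(axis=1)).mean()

    # 4. Strongly augmented labeled probes -> only update the CTAugment weights.
    for x, y in zip(x_lab, y_lab_onehot):
        ops = cta.sample_ops()
        probe_probs = softmax(model(strong_augment(x, ops)[None]))[0]
        cta.update(ops, probe_probs, y)

    return loss_sup + lambda_u * loss_unsup
```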
