
Training strategy on Imagenet #18

Closed
Ga-Lee opened this issue Jun 16, 2022 · 2 comments
Ga-Lee commented Jun 16, 2022

Hello! Thank you for your wonderful work!
It is said that AS-MLP uses the same training strategy as Swin Transformer; I want to know to what extent this work uses the same strategy. For example, does AS-MLP use Mixup and CutMix?
I saw there is code for mixup in the build_loader function that requires config.AUG.MIXUP, but I didn't find config.AUG in the config files for the models.
Could you share the detailed training settings for your models?

dongzelian (Collaborator) commented

@Ga-Lee Thanks so much!
(1) Does AS-MLP use Mixup and CutMix? Yes.
(2) Where is config.AUG? config.AUG.MIXUP is set here:

_C.AUG.MIXUP = 0.8  # mixup alpha; mixup is enabled when > 0

(3) Detailed training settings: you can refer to https://github.com/svip-lab/AS-MLP/blob/main/data/build.py, https://github.com/svip-lab/AS-MLP/blob/main/config.py, and Sec. A.2 of the Swin Transformer paper (https://arxiv.org/pdf/2103.14030.pdf).
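For context, here is a minimal sketch of how the AUG defaults and the mixup setup typically look in a Swin-style codebase (which AS-MLP is based on). Field names follow the Swin Transformer config conventions, so treat them as assumptions and verify against this repo's config.py and data/build.py:

```python
# Sketch (not the repo's exact code): AUG defaults in a Swin-style config.py
# and the mixup construction that build_loader performs from them.
from yacs.config import CfgNode as CN
from timm.data import Mixup

_C = CN()
_C.MODEL = CN()
_C.MODEL.NUM_CLASSES = 1000
_C.MODEL.LABEL_SMOOTHING = 0.1

_C.AUG = CN()
_C.AUG.MIXUP = 0.8              # mixup alpha; mixup enabled when > 0
_C.AUG.CUTMIX = 1.0             # cutmix alpha; cutmix enabled when > 0
_C.AUG.CUTMIX_MINMAX = None     # cutmix min/max ratio; overrides alpha if set
_C.AUG.MIXUP_PROB = 1.0         # probability of applying mixup or cutmix per batch
_C.AUG.MIXUP_SWITCH_PROB = 0.5  # probability of switching to cutmix when both are on
_C.AUG.MIXUP_MODE = 'batch'     # apply params per 'batch', 'pair', or 'elem'

def build_mixup_fn(config):
    """Mirrors the mixup setup done inside build_loader."""
    mixup_active = (config.AUG.MIXUP > 0 or config.AUG.CUTMIX > 0.
                    or config.AUG.CUTMIX_MINMAX is not None)
    if not mixup_active:
        return None
    return Mixup(
        mixup_alpha=config.AUG.MIXUP, cutmix_alpha=config.AUG.CUTMIX,
        cutmix_minmax=config.AUG.CUTMIX_MINMAX, prob=config.AUG.MIXUP_PROB,
        switch_prob=config.AUG.MIXUP_SWITCH_PROB, mode=config.AUG.MIXUP_MODE,
        label_smoothing=config.MODEL.LABEL_SMOOTHING,
        num_classes=config.MODEL.NUM_CLASSES)
```

In the training loop, the returned `mixup_fn` is applied to each batch as `samples, targets = mixup_fn(samples, targets)`, with SoftTargetCrossEntropy as the loss since the targets become soft labels.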


Ga-Lee commented Jun 16, 2022

@dongzelian
Thank you for your kind reply!
No more questions; I will close the issue. :)

Ga-Lee closed this as completed Jun 16, 2022