
[ Pretrained Models ] #3

Closed
IemProg opened this issue Dec 14, 2021 · 1 comment


IemProg commented Dec 14, 2021

Hi,

Thanks for the wonderful work. Could you please share links to the default models used for the fine-tuning experiments?

Specifically, are the pretrained models used for the fine-tuning experiments trained from scratch on ImageNet-1K?
- I ask because the official ViT models were pretrained on ImageNet-21K and then fine-tuned on ImageNet-1K.

Thanks,

yhlleo (Owner) commented Dec 17, 2021

Hi @IemProg,

In our experiments, the pretrained models are trained from scratch on ImageNet-1K.

For Swin Transformer, the authors also released models trained from scratch on ImageNet-1K. For T2T, only the ImageNet-1K dataset was used.
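For anyone reproducing this setup, the distinction the thread draws (ImageNet-1K-only pretraining vs. ImageNet-21K pretraining followed by ImageNet-1K fine-tuning) can be checked from a checkpoint's tag. This is a minimal sketch assuming timm-style naming (e.g. `vit_base_patch16_224.augreg_in1k`); the specific tags below are illustrative assumptions, not the exact checkpoints used in this repository.

```python
def pretrained_on_in1k_only(model_name: str) -> bool:
    """Heuristic check on a timm-style model tag: True if the weight tag
    mentions ImageNet-1K but not ImageNet-21K, suggesting the weights
    were trained from scratch on ImageNet-1K (no 21K pretraining stage)."""
    # The weight tag is the part after the first ".", if present.
    tag = model_name.split(".", 1)[1] if "." in model_name else ""
    return "in1k" in tag and "in21k" not in tag

# Illustrative tags in timm's naming convention (assumed, not verified
# against this repo's checkpoints):
assert pretrained_on_in1k_only("vit_base_patch16_224.augreg_in1k")
assert not pretrained_on_in1k_only("vit_base_patch16_224.augreg_in21k_ft_in1k")
```

A tag containing `in21k_ft_in1k` indicates the two-stage recipe the question asks about, whereas a plain `in1k` tag matches the from-scratch setting described in this answer.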

@yhlleo yhlleo closed this as completed Dec 21, 2021