
ImageNet pre-trained HAT-L models without any fine-tuning. #35

Closed
USTC-JialunPeng opened this issue Dec 19, 2022 · 4 comments

USTC-JialunPeng commented Dec 19, 2022

Hi, thank you for your great work! I want to test the performance of the ImageNet pre-trained HAT-L model without fine-tuning on DF2K. Pre-training HAT-L on ImageNet takes a long time (about 13 days on 8 V100 GPUs), so reproducing it from scratch is expensive. Could you please release the ImageNet pre-trained HAT-L models without any fine-tuning? Thanks!
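
For context, what I have in mind is simply loading the pre-trained checkpoint and running inference with it directly. A minimal sketch of that is below; the checkpoint file name, the `hat.archs.hat_arch` import path, and the HAT-L architecture arguments are my assumptions and should be checked against the repo's HAT-L option YAML:

```python
# Minimal sketch: evaluate an ImageNet pre-trained HAT-L checkpoint directly,
# assuming a BasicSR-style .pth file that stores weights under 'params_ema'
# or 'params'. The file name and architecture arguments below are assumptions.
import torch
from hat.archs.hat_arch import HAT  # assumed import path in this repo

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Assumed HAT-L x4 settings; verify against the HAT-L option YAML before use.
model = HAT(
    upscale=4,
    in_chans=3,
    window_size=16,
    embed_dim=180,
    depths=[6] * 12,
    num_heads=[6] * 12,
    mlp_ratio=2,
    upsampler='pixelshuffle',
)

# Hypothetical file name for the ImageNet pre-trained (no fine-tuning) weights.
ckpt = torch.load('HAT-L_SRx4_ImageNet-pretrain.pth', map_location='cpu')
state_dict = ckpt.get('params_ema', ckpt.get('params', ckpt))
model.load_state_dict(state_dict, strict=True)
model.eval().to(device)

# Dummy low-resolution input in [0, 1]; replace with a real LR image tensor.
lr = torch.rand(1, 3, 64, 64, device=device)
with torch.no_grad():
    sr = model(lr)
print(sr.shape)  # torch.Size([1, 3, 256, 256]) for 4x upscaling
```

Alternatively, I assume the same check could go through the repo's standard test script by pointing a test YAML's `pretrain_network_g` entry at the checkpoint.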

chxy95 (Member) commented Dec 21, 2022

@USTC-JialunPeng I'm not sure whether the files are still preserved. I will check within the next couple of days.

USTC-JialunPeng (Author)

@chxy95 Thanks for your help!

chxy95 (Member) commented Dec 29, 2022

@USTC-JialunPeng BaiduYun access code: vacr

USTC-JialunPeng (Author)

@chxy95 Thank you!
