Fine-tuning details #45
Comments
Following this issue, I have the same question.
I have the same question too.
Hi @nakashima-kodai, @Zhu-haow, and @Christine620, thanks for your question. For CIFAR-10 and Cars we remove random erasing and stochastic depth; all other elements are the same as for training on ImageNet. Best, Hugo
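A minimal sketch of what that toggling might look like, assuming the timm library; the model name, dataset wrapper, and augmentation constants below are illustrative assumptions, not the authors' exact script:

```python
# Hedged sketch: ImageNet-style recipe with random erasing and stochastic
# depth disabled, per the reply above. Values here are assumptions.
import timm
from timm.data import create_transform
from torchvision.datasets import CIFAR10

# drop_path_rate=0.0 turns off stochastic depth (the ImageNet recipe uses 0.1).
model = timm.create_model(
    'deit_small_patch16_224', pretrained=True,
    num_classes=10, drop_path_rate=0.0,
)

# re_prob=0.0 turns off random erasing; the rest mirrors an ImageNet-style
# augmentation pipeline.
train_transform = create_transform(
    input_size=224,
    is_training=True,
    auto_augment='rand-m9-mstd0.5-inc1',  # RandAugment, as in the DeiT recipe
    re_prob=0.0,                          # random erasing disabled
    interpolation='bicubic',
)
train_set = CIFAR10(root='./data', train=True, download=True,
                    transform=train_transform)
```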
Hi @TouvronHugo,
Hi @nicolas-dufour, |
Hello @TouvronHugo, first of all, congratulations on your great work, and thanks for replying here. I have two questions regarding this topic:
Hi, a question about fine-tuning on CIFAR: how can one train on CIFAR with a 224 or 384 image size? What does a 224 or 384 image size mean here?
Hi @claverru,
I hope I have answered your questions.
Hi @forjiuzhou, a 224 or 384 image size means using images with a resolution of 224x224 or 384x384 pixels. On CIFAR it is necessary to interpolate the original 32x32 images up to that resolution. Best,
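A minimal sketch of that interpolation step, assuming torchvision (the transform pipeline is illustrative, not the authors' code):

```python
# CIFAR's 32x32 images are upsampled to the network's input resolution
# before the rest of the augmentation pipeline runs.
from torchvision import transforms
from torchvision.transforms import InterpolationMode

cifar_to_224 = transforms.Compose([
    transforms.Resize(224, interpolation=InterpolationMode.BICUBIC),  # 32x32 -> 224x224
    transforms.ToTensor(),
])
# For the 384 fine-tuning resolution, replace 224 with 384.
```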
@TouvronHugo Hi, I tried this script; could you help me verify? I found that a batch size of 768 causes OOM even with deit_small; I had to decrease it to a batch size of 256.
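One common workaround when the intended batch size doesn't fit in memory is gradient accumulation, which keeps the effective batch size at 768 while only 256 samples are held at a time. This is not part of the DeiT script; below is a minimal sketch with hypothetical stand-in objects:

```python
import torch
from torch import nn

# Hypothetical stand-ins for the real DeiT model, loss, optimizer, and loader.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(256, 10), torch.randint(0, 2, (256,)))
          for _ in range(6)]  # batch size 256 per step

accum_steps = 3  # 3 x 256 = 768 effective batch size
optimizer.zero_grad()
for step, (images, targets) in enumerate(loader):
    loss = criterion(model(images), targets) / accum_steps  # scale for accumulation
    loss.backward()                                         # gradients add up
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```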
@TouvronHugo Hi, will fine-tuning for more epochs (more than 30) give higher accuracy? Does a bigger model (deit-large) need to fine-tune for more epochs or with a bigger learning rate?
As there is no more activity on this issue, I will close it, but feel free to reopen it if needed.
@TouvronHugo Hi, I'm also wondering about the training recipe for CIFAR-100 and the Flowers dataset. Thanks for your help!
Hi @jizxny,
Hi,
I am trying to replicate the results of the paper for models fine-tuned on datasets such as CIFAR-10 and Stanford Cars. Could you give details about the hyper-parameters used (batch size, learning rate, etc.)?
Thanks.