Hi Dr. Jia,

I noticed that in tune_fgvc/tune_vtab.py the learning rate is scaled as `lr = lr / 256 * cfg.DATA.BATCH_SIZE` when choosing the best learning rate, while in train.py this operation is not applied (lr is kept as [5, 10, 50, etc.]). I wonder what the reason for this is. Are the reported results based on the unscaled learning rate? (The lr values are given in your fgvc Excel file.)
Best,
Hi, thanks for the question. We use tune*.py for all of our experiments, so the scaled lr is set in the tune_vtab.py file. The reported results are therefore all based on the scaled learning rate.

train.py does not apply the scaling, since it is only used as the main gateway for training. If you want to use it directly, please remember to do the scaling manually.
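For anyone running train.py directly, a minimal sketch of the manual scaling described above (the `scale_lr` helper and the example values are illustrative, not part of the repo; `cfg.DATA.BATCH_SIZE` is the config field mentioned in the question):

```python
# Linear learning-rate scaling relative to a reference batch size of 256,
# matching the expression lr = lr / 256 * cfg.DATA.BATCH_SIZE from tune_vtab.py.
BASE_BATCH_SIZE = 256

def scale_lr(base_lr: float, batch_size: int) -> float:
    """Scale a tuned learning rate linearly with the actual batch size."""
    return base_lr / BASE_BATCH_SIZE * batch_size

# Example: a tuned lr of 5.0 run with batch size 64.
print(scale_lr(5.0, 64))  # 5.0 / 256 * 64 = 1.25
```

Apply this to the lr value from the fgvc Excel file before passing it to train.py, so the effective learning rate matches what tune_vtab.py would have used.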