It seems that some hyper-parameter settings in the code (train_script.sh) are inconsistent with those in the paper (code vs. paper, respectively): learning rate (0.027 vs. 0.08), loss weight \lambda_{2} (0.05 vs. 0.03), batch size (900 vs. 1024), milestones (48 & 64 vs. 30 & 40), number of epochs (50 vs. 80), and lr decay (0.2 vs. 0.1).
Of course these numbers are adjustable, but they matter for the experiments. Could you explain how to set these hyper-parameters with the mobilenetv2 backbone to reach performance comparable to yours?
The released checkpoint was trained with the settings described in the paper. The default hyper-parameters in the code are values we tried recently while exploring further parameter tuning. I'll change the defaults back to those in the paper.
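For reference, the paper's reported schedule (as quoted above: base lr 0.08, decay factor 0.1 at milestones 30 and 40, 80 epochs, \lambda_{2} = 0.03) corresponds to a standard step-decay ("MultiStepLR"-style) schedule. A minimal sketch of that schedule in plain Python, assuming those values are indeed the paper's settings, with the function name being purely illustrative:

```python
def lr_at_epoch(epoch, base_lr=0.08, milestones=(30, 40), gamma=0.1):
    """Step-decay schedule: multiply base_lr by gamma once per milestone
    that has been passed. Defaults follow the paper's reported values;
    the auxiliary-loss weight lambda_2 = 0.03 is applied separately to
    the loss, not to the learning rate.
    """
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * (gamma ** passed)

# lr stays at 0.08 until epoch 30, drops to ~0.008 until epoch 40,
# then ~0.0008 for the remainder of the 80 epochs.
```

In PyTorch this would typically be expressed as `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 40], gamma=0.1)` with the optimizer's initial lr set to 0.08.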