Hi, I trained the model with batch size 1 and learning rate 0.0006, but only got mAP = 85.79%. How can I reach the performance reported in your paper? Do I need to change any other configs?
If you use batchsize=4, you don't need to fix (freeze) the BatchNorm layers, since that batch size is big enough. In my experience, the performance gap between batchsize=4 and batchsize=5 is within 1% (if I remember correctly).
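Freezing BatchNorm (keeping the layers in eval mode and stopping updates to their affine parameters) is the usual workaround when training with very small batches. A minimal sketch of this, assuming a standard PyTorch model (the tiny `backbone` below is only for illustration):

```python
import torch
import torch.nn as nn

def freeze_batchnorm(model: nn.Module) -> nn.Module:
    """Freeze all BatchNorm layers in `model`: keep them in eval mode
    (use running statistics instead of batch statistics) and stop
    gradient updates to their weight/bias parameters."""
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.eval()                   # use running stats, don't update them
            for p in module.parameters():
                p.requires_grad = False     # freeze affine weight/bias
    return model

# Usage with a hypothetical backbone:
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
backbone.train()                            # puts BN back in training mode...
freeze_batchnorm(backbone)                  # ...so freeze it again after .train()
bn = backbone[1]
print(bn.training)                                       # False
print(any(p.requires_grad for p in bn.parameters()))     # False
```

Note that calling `model.train()` re-enables BatchNorm's training mode, so the freeze must be applied (or re-applied) after each such call, e.g. at the start of every epoch.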
Thanks, I finally made it work. After changing the PyTorch version, I got the desired result. I think the mismatch was because I had modified the code to `loss_oim = F.cross_entropy(projected, label, ignore_index=5554)` and was using a newer PyTorch version.
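For reference, `ignore_index` in `F.cross_entropy` excludes the matching targets from both the loss and its gradient. A minimal sketch of the line quoted above, with random logits and hypothetical tensor sizes standing in for the real `projected` similarities:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
projected = torch.randn(4, 5555)           # 4 proposals, 5555 classes (hypothetical sizes)
label = torch.tensor([3, 5554, 10, 5554])  # targets equal to 5554 are ignored

# Proposals labeled 5554 (e.g. unlabeled identities) contribute nothing to the loss.
loss_oim = F.cross_entropy(projected, label, ignore_index=5554)

# Equivalent to averaging the loss over only the non-ignored rows:
keep = label != 5554
manual = F.cross_entropy(projected[keep], label[keep])
print(torch.allclose(loss_oim, manual))    # True
```

With the default `reduction='mean'`, the loss is averaged over the non-ignored targets only, which is why the two values above match.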