how to get the reported mAP? #31

Closed
Wuzimeng opened this issue Jan 1, 2023 · 4 comments

Comments

@Wuzimeng

Wuzimeng commented Jan 1, 2023

Hi, I trained the model with bs 1 and lr 0.0006, but only got mAP = 85.79%. How can I reach the performance you reported in your paper? Do I need to change any other configs?

@serend1p1ty (Owner)

Setting batchsize=5 can reproduce the mAP reported in the paper.

If your GPU only supports batchsize=1, try fixing BatchNorm during training.

Example:

import torch.nn as nn

def fix_bn(m):
    # Stop BatchNorm layers from tracking running statistics, which are unreliable at batchsize=1.
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
        m.track_running_stats = False

model.apply(fix_bn)
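
[Editor's note] A closely related sketch, not from this thread, freezes the BatchNorm layers entirely by switching them to eval mode, so normalization uses the stored running statistics rather than per-batch statistics; whether this matches the paper's setting is an assumption, and freeze_bn is a hypothetical helper name.

import torch.nn as nn

def freeze_bn(m):
    # Put BatchNorm layers into eval mode so they normalize with the stored
    # running statistics instead of statistics of the (tiny) current batch.
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
        m.eval()

# Re-apply after every model.train() call, since train() switches BN back to training mode.
model.train()
model.apply(freeze_bn)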

@Wuzimeng (Author)

Thanks a lot. I tried this modification, but I still only got mAP around 85% with bs = 1/2/3/4. Any other suggestions?

@serend1p1ty (Owner)

If you use batchsize=4, you needn't fix BatchNorm, as the batch size is big enough. In my experience, the performance gap between batchsize=4 and batchsize=5 is within 1% (if I remember correctly).
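
[Editor's note] If the batch size is changed, a common rule of thumb is to scale the learning rate linearly with it. The sketch below is my own illustration; the base values are placeholders inferred from the lr 0.0006 / bs 1 setting mentioned above, not the repo's actual config.

# Hypothetical reference values for illustration; check the repo's config for the real base settings.
base_bs, base_lr = 5, 0.003          # assumed reference setting
new_bs = 4
new_lr = base_lr * new_bs / base_bs  # linear scaling rule -> 0.0024
print(new_lr)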

@Wuzimeng (Author)

Wuzimeng commented Feb 3, 2023

Thanks, I finally made it. After changing the PyTorch version, I got the desired result. I think the mismatch was probably because I had modified the code to loss_oim = F.cross_entropy(projected, label, ignore_index=5554) and was running it with a newer PyTorch version.
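
[Editor's note] For context, ignore_index in F.cross_entropy simply excludes targets equal to that value from the loss. A minimal self-contained sketch follows; that 5554 marks unlabeled identities, and the toy class count, are assumptions based only on the snippet above.

import torch
import torch.nn.functional as F

# Toy example: 3 samples, 5555 classes; the last sample carries the
# assumed "unlabeled" id 5554 and therefore contributes nothing to the loss.
projected = torch.randn(3, 5555)
label = torch.tensor([10, 42, 5554])
loss_oim = F.cross_entropy(projected, label, ignore_index=5554)
print(loss_oim)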
