the mIoU in ade20k is not good like in the ade20k benchmark #16
I used model_final_45388b.pkl with maskformer_swin_large_IN21k_384_bs16_160k_res640.yaml.
We follow the Swin Transformer setting and train our benchmark model on both the train and val sets, but that alone would not account for a +9 mIoU difference. There might be something wrong with how you save your predictions.
@bowenc0221
model_final_45388b.pkl is the best model trained on the train set only; the model we used to evaluate on the ADE20K benchmark was trained on train + val, following Swin Transformer.
Also, you didn't use multi-scale inference.
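For reference, in detectron2-based repos such as MaskFormer, multi-scale test-time augmentation is typically enabled through the `TEST.AUG` config options. The exact scale list below is a hypothetical illustration, not the setting the authors used; check the repository's configs for the actual values:

```yaml
# Hypothetical config override (detectron2-style TEST.AUG keys).
# MIN_SIZES values here are illustrative, not the authors' setting.
TEST:
  AUG:
    ENABLED: True
    MIN_SIZES: [320, 480, 640, 800, 960, 1120]
    FLIP: True
```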
@bowenc0221
Thank you for your great work.
However, the mIoU I get when testing with maskformer_swin_large_IN21k_384_bs16_160k_res640.yaml and the released model is only 0.4022,
while your mIoU on the ADE20K benchmark is 0.4967, which is much better.
Could you please tell me why?
Thank you.