FastFCN has been supported by MMSegmentation. #106
Comments
Thanks for your implementation in MMSegmentation. Could you please share your FPS results? Which backbones did you experiment with?
More details can be found here. The backbone is ResNet-50; the decoder heads are PSPNet, EncNet, and DeepLabV3.
As shown in the paper, the advantage is more significant when using ResNet-101 as the backbone.
Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR. Based on the huge number of experiments I have done, JPU can get similar or better performance than dilation mode, even on Swin-Large or ConvNeXt-Large.
Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.
Sorry, I am not using mmseg at the moment. I will provide some results in the near future.
If you're interested, we can work on it together : ) |
OK, got it. The official Swin and ConvNeXt repos are both implemented on top of MMSegmentation.
Hi, FastFCN is now supported by MMSegmentation. We do find that using JPU with smaller feature maps from the backbone can achieve similar or higher performance than the original models with larger feature maps.
There is still some work left for us; for example, we have not found an obvious improvement in FPS in our implementation, so we will try to figure that out in the future.
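For readers unfamiliar with the module being discussed: the core JPU (Joint Pyramid Upsampling) idea from FastFCN, which replaces dilated backbone stages by jointly upsampling multi-scale features, can be sketched in PyTorch roughly as below. The class name, parameter names, and channel choices are illustrative assumptions, not the actual FastFCN or MMSegmentation implementation.

```python
# Minimal JPU sketch: project c3/c4/c5 to a common width, upsample all to the
# c3 resolution, concatenate, then apply parallel dilated convolutions.
# This is a simplified illustration, not the official FastFCN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JPUSketch(nn.Module):
    def __init__(self, in_channels=(512, 1024, 2048), width=512,
                 dilations=(1, 2, 4, 8)):
        super().__init__()
        # Reduce each backbone stage (e.g. ResNet c3/c4/c5) to `width` channels.
        self.reduce = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, width, 3, padding=1, bias=False),
                          nn.BatchNorm2d(width), nn.ReLU(inplace=True))
            for c in in_channels)
        # Parallel dilated convs over the concatenated, upsampled features.
        self.dilated = nn.ModuleList(
            nn.Sequential(nn.Conv2d(len(in_channels) * width, width, 3,
                                    padding=d, dilation=d, bias=False),
                          nn.BatchNorm2d(width), nn.ReLU(inplace=True))
            for d in dilations)

    def forward(self, c3, c4, c5):
        feats = [r(f) for r, f in zip(self.reduce, (c3, c4, c5))]
        h, w = feats[0].shape[2:]  # target: the highest (c3) resolution
        feats = [F.interpolate(f, size=(h, w), mode='bilinear',
                               align_corners=False) for f in feats]
        x = torch.cat(feats, dim=1)
        # Concatenate multi-dilation responses as the high-resolution output.
        return torch.cat([d(x) for d in self.dilated], dim=1)

# Feature maps at strides 8/16/32 for a 256x256 input:
jpu = JPUSketch().eval()
with torch.no_grad():
    out = jpu(torch.randn(1, 512, 32, 32),
              torch.randn(1, 1024, 16, 16),
              torch.randn(1, 2048, 8, 8))
print(out.shape)  # torch.Size([1, 2048, 32, 32])
```

The decoder head (PSPNet, EncNet, DeepLabV3, ...) then consumes this high-resolution output instead of features from a dilated backbone, which is why the backbone can keep its small, cheap feature maps.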
Anyway, thanks for your work, and we hope more people from the community will use FastFCN.
Best,