
FastFCN has been supported by MMSegmentation. #106

Closed
MengzhangLI opened this issue Sep 30, 2021 · 9 comments
Comments

MengzhangLI commented Sep 30, 2021

Hi, FastFCN is now supported by MMSegmentation. We do find that using JPU with smaller feature maps from the backbone can achieve similar or higher performance than the original models with larger feature maps.

There is still work left for us to do; for example, we do not see an obvious FPS improvement in our implementation, so we will try to figure that out in the future.
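For anyone who wants to reproduce the FPS comparison, a minimal timing sketch could look like the following (the `measure_fps` helper is hypothetical, not the MMSegmentation benchmark script; it assumes the model forward pass is wrapped in a zero-argument callable):

```python
import time

def measure_fps(infer_fn, num_warmup=5, num_iters=50):
    """Rough FPS estimate for an inference callable.

    `infer_fn` runs one forward pass; warm-up iterations are
    excluded from the timing so one-time setup costs do not
    skew the result.
    """
    for _ in range(num_warmup):
        infer_fn()
    start = time.perf_counter()
    for _ in range(num_iters):
        infer_fn()
    elapsed = time.perf_counter() - start
    return num_iters / elapsed

# Example with a dummy workload standing in for a model forward pass:
fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
print(f"{fps:.1f} FPS")
```

For GPU models, remember that CUDA kernels launch asynchronously, so a device synchronization is needed inside `infer_fn` for the wall-clock numbers to be meaningful.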

Anyway, thanks for your work, and we hope more people from the community will use FastFCN.

Best,

wuhuikai (Owner) commented Oct 8, 2021

Thanks for your implementation in MMSegmentation. Could you please share your FPS results? Which backbones did you experiment with?

@MengzhangLI (Author)

More details can be found here.

The backbone is ResNet-50; the decode heads are from PSPNet, EncNet, and DeepLabV3.

wuhuikai (Owner) commented Oct 8, 2021

As shown in the paper, the advantage is more significant when using ResNet101 as the backbone.

edwardyehuang commented Mar 9, 2022

Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have done, JPU can achieve similar or better performance than dilation mode, even with Swin-Large or ConvNeXt-Large.

@MengzhangLI (Author)

> Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have done, JPU can achieve similar or better performance than dilation mode, even with Swin-Large or ConvNeXt-Large.

Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.

@edwardyehuang

> > Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have done, JPU can achieve similar or better performance than dilation mode, even with Swin-Large or ConvNeXt-Large.
>
> Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.

Sorry, I am not using mmseg at the moment. I will provide some results in the near future.

wuhuikai (Owner) commented Mar 9, 2022

> Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have done, JPU can achieve similar or better performance than dilation mode, even with Swin-Large or ConvNeXt-Large.

If you're interested, we can work on it together : )

@MengzhangLI (Author)

> > > Why not submit this to a conference? Although it is a 2019 paper, I think it is still worth a CVPR submission. Based on the large number of experiments I have done, JPU can achieve similar or better performance than dilation mode, even with Swin-Large or ConvNeXt-Large.
> >
> > Could you list your numerical results for JPU + Swin/ConvNeXt? It would be better if your experiments were based on the MMSegmentation codebase.
>
> Sorry, I am not using mmseg at the moment. I will provide some results in the near future.

OK, got it. The official Swin and ConvNeXt repos are both implemented on top of MMSegmentation.
