
BiSeNetv2 implementation difference leads to strange results #48

Status: Open
wasupandceacar opened this issue Nov 1, 2022 · 0 comments
wasupandceacar commented Nov 1, 2022

Hello, I trained BiSeNetv2 on my custom dataset and found that the probability maps have a grid-like look, which is very unnatural.

After further observation, I found that you use PixelShuffle in the last upsampling layer, which causes the grid artifacts (it seems you also use it in v1). In the BiSeNetv2 paper, however, the authors use simple bilinear interpolation to upsample the results, which produces smooth probability maps in my experiments.

So what is the rationale for using PixelShuffle to upsample the final results? I also don't see an obvious improvement in validation score from using it.
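To illustrate why PixelShuffle can produce a grid pattern, here is a minimal sketch. It re-implements in NumPy the rearrangement that PyTorch's `nn.PixelShuffle` performs (the NumPy function and the toy input are mine, for illustration only): each of the r×r sub-pixel positions in the output is filled from a different input channel, so any systematic per-channel bias repeats with period r.

```python
import numpy as np

def pixel_shuffle(x, r):
    # NumPy re-implementation of the rearrangement torch.nn.PixelShuffle
    # performs: (C*r*r, H, W) -> (C, H*r, W*r).
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Each of the r*r sub-pixel positions comes from a *different* channel.
# Give each channel a slightly different constant value (standing in for
# the per-channel biases a conv head will inevitably learn):
r = 2
x = np.stack([np.full((2, 2), i, dtype=float) for i in range(r * r)])
y = pixel_shuffle(x, r)
print(y[0])
# [[0. 1. 0. 1.]
#  [2. 3. 2. 3.]
#  [0. 1. 0. 1.]
#  [2. 3. 2. 3.]]
```

The per-channel offsets tile the output with an r×r pattern, which is exactly a grid artifact. Bilinear interpolation, by contrast, computes each output pixel as a weighted average of neighbouring input pixels, so no channel-periodic pattern can appear.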

By the way, there are also some differences between your implementation and the paper in the Gather-and-Expansion layer. I cannot tell whether this affects the training results significantly.
