The problem about FPS #6

Closed
MrLinNing opened this issue Feb 11, 2019 · 10 comments

@MrLinNing

Hi @ycszen
The FPS reported for BiSeNet in the paper's abstract is 105 on a 2048x1024 input image.

But I only get about 2 FPS with BiSeNet(Xception) and 9.5 FPS with BiSeNet(ResNet-18) on a Titan Xp.
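For reference, this is a minimal FPS-measurement sketch (not the repo's benchmark script; `model` and the input size are placeholders), with warm-up passes and `torch.cuda.synchronize()` so that the GPU work is actually what gets timed:

    import time
    import torch

    def measure_fps(model, size=(1, 3, 1024, 2048), iters=100, warmup=10):
        # `model` is a placeholder for whichever network is being tested
        model.eval().cuda()
        x = torch.randn(*size).cuda()
        with torch.no_grad():
            for _ in range(warmup):
                model(x)                     # warm-up so CUDA kernels are compiled/cached
            torch.cuda.synchronize()         # finish all queued GPU work before timing starts
            start = time.time()
            for _ in range(iters):
                model(x)
            torch.cuda.synchronize()         # wait for the last forward pass to finish
            return iters / (time.time() - start)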

@yu-changqian
Owner

Hi,
If you focus on the speed experiments, you should use the configurations with the .speed suffix, like the folder cityscapes.bisenet.R18.speed. As we elaborated in the paper, for the speed experiments we first resize the 1024x2048 image to a 768x1536 input.
Finally, I have to mention that the depthwise conv in PyTorch has a speed problem: it is slower than an ordinary convolution, which means the speed of our Xception model in this repo is not accurate. More details about the depthwise conv in PyTorch can be found here.
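For illustration, a sketch of the resize step described above (the exact preprocessing in the repo may differ):

    import torch
    import torch.nn.functional as F

    full_res = torch.randn(1, 3, 1024, 2048)          # original Cityscapes resolution
    speed_input = F.interpolate(full_res, size=(768, 1536),
                                mode='bilinear', align_corners=False)
    print(speed_input.shape)                           # torch.Size([1, 3, 768, 1536])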

@MrLinNing
Author

Thanks, @ycszen.
By the way, can you share the 960x720 CamVid dataset and usage instructions? I can only find the 480x360 resolution version (https://github.com/alexgkendall/SegNet-Tutorial/tree/master/CamVid).

@Reagan1311

So currently, are there any good solutions for measuring the FPS accurately? @ycszen
Looking forward to your reply!

@MrLinNing
Author

Hi, @ycszen
Why do you use the cityscapes.bisenet.R18.speed folder rather than cityscapes.bisenet.R18?
I found that the network.py files differ.
In cityscapes.bisenet.R18.speed/network.py, you drop two structures here:

        if is_training:
            heads = [BiSeNetHead(conv_channel, out_planes, 2,
                                 True, norm_layer),
                     BiSeNetHead(conv_channel, out_planes, 1,
                                 True, norm_layer),
                     BiSeNetHead(conv_channel * 2, out_planes, 1,
                                 False, norm_layer)]
        else:
            heads = [None, None,
                     BiSeNetHead(conv_channel * 2, out_planes, 1,
                                 False, norm_layer)]

I wonder why you do that?

@Reagan1311 Besides, when I test cityscapes.bisenet.R18.speed/network.py on a Titan Xp, the FPS is ~30 on 1024x2048 input and the model output shape is 1x19x128x256.
With cityscapes.bisenet.R18/network.py, the FPS is ~25 on 1024x2048 and the output shape is 1x19x1024x2048.
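For context, the 1x19x128x256 output is 1/8 of the full 1024x2048 resolution, so a bilinear upsample of the logits recovers a full-size prediction. A minimal sketch, not the repo's exact post-processing:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(1, 19, 128, 256)              # output shape of the .speed network
    full = F.interpolate(logits, size=(1024, 2048),
                         mode='bilinear', align_corners=False)
    pred = full.argmax(dim=1)                           # 1x1024x2048 label map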

Looking forward to your reply!

@JingliangGao

JingliangGao commented Mar 5, 2019

Hi,
I also have the same question and look forward to your reply. @ycszen

@Reagan1311

Reagan1311 commented Mar 5, 2019

That's still a large gap from the FPS numbers in the original paper. @MrLinNing

The paper reports (Titan Xp, 2048x1024 input):
Xception39: 105.8 FPS
ResNet-18: 65.5 FPS

How do these papers obtain accurate FPS measurements? Really curious...

@yu-changqian
Owner

Thanks for all the attention.
This repo is just a PyTorch reimplementation of our proposed method. When I submitted the paper, we implemented the method in our own framework, in which the depthwise conv is correctly optimized.
However, in PyTorch the depthwise conv is not well optimized and is even slower than an ordinary conv, so there is a speed gap between this repo and our paper.
Maybe official PyTorch will support an optimized depthwise conv in the future.
Or I can implement the optimized depthwise conv in PyTorch if I have time ...
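A small sketch one could use to reproduce the depthwise-vs-ordinary conv timing gap mentioned above (the layer sizes are illustrative, not taken from BiSeNet):

    import time
    import torch
    import torch.nn as nn

    x = torch.randn(1, 256, 96, 192).cuda()
    regular = nn.Conv2d(256, 256, 3, padding=1).cuda()
    depthwise = nn.Conv2d(256, 256, 3, padding=1, groups=256).cuda()  # one filter per channel

    def time_layer(layer, iters=100):
        with torch.no_grad():
            for _ in range(10):
                layer(x)                      # warm-up
            torch.cuda.synchronize()
            start = time.time()
            for _ in range(iters):
                layer(x)
            torch.cuda.synchronize()
            return (time.time() - start) / iters

    print('regular  :', time_layer(regular))
    print('depthwise:', time_layer(depthwise))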

@yu-changqian
Owner

Quoting @MrLinNing above: "Why do you use the cityscapes.bisenet.R18.speed folder rather than cityscapes.bisenet.R18? ... I wonder why you do that?"

We do this to accelerate the inference speed.
Because both structures are auxiliary modules designed only to improve training, we can drop them in the inference phase.
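A simplified sketch of this auxiliary-head pattern (the class and names below are illustrative, not the repo's actual code): the extra heads exist only for training-time supervision, so the inference graph keeps just the main head.

    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, channels, num_classes, is_training):
            super().__init__()
            self.backbone = nn.Conv2d(3, channels, 3, padding=1)
            self.main_head = nn.Conv2d(channels, num_classes, 1)
            # auxiliary heads are built only for training; they add extra supervision, not predictions
            self.aux_heads = (
                nn.ModuleList([nn.Conv2d(channels, num_classes, 1) for _ in range(2)])
                if is_training else None
            )

        def forward(self, x):
            feat = self.backbone(x)
            out = self.main_head(feat)
            if self.aux_heads is not None:
                aux = [head(feat) for head in self.aux_heads]
                return out, aux               # training: main output plus auxiliary outputs for extra losses
            return out                        # inference: only the main output, so it runs faster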

@chenwydj

chenwydj commented Sep 9, 2019

Quoting @MrLinNing's question above about the 960x720 CamVid dataset:

Hi! Were you able to get the 960x720 CamVid dataset? Thanks!
@ycszen Could you comment on the resolution of the CamVid dataset? Thank you!
